About node embedding initialization example

The example is:
import dgl
import torch
import torch.nn as nn
from dgl.nn import GraphConv
from torch.optim import Adam

num_nodes = 5
emb_size = 5
g = dgl.rand_graph(num_nodes=num_nodes, num_edges=25)
# Learnable node features: one embedding row per node.
embed = nn.Embedding(num_nodes, emb_size)
model = GraphConv(emb_size, 1)
# Optimize the GNN weights and the embedding table jointly.
optimizer = Adam(list(model.parameters()) + list(embed.parameters()), lr=1e-3)
labels = torch.zeros((num_nodes, 1))
criteria = nn.BCEWithLogitsLoss()
num_epochs = 5
for _ in range(num_epochs):
    # The embedding matrix itself serves as the input feature tensor.
    pred = model(g, embed.weight)
    loss = criteria(pred, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The question is: is this approach necessarily better than other initialization methods? Personally, using nn.Embedding directly seems universally appropriate to me.

Using nn.Embedding is a common option. Others use one-hot features derived from node degrees, labels, or other dataset-intrinsic properties.
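For instance, here is a minimal sketch of one-hot degree features (the toy graph is a stand-in for your dataset's graph; the helper names are illustrative, not from the original post):

import dgl
import torch
import torch.nn.functional as F

# Toy graph; in practice g would come from your dataset.
g = dgl.rand_graph(num_nodes=5, num_edges=25)
# One-hot encode each node's in-degree as a fixed structural feature.
deg = g.in_degrees()
feat = F.one_hot(deg, num_classes=int(deg.max()) + 1).float()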

Also, some have tried randomly initialized features.
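In that variant the features are drawn once and kept fixed, unlike the learnable nn.Embedding table. A one-line sketch (num_nodes and emb_size as in the example above):

import torch

num_nodes, emb_size = 5, 5
# Fixed random features: sampled once and excluded from the optimizer.
feat = torch.randn(num_nodes, emb_size)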

Thanks. Actually, I want to know more about feature initialization on heterogeneous graphs. Is there a practical example?
@VoVAllen

Hi,

Please take a look at dgl/examples/pytorch/metapath2vec at master · dmlc/dgl · GitHub. Random-walk models might be suitable in your case if you don't have initial node features.
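As a rough illustration of the random-walk idea (a sketch on a hypothetical toy heterograph, not the actual metapath2vec example code), dgl.sampling.random_walk can sample metapath-guided walks whose traces would then feed a skip-gram model:

import dgl
import torch

# Hypothetical toy heterograph with both edge directions so that a
# metapath can return to the starting node type.
g = dgl.heterograph({
    ('user', 'clicks', 'item'): (torch.tensor([0, 1, 2]), torch.tensor([0, 1, 0])),
    ('item', 'clicked-by', 'user'): (torch.tensor([0, 1, 0]), torch.tensor([0, 1, 2])),
})
# Metapath-guided walks starting from user nodes 0 and 1; metapath2vec
# feeds such traces to a skip-gram model to learn node embeddings
# without any initial features.
traces, types = dgl.sampling.random_walk(
    g, torch.tensor([0, 1]), metapath=['clicks', 'clicked-by'])
print(traces)  # node IDs along each walk; -1 marks an early stop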

Oh, I see. That is a practical solution. However, I'm still wondering whether it's feasible to use nn.Embedding for feature initialization on a heterogeneous graph. I'd appreciate your opinion.

I'm not sure whether it will give you good performance (accuracy), but it is viable.
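For completeness, a minimal sketch of that idea, assuming a hypothetical toy heterograph with user and item node types: one nn.Embedding table per node type supplies the per-type feature dict that heterogeneous modules such as dgl.nn.HeteroGraphConv expect.

import dgl
import torch
import torch.nn as nn

# Hypothetical toy heterograph: 3 users clicking 2 items.
g = dgl.heterograph({
    ('user', 'clicks', 'item'): (torch.tensor([0, 1, 2]), torch.tensor([0, 1, 0])),
})
emb_size = 5
# One learnable embedding table per node type.
embeds = nn.ModuleDict({
    ntype: nn.Embedding(g.num_nodes(ntype), emb_size) for ntype in g.ntypes
})
# Per-type input features, in the dict form heterogeneous GNN modules expect.
features = {ntype: embeds[ntype].weight for ntype in g.ntypes}

As in the homogeneous example above, the embedding parameters would be passed to the optimizer alongside the model's parameters so that the features are learned jointly.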
