Will input features of nodes be updated during training?

I am a newbie here. I built a heterograph for link prediction training. The graph does not have features, so I initialize features like this:
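(Roughly like this — a minimal sketch of my initialization, assuming one learnable nn.Embedding per node type; the node types 'user'/'item', the edge type 'clicks', and the feature name 'feat' are placeholders:)

import dgl
import torch
import torch.nn as nn

emb_size = 16
# toy heterograph; my real graph has no input features
g = dgl.heterograph({
    ('user', 'clicks', 'item'): (torch.tensor([0, 1]), torch.tensor([1, 2])),
})
# one learnable embedding table per node type
embed = nn.ModuleDict({
    ntype: nn.Embedding(g.num_nodes(ntype), emb_size)
    for ntype in g.ntypes
})
# store the weights as node features on the heterograph
for ntype in g.ntypes:
    g.nodes[ntype].data['feat'] = embed[ntype].weight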


My question is: will the input features be updated during training?

I printed g.ndata after training, but the values remain the same.
How do I learn node features from scratch?
I use a dataloader for minibatch training. I know the RGCN will output the node hidden states; should I copy the hidden states back to the original graph, and if so, how do I copy this tensor to the original graph?

@mufeili
Should I add the embed to the optimizer?
I realize that the embed is added to the graph, but the optimizer only steps the gradients of the model parameters.

Nice catch! You are right that you need to pass embed.parameters() to the optimizer. I’ve updated the FAQ.
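As for copying hidden states back to the original graph: you can write them into ndata using the output node IDs the dataloader yields, detaching first. A sketch, assuming DGL 0.8+'s dgl.dataloading.DataLoader and a single-layer SAGEConv on a homogeneous graph (with RGCN on a heterograph the idea is the same, done per node type; the feature name 'h' is a placeholder):

import dgl
import torch
import torch.nn as nn

num_nodes, emb_size, hid_size = 100, 16, 8
g = dgl.rand_graph(num_nodes, 500)
embed = nn.Embedding(num_nodes, emb_size)
model = dgl.nn.SAGEConv(emb_size, hid_size, 'mean')

sampler = dgl.dataloading.NeighborSampler([10])
dataloader = dgl.dataloading.DataLoader(
    g, torch.arange(num_nodes), sampler, batch_size=32)

# buffer on the full graph for the learned hidden states
g.ndata['h'] = torch.zeros(num_nodes, hid_size)
for input_nodes, output_nodes, blocks in dataloader:
    h = model(blocks[0], embed(input_nodes))
    # detach: store the values only, not the autograd history
    g.ndata['h'][output_nodes] = h.detach()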

@mufeili
There is another problem with doing so. I passed the embed to the optimizer and trained for two epochs, but when I want to continue training, it throws an error:

RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time

I googled this problem and found these threads:
“RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time” while using custom loss function - autograd - PyTorch Forums
python - Pytorch - RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed - Stack Overflow
The only workaround I found is to rerun the embedding initialization step and train from scratch.
I am really confused.

This error usually means you are backpropagating through a tensor that is still attached to a previous iteration's computation graph, for example hidden states that were written into g.ndata without detaching. As long as you read embed.weight fresh in every iteration, the problem does not occur. The example below works.

import dgl
import torch
import torch.nn as nn
from dgl.nn import GraphConv
from torch.optim import Adam

num_nodes = 5
emb_size = 5
g = dgl.rand_graph(num_nodes=num_nodes, num_edges=25)
# learnable node features: an embedding table updated by the optimizer
embed = nn.Embedding(num_nodes, emb_size)
model = GraphConv(emb_size, 1)
# pass both the model parameters and the embedding parameters to the optimizer
optimizer = Adam(list(model.parameters()) + list(embed.parameters()), lr=1e-3)
labels = torch.zeros((num_nodes, 1))
criteria = nn.BCEWithLogitsLoss()
num_epochs = 5
for _ in range(num_epochs):
    # read embed.weight fresh each iteration so no stale autograd graph is reused
    pred = model(g, embed.weight)
    loss = criteria(pred, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
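If you want to stop and resume training later, save and restore the embedding alongside the model instead of rerunning the initialization. A sketch using standard PyTorch checkpointing; the file name is a placeholder:

# save everything needed to resume
torch.save({
    'model': model.state_dict(),
    'embed': embed.state_dict(),
    'optimizer': optimizer.state_dict(),
}, 'checkpoint.pt')

# load, then continue the training loop above
ckpt = torch.load('checkpoint.pt')
model.load_state_dict(ckpt['model'])
embed.load_state_dict(ckpt['embed'])
optimizer.load_state_dict(ckpt['optimizer'])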
