Would dgl.mean_nodes(batch_graph, 'hidden') break the computational graph?

Hello there,

I am trying to summarize the graph by simply taking the average of the node hidden states after apply_nodes() and storing the hidden states in the batched graph object.

However, the gradients with respect to the node encoders seem to be zero during backpropagation. May I know whether dgl.mean_nodes() would break the computational graph, please?

BTW, is there a good way to summarize the individual graphs in the batch_graph object? A sketch of my current setup is below.
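For context, here is a minimal sketch of what I am doing. The toy graphs and the linear layer are just placeholders for my real encoder, and it is written against a recent DGL release with the PyTorch backend (graph construction APIs differ across versions):

```python
import dgl
import torch
import torch.nn as nn

# two small toy graphs, batched together
g1 = dgl.graph(([0, 1], [1, 2]), num_nodes=3)
g2 = dgl.graph(([0, 1, 2], [1, 2, 3]), num_nodes=4)
bg = dgl.batch([g1, g2])
bg.ndata['x'] = torch.randn(bg.num_nodes(), 5)

encoder = nn.Linear(5, 8)  # placeholder node encoder

# store the encoded states in the batched graph, then read out per-graph means
bg.apply_nodes(lambda nodes: {'hidden': encoder(nodes.data['x'])})
graph_summaries = dgl.mean_nodes(bg, 'hidden')  # shape (2, 8), one row per graph in the batch
```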
Thanks!

Hi Qiaochen, could you please provide more information like:

  • DGL Version (0.1.X)
  • Background Library & Version (e.g. PyTorch 0.4.1)
  • OS (e.g. Windows 10)
  • How did you install DGL (conda, pip, source)
  • Python version

and attach the code which can be used to reproduce the problem? Thanks.

It should not break the computational graph. Could you provide us with more information?
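For reference, a quick sanity check along these lines (just a sketch, written against a recent DGL release with the PyTorch backend and a placeholder linear encoder) should show non-zero gradients reaching the encoder through dgl.mean_nodes:

```python
import dgl
import torch
import torch.nn as nn

g = dgl.graph(([0, 1], [1, 2]), num_nodes=3)
g.ndata['x'] = torch.randn(3, 4)

encoder = nn.Linear(4, 6)
g.apply_nodes(lambda nodes: {'hidden': encoder(nodes.data['x'])})

# readout -> scalar loss -> backward
loss = dgl.mean_nodes(g, 'hidden').sum()
loss.backward()
print(encoder.weight.grad.abs().sum())  # expected to be non-zero
```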

Hi Mufe and Allen,

Many thanks for your instant replies. The problem is not related to DGL; it was caused by corrupted input features that contained a NaN value in one dimension.

I can confirm that the gradients are now backpropagated correctly.
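For anyone hitting the same symptom, a quick check like this on the raw feature tensor (just a sketch; the tensor below stands in for my own input features) would have caught it early:

```python
import torch

feats = torch.randn(10, 5)   # stand-in for the raw node features
feats[3, 2] = float('nan')   # simulate the corrupted entry

# corrupted entries propagate through the encoder and break the gradients
if torch.isnan(feats).any():
    bad_rows = torch.isnan(feats).any(dim=1).nonzero()
    print(f"NaN found in rows: {bad_rows.flatten().tolist()}")
```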
Thanks for your attention.


Great to learn that. Enjoy DGL! :grinning:

I have run into the same issue: the gradient of my encoder is zero. There is a GCN after the encoder, and my input features do not contain NaN values.

@VolantBoy Could you please provide more context, such as a code snippet that can reproduce the issue?