Would dgl.mean_nodes(batch_graph, ''hidden") break computational graph?


#1

Hello there,

I am trying to summarize each graph by simply taking the average of its node hidden states: I compute the hidden states with apply_nodes() and store them on the batched graph object.

However, during backpropagation the gradients for the node encoders seem to be zero. May I know whether dgl.mean_nodes() breaks the computational graph?

By the way, is there a good way to summarize the individual graphs in the batched graph object?
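
Roughly, my setup looks like this (a minimal sketch with toy graphs; the feature sizes, field names, and encoder are just illustrative, assuming a recent DGL release with the PyTorch backend):

```python
import torch
import dgl

# two toy graphs batched together (structure is illustrative only)
g1 = dgl.graph(([0, 1, 2], [1, 2, 0]))
g2 = dgl.graph(([0, 1], [1, 0]))
bg = dgl.batch([g1, g2])

encoder = torch.nn.Linear(16, 32)                   # stand-in for my node encoder
bg.ndata['feat'] = torch.randn(bg.num_nodes(), 16)

# compute node hidden states and store them on the batched graph
bg.apply_nodes(lambda nodes: {'hidden': encoder(nodes.data['feat'])})

# per-graph readout: mean of 'hidden' over the nodes of each graph
graph_repr = dgl.mean_nodes(bg, 'hidden')           # shape: (num_graphs, 32)
```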
Thanks!


#2

Hi Qiaochen, could you please provide more information like:

  • DGL Version (0.1.X)
  • Background Library & Version (e.g. PyTorch 0.4.1)
  • OS (e.g. Windows 10)
  • How did you install DGL (conda, pip, source)
  • Python version

and attach code that can be used to reproduce the problem? Thanks.


#3

It should not break the computational graph. Could you provide us with more information?
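
As a quick sanity check, gradients do flow through dgl.mean_nodes(). A minimal sketch (assuming a recent DGL release with the PyTorch backend):

```python
import torch
import dgl

g = dgl.graph(([0, 1, 2], [1, 2, 0]))       # toy graph
lin = torch.nn.Linear(4, 4)                  # stand-in for a node encoder
g.ndata['hidden'] = lin(torch.randn(g.num_nodes(), 4))

# readout and backprop; a non-zero gradient means the graph is intact
dgl.mean_nodes(g, 'hidden').sum().backward()
print(lin.weight.grad.abs().sum())
```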


#4

Hi Mufe and Allen,

Many thanks for your quick replies. The problem is not related to DGL; it was caused by corrupted input features, which contained a NaN value in one dimension.
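
For anyone who runs into the same symptom, a quick NaN check on the inputs catches it (features below is just a stand-in for your own input feature tensor):

```python
import torch

features = torch.randn(10, 16)        # replace with your own input feature tensor
features[3, 5] = float('nan')         # simulate the kind of corruption I had
print(torch.isnan(features).any())    # tensor(True) -> the inputs are corrupted
```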

I can confirm that the gradients are backpropagated correctly.
Thanks for your attention!


#5

Great to hear that. Enjoy DGL! :grinning: