Gradient correctness

Hi,
I have a question regarding gradient correctness: how can we check that the gradient is computed correctly and that the computational graph was not broken, especially when we apply some functions (like selection or slicing) to the node-level representations?
Thanks.

Gradient correctness is usually verified numerically via the mathematical definition of the derivative, i.e. by comparing the analytical gradient against the finite difference of the loss divided by a small perturbation of the input. For the mathematical details, see http://deeplearning.stanford.edu/tutorial/supervised/DebuggingGradientChecking/
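PyTorch already ships such a finite-difference checker. Below is a minimal sketch using `torch.autograd.gradcheck`; the function `my_readout` is a hypothetical stand-in for whatever selection/readout operation you want to verify:

```python
import torch

def my_readout(h):
    # Example op: select the first two rows (nodes) and sum their features.
    return h[:2].sum()

# gradcheck compares analytical gradients against finite differences,
# so double precision is needed for reliable results.
h = torch.randn(5, 8, dtype=torch.double, requires_grad=True)
print(torch.autograd.gradcheck(my_readout, (h,), eps=1e-6, atol=1e-4))  # prints True if gradients match
```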

For our own operators, though, gradient correctness is ensured via unit tests. Could you explain why you would like to check gradient correctness?


Thanks for your answer.
I asked this question because I am using RGCN layers as the encoder of a heterogeneous graph. As a result I get the node-level representations; based on those node encodings I apply readout functions and selection operations, and then feed the results to a fully connected layer. I would like to know whether the readout and selection operations could affect the correctness of the gradients.
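One quick sanity check you can run yourself (a sketch only, assuming a DGL heterograph `g`, an RGCN-style `encoder`, a node type `'user'`, and a feature field `'feat'`, none of which come from the thread) is to confirm that the sliced/readout result still carries a `grad_fn` and that `backward()` populates gradients on the encoder parameters:

```python
import torch

h = encoder(g, g.ndata['feat'])             # node-level representations from the RGCN encoder
selected = h['user'][torch.tensor([0, 2])]  # selection/slicing keeps the computational graph
readout = selected.mean(dim=0)              # a simple mean readout
loss = readout.sum()

assert readout.grad_fn is not None          # graph was not broken by slicing/readout
loss.backward()
for name, p in encoder.named_parameters():
    print(name, p.grad is not None and torch.isfinite(p.grad).all().item())
```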

In general, if you are only using PyTorch or DGL operations and not developing your own autograd function with your own backward computation, you don’t need to worry about gradient correctness. Could you describe the problem you encountered (e.g., the gradient is NaN or an unreasonable number, or an error is thrown)?
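If you do run into NaN or exploding gradients, a hedged sketch of how to track them down is shown below; `model`, `loss_fn`, and `batch` are placeholders for your own training objects, not part of any DGL API:

```python
import torch

torch.autograd.set_detect_anomaly(True)  # raises an error at the op that produced NaN/inf in backward

loss = loss_fn(model(batch))
loss.backward()

# Inspect which parameters received non-finite gradients.
for name, p in model.named_parameters():
    if p.grad is not None and not torch.isfinite(p.grad).all():
        print(f"non-finite gradient in {name}")
```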