How to implement automatic differentiation of message passing functions?

I want to implement a GNN operator in PyTorch. In my view, a GCN layer can be seen as a linear layer followed by a message passing function. When writing a custom PyTorch operator, we usually inherit from torch.autograd.Function and implement the forward and backward methods. Where is this part in the DGL source code? Or is there a derivation somewhere?
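For concreteness, here is roughly what that decomposition looks like for sum aggregation, written as a custom torch.autograd.Function. This is a minimal sketch, not DGL's actual implementation; the class name and the sparse-adjacency representation are my own assumptions for illustration:

```python
import torch

class SumMessagePassing(torch.autograd.Function):
    """Hypothetical example: h_i' = sum_{j in N(i)} h_j via sparse matmul."""

    @staticmethod
    def forward(ctx, adj, feat):
        # adj: sparse (N, N) adjacency matrix; feat: dense (N, D) node features.
        ctx.save_for_backward(adj)
        return torch.sparse.mm(adj, feat)

    @staticmethod
    def backward(ctx, grad_out):
        # d(A @ X)/dX gives A^T @ grad_out: gradients flow backwards
        # along reversed edges. No gradient is computed for the adjacency.
        (adj,) = ctx.saved_tensors
        return None, torch.sparse.mm(adj.t(), grad_out)
```

A GCN layer would then be roughly the linear layer applied to the features followed by `SumMessagePassing.apply(adj, ...)`, matching the decomposition above.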

All of DGL's message passing functions, and the GNN modules built on them, support autograd. The codebase has many layers, so it is hard to pinpoint the exact location, but if you are interested you can start with https://github.com/dmlc/dgl/blob/master/python/dgl/backend/pytorch/sparse.py#L162
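To see that gradients flow through the built-in message passing with no extra work, a small check like the following should suffice (this assumes DGL's public `update_all` / `dgl.function` API; names such as `fn.copy_u` may differ in older releases, where it was `fn.copy_src`):

```python
import torch
import dgl
import dgl.function as fn

g = dgl.graph(([0, 1, 2], [1, 2, 0]))   # a 3-node directed cycle
h = torch.randn(3, 4, requires_grad=True)
g.ndata['h'] = h

# copy_u/sum lowers to the fused g-SpMM kernel whose autograd
# wrapper lives in the sparse.py file linked above.
g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'h_new'))
g.ndata['h_new'].sum().backward()
print(h.grad)   # gradients w.r.t. the input node features
```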

Thanks! If I want to implement my own message passing function, can I just inherit from it? I want to implement it in C++. I saw the FFI functions in the DGL docs; the docs say we need to compile and build the library. What does that mean? Do I need to compile DGL from source? Can you give me some suggestions?

If your new message passing function is implemented in Python, then yes, you can just override that interface and no compilation is needed.
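For instance, a pure-Python user-defined message/reduce pair works with autograd out of the box, since the function bodies are ordinary PyTorch ops. A sketch, with illustrative function names:

```python
import torch
import dgl

def message_func(edges):
    # edges.src['h'] gathers source-node features for every edge.
    return {'m': edges.src['h']}

def reduce_func(nodes):
    # nodes.mailbox['m'] has shape (num_nodes, num_in_edges, feat_dim).
    return {'h_new': nodes.mailbox['m'].sum(dim=1)}

g = dgl.graph(([0, 1, 2], [1, 2, 0]))
g.ndata['h'] = torch.randn(3, 4, requires_grad=True)
g.update_all(message_func, reduce_func)
g.ndata['h_new'].sum().backward()   # autograd just works
```

As far as I know, Python UDFs fall back to degree bucketing rather than the fused kernels, so the built-in `dgl.function` variants are usually faster when they fit your use case.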
