How to make the bridge between Graph and GPU kernel

Hi, I am a graduate student majoring in CS and interested in SpMM and SpMV kernels. Recently I read the DGL paper and some related materials, and I am trying to understand how a GNN layer built with DGL ultimately runs on the GPU.

As far as I understand, DGL abstracts the implementation of a model as a message-passing mechanism. Specifically, there is an `update_all` API along with some built-in message and reduce functions provided by DGL, and my question is how these APIs work. From the DGL blog, I gathered that DGL can fuse a built-in message function with a built-in reduce function, and then implement the fused kernel with SpMV or some other kernel. Is my understanding right? Could you give me some pointers on how this process works in the source code?

Lastly, if the user supplies his own message or reduce function, is there still any matrix computation involved?
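To make the question concrete, here is a minimal sketch (plain NumPy, not DGL's actual implementation) of what `update_all` with a copy-source message function and a sum reduce function computes: each edge carries its source node's feature as the message, and each node sums the messages on its incoming edges.

```python
import numpy as np

# Toy graph: edge list (src, dst) and 2-dim node features.
edges = [(0, 1), (0, 2), (1, 2), (2, 0)]
num_nodes = 3
H = np.arange(num_nodes * 2, dtype=float).reshape(num_nodes, 2)

# Semantics of update_all(copy_u message, sum reduce):
# every edge sends the source feature; every node sums what it receives.
H_new = np.zeros_like(H)
for src, dst in edges:
    H_new[dst] += H[src]

print(H_new)
```

This per-edge loop is the naive semantics; the whole point of kernel fusion is to avoid materializing the per-edge messages.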
Best regards


Your understanding is generally correct. The core message-passing logic is here: dgl/ at master · dmlc/dgl · GitHub. You can see that if a user provides their own message or reduce function as a native Python function, it will not invoke any fused kernels such as gsddmm or gspmm.
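For intuition on why the built-in copy-source/sum pair can be fused: that combination is exactly a sparse-matrix-dense-matrix product with the adjacency matrix. A minimal sketch with SciPy (illustrative only, not DGL's gspmm kernel):

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy graph: A[dst, src] = 1 for each edge, so that A @ H
# sums the source features into each destination node.
edges = [(0, 1), (0, 2), (1, 2), (2, 0)]
num_nodes = 3
rows = [dst for src, dst in edges]
cols = [src for src, dst in edges]
A = csr_matrix((np.ones(len(edges)), (rows, cols)),
               shape=(num_nodes, num_nodes))

H = np.arange(num_nodes * 2, dtype=float).reshape(num_nodes, 2)

# One SpMM replaces the per-edge message/reduce loop entirely,
# so the intermediate per-edge messages are never materialized.
H_new = A @ H
print(H_new)
```

With a user-defined Python message or reduce function, no such matrix identity is available, which is why DGL falls back to materializing messages instead of calling a fused kernel.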

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.