Example multi-gpu implementations and tutorial

Hi,
I am new to DGL and want to get started with some example/reference implementations of multi-gpu training. I had a couple of questions on multi-gpu reference code and tutorial:

  1. For the pytorch backend, I can see some multi-gpu code for graphsage. Does any other model have similar multi-gpu reference implementations for pytorch backend? Do other backends (MXNet/TensorFlow) have more reference implementations?

  2. Also, is this online tutorial https://docs.dgl.ai/tutorials/models/5_giant_graph/2_giant.html#sphx-glr-tutorials-models-5-giant-graph-2-giant-py about multi-GPU training or large-scale CPU training? Where can I find a tutorial for multi-GPU training?

Thanks for your pointers!

Hi,

There’s no other multi-GPU model implemented for now, but it should be straightforward to adapt the GraphSAGE example to other GNN algorithms.
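To illustrate what "adapting to other GNN algorithms" amounts to: in the multi-GPU example, the model is wrapped in PyTorch's `DistributedDataParallel`, and swapping architectures only changes the module you wrap; the process-group setup and training loop stay the same. Below is a minimal, hedged sketch of that per-process worker pattern. `ToyGNNStandIn` and `run_worker` are hypothetical names, the model is a plain MLP standing in for a GNN layer stack, and the `gloo` backend with `world_size=1` is used so the sketch runs on CPU without DGL installed.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Hypothetical stand-in for your GNN: swapping in GraphSAGE, GAT, etc.
# only changes this module, not the distributed plumbing around it.
class ToyGNNStandIn(torch.nn.Module):
    def __init__(self, in_feats, hidden, out_feats):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(in_feats, hidden),
            torch.nn.ReLU(),
            torch.nn.Linear(hidden, out_feats),
        )

    def forward(self, x):
        return self.net(x)

def run_worker(rank, world_size):
    # One process per GPU in the real setup; "gloo" on CPU here so the
    # sketch is runnable anywhere. With GPUs you would use "nccl" and
    # move the model to torch.device(f"cuda:{rank}").
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # DDP averages gradients across ranks on every backward pass.
    model = DDP(ToyGNNStandIn(16, 32, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)

    # Random tensors standing in for minibatches from a neighbor sampler.
    x = torch.randn(8, 16)
    y = torch.randint(0, 2, (8,))
    loss = None
    for _ in range(5):
        loss = torch.nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()  # gradient all-reduce happens here
        opt.step()

    dist.destroy_process_group()
    return loss.item()

final_loss = run_worker(rank=0, world_size=1)
print(f"final loss: {final_loss:.4f}")
```

In an actual multi-GPU run you would launch one worker per device, e.g. `torch.multiprocessing.spawn(run_worker, args=(num_gpus,), nprocs=num_gpus)` under an `if __name__ == "__main__":` guard, which is the pattern the GraphSAGE example follows.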

MXNet and TensorFlow are not supported yet, but we do plan to support them after the 0.5 release.

The tutorial you posted is outdated. You can start with our new tutorial at https://docs.dgl.ai/en/latest/guide/minibatch.html