Hi,
I am new to DGL and want to get started with some example/reference implementations of multi-GPU training. I have a couple of questions about the multi-GPU reference code and tutorials:
- For the PyTorch backend, I can see some multi-GPU code for GraphSAGE. Do any other models have similar multi-GPU reference implementations for the PyTorch backend? Do the other backends (MXNet/TensorFlow) have more reference implementations? (I have put a rough sketch of the pattern I mean below this list.)
- Also, is this online tutorial https://docs.dgl.ai/tutorials/models/5_giant_graph/2_giant.html#sphx-glr-tutorials-models-5-giant-graph-2-giant-py about multi-GPU training or about large-scale CPU training? Where can I find a tutorial specifically for multi-GPU training?
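For context, here is a minimal sketch of the kind of multi-GPU setup I have in mind, based on my understanding of PyTorch `DistributedDataParallel` rather than on DGL's actual GraphSAGE example. The `ToySAGE` model, the random toy graph, the `127.0.0.1:12345` address, the node-slicing scheme, and all hyperparameters are placeholders of my own, and neighbor sampling / dataloading is omitted entirely:

```python
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
import torch.nn.functional as F

import dgl
import dgl.nn as dglnn


class ToySAGE(nn.Module):
    """Tiny two-layer GraphSAGE model, only here to illustrate the loop."""

    def __init__(self, in_feats, hid_feats, n_classes):
        super().__init__()
        self.conv1 = dglnn.SAGEConv(in_feats, hid_feats, "mean")
        self.conv2 = dglnn.SAGEConv(hid_feats, n_classes, "mean")

    def forward(self, g, x):
        h = F.relu(self.conv1(g, x))
        return self.conv2(g, h)


def run(rank, world_size):
    # One process per GPU; NCCL averages gradients across ranks.
    dist.init_process_group(
        backend="nccl",
        init_method="tcp://127.0.0.1:12345",  # placeholder address/port
        world_size=world_size,
        rank=rank,
    )
    device = torch.device(f"cuda:{rank}")
    torch.cuda.set_device(device)

    # Placeholder data: a random graph with random features and labels.
    g = dgl.add_self_loop(dgl.rand_graph(1000, 5000)).to(device)
    feats = torch.randn(g.num_nodes(), 16, device=device)
    labels = torch.randint(0, 4, (g.num_nodes(),), device=device)

    # Each rank computes the loss on its own slice of the nodes; I am doing a
    # full-graph forward pass here instead of neighbor sampling to keep the
    # sketch short.
    local_nids = torch.arange(g.num_nodes(), device=device)[rank::world_size]

    model = ToySAGE(16, 32, 4).to(device)
    model = nn.parallel.DistributedDataParallel(model, device_ids=[rank])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for epoch in range(5):
        logits = model(g, feats)
        loss = F.cross_entropy(logits[local_nids], labels[local_nids])
        opt.zero_grad()
        loss.backward()
        opt.step()
        if rank == 0:
            print(f"epoch {epoch}, loss {loss.item():.4f}")

    dist.destroy_process_group()


if __name__ == "__main__":
    n_gpus = torch.cuda.device_count()
    mp.spawn(run, args=(n_gpus,), nprocs=n_gpus)
```

Is this roughly the pattern the GraphSAGE multi-GPU example follows, or does it do something different (e.g. a dedicated sampler/dataloader per rank)?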
Thanks for your pointers!