Distributed Training

Is DGL's distributed training the same as what is described in this paper: https://arxiv.org/pdf/2010.05337.pdf (DistDGL: Distributed Graph Neural Network Training for Billion-Scale Graphs)?

Yes, that’s correct.
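For reference, here is a minimal sketch of the trainer-side loop in DGL's distributed API, which implements the DistDGL design from that paper. It assumes a recent DGL version, an already-partitioned graph, and an `ip_config.txt` listing the machines; `'graph_name'` and the `feat`/`train_mask` field names are placeholders for your own data:

```python
import dgl
import torch as th

# Connect this trainer to the DistDGL servers listed in ip_config.txt
# (the file name and graph name below are placeholders).
dgl.distributed.initialize('ip_config.txt')
th.distributed.init_process_group(backend='gloo')

# DistGraph is a view over the graph partitions held by the servers.
g = dgl.distributed.DistGraph('graph_name')

# Split the training nodes so each trainer owns a disjoint subset.
train_nids = dgl.distributed.node_split(g.ndata['train_mask'])

# Sample up to 10 and 25 neighbors for the two GNN layers.
sampler = dgl.dataloading.NeighborSampler([10, 25])
loader = dgl.dataloading.DistNodeDataLoader(
    g, train_nids, sampler, batch_size=1024, shuffle=True)

for input_nodes, output_nodes, blocks in loader:
    # Pull the mini-batch input features from the servers, then run
    # the usual forward/backward pass (model code omitted).
    batch_feats = g.ndata['feat'][input_nodes]
    # loss = model(blocks, batch_feats) ...
```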


Regarding this paper, I want to ask about the training-time comparison between Euler and DGL.

From the paper, I understand that DGL samples edges while Euler 2.0 samples nodes directly. So DGL samples edges and then fetches the endpoint nodes; the resulting node set is not the same size as Euler's sampled node set (it may be smaller than Euler's). An illustration of this follows below.
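To make the node-count point concrete, here is a small standalone sketch (on a random toy graph, not DistDGL) showing that sampling a fixed number of edges and collecting their unique endpoints yields a node set whose size varies per sample:

```python
import dgl
import torch

g = dgl.rand_graph(1000, 20000)  # toy graph: 1000 nodes, 20000 edges

for _ in range(3):
    # Sample 512 edges, then fetch the nodes they touch.
    eids = torch.randint(0, g.num_edges(), (512,))
    src, dst = g.find_edges(eids)
    nodes = torch.unique(torch.cat([src, dst]))
    # At most 1024 nodes, usually fewer, and different each time.
    print(len(nodes))
```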

Given that, is it really faster than Euler?

Also, when training with DGL, the per-step training time may fluctuate: one step takes 1 s while another takes 3 s, because the number of nodes sampled is not stable from step to step.
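One way to check this on your own setup is to log the input-node count and the elapsed time per step. The sketch below does this with single-machine neighbor sampling (assuming the DGL >= 0.8 `DataLoader` API and a toy random graph); the same logging works inside a distributed training loop:

```python
import time
import torch
import dgl
from dgl.dataloading import DataLoader, NeighborSampler

g = dgl.rand_graph(10_000, 200_000)   # toy stand-in for a real graph
train_nids = torch.arange(1_000)

sampler = NeighborSampler([10, 25])   # 2-hop neighbor sampling
loader = DataLoader(g, train_nids, sampler, batch_size=256, shuffle=True)

last = time.time()
for step, (input_nodes, output_nodes, blocks) in enumerate(loader):
    # The input-node count changes per batch, which is one source of
    # the step-to-step time variance described above.
    now = time.time()
    print(f"step {step}: {len(input_nodes):6d} input nodes, "
          f"{now - last:.3f}s")
    last = now
```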
