How to set num_samplers in DistDGL?

I am training GraphSAGE on the ogbn-products dataset across two machines.

The epoch times in the following four experiments do not seem to differ much.

How can I reproduce a result similar to Fig. 10 in the paper *DistDGL: Distributed Graph Neural Network Training for Billion-Scale Graphs*?
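For context, here is roughly how my trainer side looks. This is a minimal sketch modeled on DGL's distributed GraphSAGE example (assuming DGL >= 0.8); `ip_config.txt` and `data/ogbn-products.json` are placeholders for my actual files:

```python
# Minimal sketch of the trainer-side setup, modeled on DGL's distributed
# GraphSAGE example (assumes DGL >= 0.8; file names are placeholders).
import dgl
import torch as th

# launch.py sets up the environment that both of these calls rely on,
# including how many sampler processes each trainer spawns.
dgl.distributed.initialize("ip_config.txt")
th.distributed.init_process_group(backend="gloo")

g = dgl.distributed.DistGraph("ogbn-products",
                              part_config="data/ogbn-products.json")
train_nid = dgl.distributed.node_split(g.ndata["train_mask"],
                                       g.get_partition_book())

# Fan-outs per GraphSAGE layer; the sampler processes do this work.
sampler = dgl.dataloading.NeighborSampler([25, 10])
dataloader = dgl.dataloading.DistNodeDataLoader(
    g, train_nid, sampler,
    batch_size=1000, shuffle=True, drop_last=False)

for epoch in range(30):
    for input_nodes, seeds, blocks in dataloader:
        pass  # forward/backward of the GraphSAGE model goes here
```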


The measured time varies within a certain range across runs.
Were the 6.1 and 5.5 numbers obtained by averaging?
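If it helps, one way to make such numbers comparable is to time every epoch, drop the first warm-up epoch, and report mean and standard deviation. A purely illustrative sketch:

```python
# Illustrative only: time each epoch, drop the warm-up epoch, report mean/std.
import statistics
import time

epoch_times = []
for epoch in range(10):
    start = time.perf_counter()
    # ... run one training epoch here ...
    epoch_times.append(time.perf_counter() - start)

steady = epoch_times[1:]  # the first epoch usually includes warm-up costs
print(f"epoch time: {statistics.mean(steady):.2f}s "
      f"+/- {statistics.stdev(steady):.2f}s")
```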

Setting num_samplers to 1 gives some improvement, but it doesn't seem to help as much with METIS partitioning.
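For reference, the partitioning strategy is chosen when the graph is partitioned, before launch. A hedged sketch using `dgl.distributed.partition_graph`, where `part_method` switches between METIS and random partitioning (paths are placeholders):

```python
# Sketch of partitioning ogbn-products before distributed training;
# part_method="metis" vs. "random" is the comparison discussed above.
import dgl
from ogb.nodeproppred import DglNodePropPredDataset

g, _ = DglNodePropPredDataset(name="ogbn-products")[0]
# (the full example also attaches labels/train masks to g.ndata first)

dgl.distributed.partition_graph(
    g,
    graph_name="ogbn-products",
    num_parts=2,            # one partition per machine
    out_path="data/",       # writes data/ogbn-products.json and the parts
    part_method="metis",    # or "random" for the baseline
)
```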

`launch.py --num_samplers x` sets the number of sampler processes for each trainer. As for the detailed implementation in the paper you referred to, please contact the authors.
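For example, here is a hedged sketch of a launch invocation built in Python; the flag names follow the distributed GraphSAGE example in the DGL repo, and all paths are placeholders for your setup:

```python
# Sketch of invoking DGL's tools/launch.py; paths are placeholders.
import subprocess

train_cmd = ("python3 train_dist.py --graph_name ogbn-products "
             "--ip_config ip_config.txt --num_epochs 30")

subprocess.run([
    "python3", "tools/launch.py",
    "--workspace", "/home/user/workspace",       # placeholder
    "--num_trainers", "4",   # trainer processes per machine
    "--num_samplers", "1",   # sampler processes per trainer
    "--num_servers", "1",    # server processes per machine
    "--part_config", "data/ogbn-products.json",
    "--ip_config", "ip_config.txt",
    train_cmd,
], check=True)
```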
