Tutorial on distributed DGL training jobs

Hi folks, I am trying to follow the tutorial here: 7.4 Tools for launching distributed training/inference — DGL 0.6.1 documentation, but I can't find how to generate this ‘data/ogb-product.json’ file. In fact, all I want is hash-based partitioning; can I skip this file?

I've also read section 7.1 Preprocessing, though that whole section talks about the METIS tool. I assume a hash-based option would be easier to set up?

Thanks a lot.

DGL's graph partitioning uses METIS to assign nodes to partitions. However, you can use your own way of assigning nodes to partitions; right now that requires modifying the code a little bit. You need to replace this line with your own node assignments.
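For illustration, a minimal sketch of what such a custom assignment could look like. The function name and the simple modulo "hash" are my own, not part of DGL; the point is just that you produce one partition ID per node (in real code, as a tensor) in place of the METIS result:

```python
def hash_partition_assignment(num_nodes, num_parts):
    # Assign each node to a partition by hashing its ID.
    # A plain modulo stands in for the hash here; any
    # deterministic function of the node ID works the same way.
    return [nid % num_parts for nid in range(num_nodes)]

# Example: 10 nodes split across 3 partitions.
parts = hash_partition_assignment(10, 3)
print(parts)  # [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
```

In actual DGL code this list would be a node-to-partition tensor substituted where the METIS assignment is computed.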

Thanks Zheng! Let me look into that. 🙂

I also have an alternative idea, not sure if it works. It goes like this: when loading the graph, every partition reads from the whole data set, but applies a hash function and only loads the nodes that map to the given partition. Is there any flaw in doing this?

The reason for doing this is that the tutorial mentions the partition_graph script requires loading everything into memory on a SINGLE machine, and the data we want to try might be too large for a single machine (a few TBs).
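The idea above can be sketched as follows. This is a hypothetical toy version (function name and in-memory edge list are mine; real data would be streamed from disk), where every worker scans the full edge list but keeps only what hashes to its own partition:

```python
def load_my_partition(edges, part_id, num_parts):
    # Every worker scans the same full edge list but keeps
    # only the edges whose source node hashes to its partition.
    owned = lambda nid: nid % num_parts == part_id
    return [(u, v) for (u, v) in edges if owned(u)]

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(load_my_partition(edges, 0, 2))  # [(0, 1), (2, 3)]
print(load_my_partition(edges, 1, 2))  # [(1, 2), (3, 0)]
```

Each worker only materializes its own slice, so no single machine has to hold the whole graph.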

Do you mean having one file that contains the entire graph, then reading the right nodes for each partition from that file?
Partitioning a graph for DGL requires two things: 1) physically partitioning the graph, and 2) creating a lookup table for each node/edge so that trainers can go to the right machine to access the data. What you described is fine. You need to create the lookup table yourself.
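One convenient property of hash partitioning is that the lookup table in 2) can be implicit. A hypothetical sketch (names are mine, not DGL API):

```python
NUM_PARTS = 4

def owner_partition(nid, num_parts=NUM_PARTS):
    # With hash partitioning, any trainer can recompute a
    # node's owner from its ID alone, so no per-node lookup
    # table needs to be stored or shipped to the trainers.
    return nid % num_parts

# A trainer asking which partition holds node 42:
print(owner_partition(42))  # 2
```

With METIS-style assignments, by contrast, the mapping is arbitrary and must be stored explicitly.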

I guess the reason you want to develop your own partitioning is that our current graph partition script cannot handle your graph. If so, you can use our distributed partitioning solution, which uses ParMETIS.


Ah! I see. Thanks, that is very helpful.

I had a wrong understanding because the Chinese version says parallel partitioning is NOT supported. I made a one-line quick update to the Chinese page, in case other folks fall into a similar misunderstanding (Update distributed-preprocessing.rst by HuangLED · Pull Request #2850 · dmlc/dgl · GitHub). Hope it makes sense.

Let me know if a volunteer is wanted to update the Chinese page with the ParMETIS material.


It'll be great if you can help us update the Chinese doc for the distributed partitioning.


I am evaluating doing these two things. Could you please also point me to where the system currently 1) creates the lookup table, and 2) how trainers use that lookup table? I did some tracing but am not quite sure I am looking at the right place. Thank you!

Hi Zhengda,

I have read through the page you mentioned and was about to get some hands-on experience, then update the Chinese version of the page.

One thing I don't get is that the page you shared mentions using a "cluster of machines" to partition a large graph, yet there is no instruction on how to set up such a cluster for ParMETIS. What did I miss? Is there a separate, dedicated page for this?

I also read the code of ParMETIS and the convert_partition.py tool, and still don't find any mention of a cluster of machines. At this point I know how DistDGL manages a cluster, though I don't see how ip_config.txt is used in any part of ParMETIS or convert_partition.py.

Here is how the lookup table is created: dgl/partition.py at master · dmlc/dgl · GitHub
Here is how the lookup table is used: dgl/graph_partition_book.py at master · dmlc/dgl · GitHub

When partitioning a graph, we shuffle node/edge IDs so that all node/edge IDs within a partition fall in a contiguous range.
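That contiguous-range property is what keeps the lookup table small. A rough sketch of the idea (the boundary numbers and function names are illustrative, not DGL's actual partition-book code): instead of one entry per node, you store one range boundary per partition and binary-search it.

```python
import bisect

# After ID shuffling, each partition owns a contiguous ID range,
# so the lookup table reduces to one boundary per partition.
# Illustrative boundaries: partition 0 owns IDs [0, 1000),
# partition 1 owns [1000, 2500), partition 2 owns [2500, 4000).
range_ends = [1000, 2500, 4000]

def partition_of(shuffled_id):
    # Binary search over the boundaries finds the owner partition.
    return bisect.bisect_right(range_ends, shuffled_id)

print(partition_of(0))     # 0
print(partition_of(1000))  # 1
print(partition_of(3999))  # 2
```

This is why the ID shuffle matters: without contiguous ranges, the table would need an entry per node/edge.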
