Dealing with a large input graph via GraphSAINT in a GCN with many message-passing layers?

I am running into problems because my input graph is too large to fit into a single GPU's memory (batch size = 1). I read some Q&As on the web but couldn't find an answer that fits my case. My network has 7 message-passing layers and no graph pooling, so in my opinion some of the approaches in the thread linked above are not suitable for my case.

To ask my questions clearly:

  1. Do graph sampling methods (GraphSAINT, GraphSAGE, Cluster-GCN, etc.) still perform well on a network like the one described above, which keeps embedding all nodes at every message-passing layer?
  2. As [zheng-da] said in the URL above, is this possible in DGL/MXNet? (I'm using PyTorch.)

Thank you for reading.
Sincerely.

There is an ongoing PR for GraphSAINT: https://github.com/dmlc/dgl/pull/2792 . It should be merged soon. If you are looking for examples of training GNNs on multiple GPUs using sampling, you can find them here: dgl/examples/pytorch/graphsage at master · dmlc/dgl · GitHub
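To illustrate why subgraph samplers like GraphSAINT suit a deep (7-layer) GNN: they sample one node set and take its induced subgraph, so every message-passing layer operates on the same small node set, with no neighborhood explosion as depth grows. Below is a minimal, dependency-free sketch of GraphSAINT-style node sampling on an adjacency-list graph; the function name and graph representation are my own for illustration, not DGL's actual API (in DGL you would use its built-in samplers and dataloaders instead).

```python
import random

def saint_node_sample(adj, num_roots, seed=0):
    """Sketch of GraphSAINT-style node sampling (illustrative, not DGL's API).

    Sample `num_roots` nodes uniformly, then return the induced subgraph:
    only edges whose BOTH endpoints were sampled survive. All GNN layers in
    the minibatch then run on this fixed node set, regardless of depth.
    """
    rng = random.Random(seed)
    nodes = sorted(rng.sample(sorted(adj), min(num_roots, len(adj))))
    keep = set(nodes)
    # Induced subgraph: drop edges leading outside the sampled node set.
    return {u: [v for v in adj[u] if v in keep] for u in nodes}

# Toy graph: 6 nodes arranged in a cycle.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
sub = saint_node_sample(adj, num_roots=4)
# Every node and every edge endpoint stays inside the sampled set.
assert set(sub) <= set(adj)
assert all(v in sub for nbrs in sub.values() for v in nbrs)
```

Contrast this with GraphSAGE-style neighbor sampling, where each extra layer multiplies the number of nodes touched by the fanout; with 7 layers that expansion is exactly what a subgraph sampler avoids.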

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.