I am running into a problem because my input graph is too large to fit into a single GPU's memory (even with batch size 1). I have read some Q&As on the web, but I couldn't find an answer that fits my case. My network has 7 message passing layers and contains no graph pooling. In my opinion, some of the approaches in the linked thread are not suitable for my case.
To state my questions clearly:
- Do graph sampling methods (GraphSAINT, GraphSAGE, Cluster-GCN, etc.) also perform well on a network like the one described above, which keeps embedding all nodes at every message passing layer?
- Like [zheng-da] said in the above URL, is this possible only in DGL with MXNet? (I'm using PyTorch.)
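To illustrate the concern in the first question: with 7 message passing layers, the receptive field of a single node is its 7-hop neighborhood, which on a well-connected graph quickly covers almost the entire graph, so layer-wise neighbor sampling may end up loading nearly everything anyway. Below is a hypothetical sketch (a random graph standing in for the real input, node 0 as an arbitrary seed) that measures how fast the k-hop neighborhood grows:

```python
from collections import deque
import random

def k_hop_neighborhood(adj, seed, k):
    """Return the set of nodes reachable from `seed` within k hops (BFS)."""
    seen = {seed}
    frontier = {seed}
    for _ in range(k):
        nxt = set()
        for u in frontier:
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    nxt.add(v)
        frontier = nxt
        if not frontier:  # no new nodes discovered; neighborhood saturated
            break
    return seen

# Small random graph as a stand-in for the large input graph (assumption:
# the real graph is similarly well connected).
random.seed(0)
n, avg_deg = 10_000, 10
adj = {u: set() for u in range(n)}
for _ in range(n * avg_deg // 2):
    u, v = random.randrange(n), random.randrange(n)
    if u != v:
        adj[u].add(v)
        adj[v].add(u)

# Receptive field size after each of the 7 message passing layers.
for k in range(1, 8):
    size = len(k_hop_neighborhood(adj, seed=0, k=k))
    print(f"{k}-hop receptive field: {size} / {n} nodes")
```

On a graph with average degree around 10, the neighborhood already covers most of the graph after only a few hops, which is why per-layer sampling fanouts (or subgraph methods like Cluster-GCN/GraphSAINT that bound the subgraph size up front) are usually needed at this depth.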
Thank you for reading.
Sincerely.