Can I do graph partition and load partition with the TensorFlow or MXNet backend?

Hi @VoVAllen, I saw your answer about using the dgl.distributed module with the TensorFlow backend.

But I’m still a bit unclear about whether I can use it: can I do the graph partition and load partition steps with dgl.distributed.partition_graph and dgl.distributed.load_partition under the TensorFlow or MXNet backend?
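For context, this is roughly the workflow I have in mind, written against the PyTorch backend today (a minimal sketch on a toy random graph; the graph name 'toy' and the 'partitioned' output directory are just placeholders, and the exact tuple layout returned by load_partition differs across DGL versions):

```python
import dgl
import torch

# Toy stand-in for a real dataset: 1,000 nodes, 5,000 random edges.
src = torch.randint(0, 1000, (5000,))
dst = torch.randint(0, 1000, (5000,))
g = dgl.graph((src, dst), num_nodes=1000)
g.ndata['feat'] = torch.randn(1000, 16)

# Offline preprocessing: split the graph into 4 parts with METIS and
# write the partitions plus a JSON config under ./partitioned/.
dgl.distributed.partition_graph(
    g, graph_name='toy', num_parts=4, out_path='partitioned',
    part_method='metis')

# Later (possibly in a different process), load a single partition back.
# The returned tuple holds the local subgraph, its node/edge features,
# and the partition book; unpack it according to the DGL version you run.
part_data = dgl.distributed.load_partition('partitioned/toy.json', part_id=0)
local_g = part_data[0]
print(local_g)
```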

Hope you can help me! Thanks!

I haven’t tried that before, but I believe it’s basically viable since it is implemented in a framework-agnostic way. Also, partition_graph is not strongly tied to the downstream task. Why would you prefer a backend other than PyTorch?

The root cause is that, although PyTorch is easy to use and efficient, it does not yet provide APIs for users to manually control the computation graph, because it manages the graph itself for autograd.

But I want to control the generated computation graph myself, e.g., manually move the computation graph for batch A from GPU to CPU, and then manually move it back for the backward pass. PyTorch cannot yet support this, so I may need to resort to MXNet or TensorFlow, which provide APIs to control the computation graph.

It’s hard for DGL to support the computation graph for now because we rely heavily on DLPack to communicate with the backend framework. But the partition part is isolated from the rest, like offline preprocessing, so I believe using PyTorch should be fine?
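To illustrate the DLPack point: DGL exchanges tensors with the backend framework through DLPack capsules, which is roughly the hand-written round trip below (a sketch that assumes both PyTorch and TensorFlow ≥ 2.2 are installed; DGL does this internally, not through these exact calls):

```python
import torch
from torch.utils.dlpack import to_dlpack
import tensorflow as tf

# A tensor produced on the PyTorch side.
t = torch.arange(6, dtype=torch.float32).reshape(2, 3)

# Export it as a DLPack capsule (a zero-copy view of the same memory)...
capsule = to_dlpack(t)

# ...and import it on the TensorFlow side without copying the data.
tf_t = tf.experimental.dlpack.from_dlpack(capsule)
print(tf_t.shape)  # (2, 3)
```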

@VoVAllen Thanks for your reply. For this case, PyTorch is now testing hooks that transfer the saved computation graph tensors between CPU and GPU. I’m following up on that issue.
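For anyone landing here later: assuming the feature meant above is PyTorch’s saved-tensor hooks, here is a minimal sketch of offloading saved activations to CPU during the forward pass and fetching them back for the backward pass (requires a PyTorch version that ships torch.autograd.graph.saved_tensors_hooks; the model below is just a placeholder):

```python
import torch
import torch.nn as nn

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
x = torch.randn(64, 128, device=device)

# pack_hook runs during forward: move each saved activation to CPU.
def pack_to_cpu(tensor):
    return tensor.device, tensor.to('cpu')

# unpack_hook runs during backward: bring it back to the original device.
def unpack_from_cpu(packed):
    original_device, tensor = packed
    return tensor.to(original_device)

with torch.autograd.graph.saved_tensors_hooks(pack_to_cpu, unpack_from_cpu):
    out = model(x)          # tensors saved for backward now live on CPU
loss = out.sum()
loss.backward()             # saved tensors are copied back to the GPU here
```

PyTorch also provides torch.autograd.graph.save_on_cpu as a ready-made context manager for exactly this offload-and-restore pattern.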
