Distributed training and graph partitioning on Kubernetes

What are DGL’s officially supported operators for graph partitioning and distributed training on Kubernetes? Almost all of the documentation references GitHub - Qihoo360/dgl-operator: The DGL Operator makes it easy to run Deep Graph Library (DGL) graph neural network training on Kubernetes, but that project was last updated almost three years ago. Is it the only option?

Have you checked the tutorial on setting up distributed training: dgl/examples/distributed/graphsage at master · dmlc/dgl · GitHub? We don’t have native support for k8s.
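For context, the distributed GraphSAGE example linked above partitions the graph offline before launching any trainers. A minimal sketch of that partitioning step is below, assuming DGL is installed with its METIS bindings; the toy graph, the graph name, the partition count, and the output directory are all illustrative assumptions, not values from the tutorial:

```python
# Sketch of the offline graph-partitioning step used by DGL's
# distributed training examples. Assumes DGL and PyTorch are installed;
# the graph, name, partition count, and output path are illustrative.
import dgl
import torch

# Toy homogeneous graph standing in for a real dataset.
g = dgl.rand_graph(1000, 5000)
g.ndata["feat"] = torch.randn(g.num_nodes(), 16)

# METIS-based partitioning into 4 parts; each partition is written
# under out_path along with a partition-config JSON that the
# distributed trainers later load via dgl.distributed.DistGraph.
dgl.distributed.partition_graph(
    g,
    graph_name="toy_graph",   # assumed name
    num_parts=4,
    out_path="partitions/",   # assumed output directory
    part_method="metis",
)
```

On Kubernetes you would typically run this step in a one-off job and place the resulting partition directory on shared storage that the trainer pods can mount, since DGL itself does not schedule the pods for you.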
