Why is the input graph converted to a homogeneous graph in DistDGL?

While reading the distributed training code, I found that during graph partitioning, the input graph is first converted to a homogeneous graph, and when loading the partitioned graph with DistGraph, the loaded graph is also homogeneous, even though the input training graph before partitioning was heterogeneous. I am confused about why it is necessary to convert the input graph into a homogeneous graph rather than keeping the original types. Is there a limitation that only homogeneous graphs work during distributed training?

Thank you:)

Mainly for code reuse, plus some limitations of the partitioning method. Please refer to 7.5 Heterogeneous Graph Under The Hood — DGL 0.9.1 documentation for more details.
For end users, you can access the loaded DistGraph in the same way as the original graph, whether homogeneous or heterogeneous. DistGraph is the object you should use, rather than the graph returned by dgl.distributed.load_partition(), which is always homogeneous.
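To illustrate the distinction, here is a rough sketch (not runnable as-is: it assumes you have already partitioned a heterogeneous graph named `mygraph` and have a DistDGL setup with an `ip_config.txt`; the graph name, paths, and node type `'user'` are all hypothetical):

```python
# Hedged sketch: accessing a partitioned heterogeneous graph in DistDGL.
# Assumes a partition named 'mygraph' with config 'mygraph/mygraph.json'
# and a node type 'user' -- all hypothetical placeholders.
import dgl
import dgl.distributed

dgl.distributed.initialize(ip_config='ip_config.txt')

# DistGraph exposes the original (possibly heterogeneous) schema,
# so you can query node/edge types as with the pre-partition graph:
g = dgl.distributed.DistGraph('mygraph', part_config='mygraph/mygraph.json')
print(g.ntypes, g.etypes)            # original node/edge types are preserved
feat = g.nodes['user'].data['feat']  # per-type feature access still works

# load_partition(), by contrast, returns the on-disk homogeneous graph of
# one partition; the type information lives in ndata/edata type-id fields:
local_g, node_feats, edge_feats, gpb, _, ntypes, etypes = \
    dgl.distributed.load_partition('mygraph/mygraph.json', part_id=0)
print(local_g.ndata[dgl.NTYPE])      # per-node type ids in the homogeneous graph
```

So the homogeneous representation is an internal storage/partitioning detail; the heterogeneous view is reconstructed for you through DistGraph.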