Problem when running R-GCN in parallel

The rgcn example code: https://github.com/classicsong/dgl/blob/rgcn-lk/examples/pytorch/rgcn/entity_classify_mp.py can run with the ogbn-mag dataset, which contains node features.
The implementation is a little bit tricky: https://github.com/classicsong/dgl/blob/814e74f04afd21b93db7879740b8dde98b20ac44/examples/pytorch/rgcn/entity_classify_mp.py#L434-L444

Hmm, R-GCN with a graph and features of this size should be handled quite well. If it is only part of the model, how large is your full architecture? You could measure the memory R-GCN takes by removing it and comparing the difference in GPU consumption (i.e., how much memory your model uses without R-GCN).

Thanks for the information! Is it possible to adapt the code to a custom dataset, or does it only apply to OGB datasets?

The entire model works well when I use GCN, but when I migrate to R-GCN it runs out of memory.
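A rough back-of-envelope comparison of one layer's parameter count shows why R-GCN is heavier than GCN: it keeps a separate weight matrix per relation (or per basis, with basis decomposition). All dimensions below are made-up placeholders, not taken from the thread.

```python
# Hypothetical layer sizes: 64-dim in/out, 50 relations, 30 bases.
in_dim, out_dim, num_rels, num_bases = 64, 64, 50, 30

# GCN: a single weight matrix shared by all edges.
gcn_params = in_dim * out_dim

# R-GCN, full: one weight matrix per relation type.
rgcn_full = num_rels * in_dim * out_dim

# R-GCN, basis decomposition: num_bases shared matrices
# plus per-relation mixing coefficients.
rgcn_basis = num_bases * in_dim * out_dim + num_rels * num_bases

print(gcn_params, rgcn_full, rgcn_basis)
```

Even with basis decomposition, the relational weights multiply the layer's footprint, and the intermediate per-edge messages during training typically cost far more than the parameters themselves.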

In this case, you could try turning on the low-memory option for RGCN (low_mem=True).
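To illustrate what the low-memory path avoids, here is a numpy sketch (sizes made up) of the two ways an R-GCN layer can apply per-relation weights: the naive path materializes one weight matrix per edge, an (E, in, out) tensor that blows up GPU memory on large graphs, while the low-memory path buckets edges by relation and does one matmul per relation. Both produce identical messages.

```python
import numpy as np

rng = np.random.default_rng(0)
E, R, IN, OUT = 1000, 5, 16, 8          # edges, relations, feature dims (illustrative)
W = rng.standard_normal((R, IN, OUT))   # one weight matrix per relation
etype = rng.integers(0, R, size=E)      # relation id of each edge
h = rng.standard_normal((E, IN))        # source-node feature carried by each edge

# Naive: gather a full (E, IN, OUT) weight tensor -- one matrix per edge.
msg_naive = np.einsum("eio,ei->eo", W[etype], h)

# Low-memory: group edges by relation, one (IN, OUT) matmul per relation.
msg_low = np.empty((E, OUT))
for r in range(R):
    idx = np.nonzero(etype == r)[0]
    msg_low[idx] = h[idx] @ W[r]

assert np.allclose(msg_naive, msg_low)
```

The trade-off is extra bucketing/indexing work in exchange for never allocating the per-edge weight tensor, which is why the option helps when R-GCN alone triggers OOM.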

I didn’t find the argument `low_mem` in the implementation; it would be helpful if you could point to the code. Thanks!

Never mind, I found it. For those who might need it as well, here is the reference