How to reduce memory use on a big graph

I’m trying to run the RGCN (RelGraphConv) link prediction example. The dataset I used is FB15k:

entities: 14951

relations: 1345

edges: 483142

My problem appears when the model reaches the evaluation phase and PyTorch needs to allocate 275GB of CUDA or CPU memory.

I see that you added some new features in DGL 0.3 (the fused message passing looks awesome), and in the example code the message passing function is a self-defined bdd (block-diagonal-decomposition) function or a self-defined basis-decomposition function.
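
For reference, my understanding of the basis trick (from the RGCN paper) is roughly the following. This is just a minimal sketch of the idea, not the example's actual code, and `BasisLinear` is a name I made up:

```python
import torch
import torch.nn as nn

class BasisLinear(nn.Module):
    """Per-relation weights via basis decomposition: W_r = sum_b a_rb * V_b.

    Instead of storing one (in_feat x out_feat) matrix per relation
    (1,345 of them for FB15k), only num_bases shared matrices are kept,
    and each relation mixes them with learned coefficients.
    """

    def __init__(self, in_feat, out_feat, num_rels, num_bases):
        super().__init__()
        self.in_feat, self.out_feat = in_feat, out_feat
        # Shared bases V_b and per-relation coefficients a_rb.
        self.bases = nn.Parameter(torch.randn(num_bases, in_feat, out_feat) * 0.01)
        self.coeffs = nn.Parameter(torch.randn(num_rels, num_bases) * 0.01)

    def forward(self, h, rel_ids):
        # h: (num_edges, in_feat) source features; rel_ids: (num_edges,)
        num_bases = self.bases.shape[0]
        weight = self.coeffs @ self.bases.view(num_bases, -1)           # (num_rels, in*out)
        weight = weight.view(-1, self.in_feat, self.out_feat)[rel_ids]  # (num_edges, in, out)
        # Note: indexing by rel_ids materializes a per-edge weight tensor,
        # which is exactly the kind of intermediate that fused kernels avoid.
        return torch.bmm(h.unsqueeze(1), weight).squeeze(1)             # (num_edges, out_feat)
```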

Is there any way to reduce the memory usage (e.g., by modifying the message passing function to follow certain rules)?

We have two RGCN examples: one implemented with a homogeneous graph and another with a heterogeneous graph. Which example were you running?

Thanks.


I was running this code; the dataset is FB15k.

I also see you provided a wrapper function to load the FB15k data. Can you tell me how you split the RDF data into train, validation, and test sets?
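
For reference, the copy of FB15k I downloaded already comes pre-split into tab-separated train/valid/test files, so a loader can read them directly. A minimal sketch, assuming the standard `head<TAB>relation<TAB>tail` format (`load_split` is a hypothetical helper):

```python
import os

def load_split(data_dir, split):
    """Read one of the pre-split FB15k files (head<TAB>relation<TAB>tail)."""
    triples = []
    with open(os.path.join(data_dir, f"{split}.txt")) as f:
        for line in f:
            head, rel, tail = line.rstrip("\n").split("\t")
            triples.append((head, rel, tail))
    return triples

train = load_split("FB15k", "train")  # 483,142 triples -- matches the edge count above
valid = load_split("FB15k", "valid")
test = load_split("FB15k", "test")
```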

I ran this command on the nightly build and it completed successfully with 3GB of CPU memory and 6.6GB of GPU memory:

python3 link_predict.py -d FB15k --gpu 0 --eval-protocol=filtered
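
With `--eval-protocol=filtered`, the ranking can also be computed batch by batch over the test triples, so the full (num_test × num_entities) score tensor is never held in memory at once. Here is a rough sketch of that general technique, not the example's exact code: it assumes a DistMult decoder, and `known_tails` (mapping each (head, relation) pair to all true tails across the splits) is a hypothetical helper:

```python
import torch

@torch.no_grad()
def filtered_tail_ranks(ent_emb, rel_emb, test_triples, known_tails, batch_size=512):
    """Filtered rank of each true tail, computed batch by batch.

    ent_emb:      (num_entities, dim) entity embeddings
    rel_emb:      (num_rels, dim) relation embeddings (DistMult decoder assumed)
    test_triples: (num_test, 3) LongTensor of (head, rel, tail) ids
    known_tails:  dict mapping (head, rel) to the set of all true tail ids
    """
    ranks = []
    for start in range(0, test_triples.size(0), batch_size):
        batch = test_triples[start:start + batch_size]
        h, r, t = batch[:, 0], batch[:, 1], batch[:, 2]
        # Scores against every candidate tail, but only for this batch:
        # (batch_size, num_entities) instead of (num_test, num_entities).
        scores = (ent_emb[h] * rel_emb[r]) @ ent_emb.t()
        for j in range(batch.size(0)):
            target = scores[j, t[j]].item()
            # Filtered protocol: other known true tails don't count as errors.
            for tail in known_tails[(h[j].item(), r[j].item())]:
                scores[j, tail] = float("-inf")
            ranks.append(int((scores[j] > target).sum()) + 1)
    return torch.tensor(ranks)
```

The head side is ranked the same way with the roles of head and tail swapped.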

Could you tell us what your DGL version was?