How to fix random seeds in DGL during training?


I noticed that if we set the following random seeds in a PyTorch project, the training loss is the same across different runs. But if I add a GCN layer with DGL, the training loss fluctuates between runs even with identical code and settings. I do not use any random variable explicitly when adding the GCN layer.
I wonder whether this is related to DGL, and how to remove the randomness.

if args.cuda:
    torch.cuda.manual_seed(args.seed)
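For reference, a common workaround is to seed every RNG source in one helper. This is a sketch, not official DGL guidance: it assumes recent DGL versions expose `dgl.seed` (older releases use `dgl.random.seed` instead), and it guards the `numpy`/`torch`/`dgl` imports so it also runs where those packages are absent.

```python
import importlib.util
import os
import random


def set_seed(seed: int) -> None:
    """Seed every RNG commonly involved in a PyTorch + DGL training run."""
    random.seed(seed)                       # Python's built-in RNG
    os.environ["PYTHONHASHSEED"] = str(seed)

    if importlib.util.find_spec("numpy") is not None:
        import numpy as np
        np.random.seed(seed)

    if importlib.util.find_spec("torch") is not None:
        import torch
        torch.manual_seed(seed)             # seeds the CPU RNG
        if torch.cuda.is_available():
            torch.cuda.manual_seed_all(seed)
        # Make cuDNN deterministic; this can slow training down.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False

    if importlib.util.find_spec("dgl") is not None:
        import dgl
        dgl.seed(seed)                      # DGL's own RNG (assumed API)
```

Calling `set_seed(args.seed)` once at the top of the training script, before building the graph and model, should then make consecutive runs reproducible on the same hardware and library versions.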


Hi Erutan,

Thank you for the feedback. Did you use the built-in GCN layer or one you developed yourself? It would also be very helpful if you could provide a snippet of your code so that we can reproduce the issue and do some in-depth analysis.