How to fix random seeds in DGL during training?


#1

I noticed that if we set the following random seeds in a PyTorch project, the training loss is identical across different runs. But once I add a GCN layer with DGL, the training loss fluctuates from run to run with the same code and settings, even though I do not use any random variable explicitly when adding the GCN layer.
I wonder whether this is related to DGL and how to fix the randomness.

import random
import numpy as np
import torch

# Seed the Python, NumPy, and PyTorch (CPU and GPU) RNGs.
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
if args.cuda:
    torch.cuda.manual_seed_all(args.seed)
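
For completeness, here is the consolidated seeding I am experimenting with. This is only a sketch: dgl.seed (exposed by recent DGL releases; older versions have dgl.random.seed instead) and the cuDNN flags are things I would try rather than a confirmed fix, and set_seed is just a helper name of my own.

import random
import numpy as np
import torch
import dgl

def set_seed(seed, cuda=False):
    # Hypothetical helper: seed every RNG involved in training.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    dgl.seed(seed)  # assumed to seed DGL's own RNG (recent releases)
    if cuda:
        torch.cuda.manual_seed_all(seed)
        # Request deterministic cuDNN kernels; slower, but removes
        # one source of run-to-run variation on the GPU.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False

set_seed(42, cuda=torch.cuda.is_available())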


#2

Hi Erutan,

Thank you for the feedback. Did you use the built-in GCN layer or one that you developed yourself? It would also be very helpful if you could provide a snippet of your code so that we can reproduce the issue and perform some in-depth analysis.
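
For reference, by "built-in GCN layer" I mean something along these lines (a minimal sketch, assuming a recent DGL with the PyTorch backend; the graph and feature sizes here are made up purely for illustration):

import dgl
import torch
from dgl.nn import GraphConv

# Toy 4-node graph; self-loops avoid zero-in-degree warnings.
g = dgl.graph(([0, 1, 2], [1, 2, 3]))
g = dgl.add_self_loop(g)

conv = GraphConv(in_feats=5, out_feats=2)  # DGL's built-in GCN layer
feat = torch.randn(4, 5)                   # random node features
out = conv(g, feat)                        # output shape: (4, 2)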