How to fix random seeds in DGL during training?

I noticed that if we set the following random seeds in a PyTorch project, the training loss is identical across different runs. But once I add a GCN layer with DGL, the training loss fluctuates from run to run with the same code and settings. I do not use any random variable explicitly when adding the GCN layer.
I wonder whether this is related to DGL and how I can fix the randomness.

import random
import numpy as np
import torch

random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
if args.cuda:
    torch.cuda.manual_seed_all(args.seed)
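On GPU, cuDNN's autotuned kernels are another common source of run-to-run variation; the standard PyTorch flags for pinning cuDNN to deterministic algorithms can be added to the snippet above (this is the usual determinism recipe, not something specific to DGL):

# Force cuDNN to pick deterministic algorithms instead of benchmarking
# for the fastest (potentially non-deterministic) one.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False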

Hi Erutan,

Thank you for the feedback. Did you use the built-in GCN layer or one that you developed yourself? It would also be very helpful if you could provide a snippet of your code so that we can reproduce the issue and do some in-depth analysis.
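For clarity, by the built-in GCN layer I mean DGL's graph convolution module; a minimal sketch of its usage, assuming a DGL version that ships dgl.nn.pytorch.GraphConv:

import torch.nn as nn
import torch.nn.functional as F
from dgl.nn.pytorch import GraphConv  # built-in GCN layer (assumes your DGL version provides it)

class GCN(nn.Module):
    def __init__(self, in_feats, hidden_feats, out_feats):
        super(GCN, self).__init__()
        # Two built-in graph convolution layers from DGL's nn module.
        self.conv1 = GraphConv(in_feats, hidden_feats)
        self.conv2 = GraphConv(hidden_feats, out_feats)

    def forward(self, g, features):
        # Each GraphConv takes the DGLGraph and the node feature matrix.
        h = F.relu(self.conv1(g, features))
        return self.conv2(g, h)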

Hi, I ran into the same issue. My code is https://github.com/dmlc/dgl/pull/403.

By the way, I also use the following code in my test:

random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
if args.cuda:
    torch.cuda.manual_seed_all(args.seed)

It seems that the seed setting does not work.
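For what it is worth, the snippet above does not touch DGL's own random number generator; if the installed DGL version exposes dgl.random.seed, seeding it as well might be worth trying. This is just a sketch under that assumption, not a verified fix:

import dgl

# Seed DGL's internal RNG in addition to Python/NumPy/PyTorch.
# Assumes the installed DGL version exposes dgl.random.seed
# (newer releases also provide dgl.seed).
dgl.random.seed(args.seed)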

@Erutan-pku @hbsun2113 Thank you for raising the issue and sorry about that. We will need to do some further investigation on this and an issue has been opened in the repo: https://github.com/dmlc/dgl/issues/412.