I noticed that if I set the random seeds below in a PyTorch project, the training loss is identical across runs. But once I add a GCN layer with DGL, the training loss fluctuates from run to run, even with the same code and settings. I do not use any random variables explicitly when adding the GCN layer.
I wonder whether this is related to DGL, and how I can remove the randomness so runs are reproducible.
import random
import numpy as np
import torch

random.seed(args.seed)            # Python's built-in RNG
np.random.seed(args.seed)         # NumPy RNG
torch.manual_seed(args.seed)      # PyTorch CPU RNG
if args.cuda:
    torch.cuda.manual_seed_all(args.seed)  # RNGs on all CUDA devices
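For context, the layer I add is essentially the stock GraphConv from dgl.nn; the sketch below (class and feature names are placeholders, not my exact code) shows the kind of model I mean, with nothing explicitly random on my side:

import torch
import torch.nn as nn
from dgl.nn import GraphConv

class GCN(nn.Module):
    def __init__(self, in_feats, hidden_feats, num_classes):
        super().__init__()
        # Two plain graph convolution layers, default settings
        self.conv1 = GraphConv(in_feats, hidden_feats)
        self.conv2 = GraphConv(hidden_feats, num_classes)

    def forward(self, g, feat):
        h = torch.relu(self.conv1(g, feat))  # g is a DGLGraph
        return self.conv2(g, h)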
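One thing I am considering (not verified to help) is extending the seeding to DGL's own RNG via dgl.seed and forcing deterministic cuDNN kernels, roughly:

import random
import numpy as np
import torch
import dgl

def set_all_seeds(seed, cuda=False):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    dgl.seed(seed)                         # seed DGL's RNG as well
    if cuda:
        torch.cuda.manual_seed_all(seed)
        # Ask cuDNN for deterministic kernels (may slow training down)
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False

I am not sure this covers everything, though, since some GPU ops can be nondeterministic regardless of seeding. Is there anything else in DGL's message passing that introduces run-to-run variation, and if so, how do I fix it?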