Why does the order of edges in the graph construction step affect the training results?

I ran into a strange problem: when I change the order of the edges used to construct the graph, the training results (e.g., the loss at each step) are different. I would like to know why. I can reproduce the results when the order of edges is fixed, so I have controlled the other sources of randomness.

PyTorch 1.5
CUDA 10.1
DGL 0.4.3

model: GraphSAGE
aggregator: pool

I assume you have fixed all sources of randomness, like below?

import torch

torch.manual_seed(seed)                    # CPU RNG
torch.cuda.manual_seed_all(seed)           # all GPU RNGs
torch.backends.cudnn.deterministic = True  # force deterministic cuDNN kernels
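If graph construction or data loading also uses Python's (or NumPy's) own RNG, those need seeding too; otherwise the torch seeds alone do not pin everything. A minimal stdlib sketch of how re-seeding fixes the whole random sequence:

```python
import random

def draw(seed, n=3):
    # Re-seeding before each run pins the entire random sequence,
    # so two runs with the same seed see identical draws.
    random.seed(seed)
    return [random.random() for _ in range(n)]

print(draw(0) == draw(0))   # True: same seed, same draws
print(draw(0) == draw(1))   # False: different seed, different draws
```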

In addition, DGL has some intrinsic randomness due to its backend kernels; see issue 1471.

Yes, I fixed all the randomness, and I can reproduce all the results.
Anyway, I would like to know whether the order of the edges and the order of the neighbors are fixed for each iteration. (I see no shuffle before the LSTM reducer in your GraphSAGE code.)

It should be, but note that you cannot make any assumptions about edge order in the message-passing phase, as internal optimizations are performed there.
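One concrete reason edge order matters even with identical seeds: floating-point addition is not associative, so when the backend sums incoming messages in a different edge order, the low bits of the result change, and that difference compounds over training steps. A minimal Python illustration:

```python
# Floating-point addition is not associative: summing the same
# three "messages" in two different orders gives different results.
a = (0.1 + 0.2) + 0.3   # sum left-to-right
b = 0.1 + (0.2 + 0.3)   # sum with a different grouping

print(a == b)   # False
print(a, b)     # 0.6000000000000001 0.6
```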

Thanks, but I don't think I made myself clear. What I mean is that in GraphSAGE, shuffling the neighbors before feeding them into the LSTM is what makes the LSTM (approximately) order-invariant, but in your GraphSAGE demo code the neighbors are not shuffled in the LSTM reducer. So I guess the edge order or neighbor order is randomized in DGL at each iteration?
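For context, here is a minimal stdlib sketch (not DGL's actual reducer API) of the point above: the left fold below stands in for an LSTM as an order-sensitive aggregator, and its output depends on the neighbor order, which is why the GraphSAGE paper shuffles neighbors before feeding them in.

```python
import random

def order_sensitive_agg(xs):
    # Stand-in for an LSTM: a left fold that is NOT commutative,
    # so the result depends on the order of the neighbors.
    acc = 0.0
    for x in xs:
        acc = 0.5 * acc + x
    return acc

def shuffled_agg(xs, rng):
    # GraphSAGE-style trick: shuffle neighbors first, so no fixed
    # neighbor ordering is privileged by the aggregator.
    xs = list(xs)
    rng.shuffle(xs)
    return order_sensitive_agg(xs)

neighbors = [1.0, 2.0, 3.0]
print(order_sensitive_agg(neighbors))         # 4.25
print(order_sensitive_agg(neighbors[::-1]))   # 2.75 — order matters
```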