I am seeing unexpected behavior when using the GatedGraphConv layer on batched graphs. Specifically, when I test with a batch of 2 identical graphs, the outputs for the first and second graph differ. Here is my code:
import dgl
import torch
from dgl.nn import GatedGraphConv
conv = GatedGraphConv(4, 1, 1, 3)  # in_feats=4, out_feats=1, n_steps=1, n_etypes=3
a = dgl.graph(([0, 1], [1, 2]))  # 3-node chain: 0 -> 1 -> 2
a.ndata['node'] = torch.ones((3, 1))  # one scalar feature per node
a.edata['edge'] = torch.Tensor([[0], [1]])  # per-edge type labels
a = dgl.add_self_loop(a)  # the new self-loop edges get zero-filled 'edge' features
batch = dgl.batch([a, a])  # batch of two identical graphs
print(conv(a, a.ndata['node'], a.edata['edge']))
print(conv(batch, batch.ndata['node'], batch.edata['edge']))
And the output is:
tensor([[0.9503],
        [0.8060],
        [0.8060]], grad_fn=<...>)
tensor([[0.9503],
        [0.8060],
        [0.8060],
        [0.9503],
        [0.9753],
        [0.8060]], grad_fn=<...>)
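Since dgl.batch builds a disjoint union (nodes 0-2 are the first copy, nodes 3-5 the second), I would expect the two halves of the batched output to be identical row-for-row. A minimal check, reusing the objects above, confirms the mismatch:

single_out = conv(a, a.ndata['node'], a.edata['edge'])
batch_out = conv(batch, batch.ndata['node'], batch.edata['edge'])
print(torch.allclose(batch_out[:3], single_out))  # True
print(torch.allclose(batch_out[3:], single_out))  # expected True, but prints False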
Thank you for any help!