Hi, I tried to apply a GCN to a batch of images that all share the same graph structure (say, fully connected), but the results are bad. Could you point out what I have done wrong? Here is my implementation.
I created a single dgl.graph object. The model's forward(graph, inputs) takes a batch of images and that one graph object as input. I'm aware that GraphConv() expects input of dimension (N, *, in_feats). I assume the additional dimension is equivalent to the batch_size dimension, so I reshape the batch of images accordingly. The code of the GCN layer is as follows:
```python
import torch
import torch.nn as nn
from dgl.nn import EdgeConv


class GCN(nn.Module):
    def __init__(self, in_feats, hidden_size):
        super(GCN, self).__init__()
        self.conv1 = EdgeConv(in_feats, hidden_size)
        self.conv2 = EdgeConv(hidden_size, in_feats)

    def forward(self, inputs, g):
        b, c, h, w = inputs.shape
        inputs = inputs.view(b, c, -1)    # B x C x N
        # Expected dimension for GCN: N, *, hidden_size
        inputs = inputs.permute(2, 0, 1)  # N x B x C
        output = self.conv1(g, inputs)    # use the g passed in, not self.g
        output = torch.relu(output)       # was torch.rel (typo)
        output = self.conv2(g, output)
        # permute() returns a non-contiguous view, so call
        # .contiguous() before view()
        output = output.permute(1, 2, 0).contiguous()
        output = output.view(b, c, h, w)
        return output
```
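To sanity-check the reshaping alone (separately from the conv layers), here is a minimal sketch with hypothetical sizes showing that the B x C x H x W → N x B x C transform and its inverse round-trip correctly; the sizes `b, c, h, w` are made up for illustration:

```python
import torch

# Hypothetical sizes, just to check the reshapes round-trip.
b, c, h, w = 2, 3, 4, 4
x = torch.arange(b * c * h * w, dtype=torch.float32).view(b, c, h, w)

# B x C x H x W -> B x C x N -> N x B x C (what forward() feeds the conv layers)
nodes = x.view(b, c, -1).permute(2, 0, 1)
assert nodes.shape == (h * w, b, c)

# Inverse: N x B x C -> B x C x N -> B x C x H x W.
# permute() returns a non-contiguous view, so call .contiguous() before view().
restored = nodes.permute(1, 2, 0).contiguous().view(b, c, h, w)
assert torch.equal(restored, x)
```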