Set features of a batched heterogeneous graph


I am trying to batch a set of heterographs to improve training efficiency for a heterogeneous GCN. I am using dglnn.GraphConv to define the relations within each HeteroGraphConv layer. It works fine for inference and training on a single graph at a time, with a feature dictionary for the nodes (no edge features).

However, I have trouble understanding the format of the feature dictionary for the batched graph. Since the batched graph comprises all the nodes of the disjoint graphs and is also a heterograph, I tried concatenating the features for each node type into a single tensor within the feature dictionary, but it throws this error:
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x220 and 22x32)
Note that each disjoint graph has 22 nodes and the batch size is 10, so the batched graph has 220 nodes (in this case all the graphs have the same number of nodes, but that can vary).

Is there an alternative way to define the feature dictionary for the batched heterogeneous graph? I checked the dgl.batch page in the DGL 0.6.1 documentation, but it doesn't explicitly describe how to define the feature dictionary for a batched heterograph.
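To make the question concrete, this is roughly how I build the batched feature dictionary, concatenating each node type's features along dimension 0 so each tensor has shape (total nodes of that type, feature dim). The node counts and feature dimensions below are placeholders, not my real sizes:

```python
import torch

# Hypothetical per-type feature dims and node counts (placeholders)
feat_dims = {'r': 4, 'v': 3, 'j': 5}
nodes_per_graph = {'r': 10, 'v': 5, 'j': 7}  # 22 nodes per graph in total
batch_size = 10

# One feature dict per disjoint graph: each tensor is (num_nodes_of_type, feat_dim)
per_graph_feats = [
    {ntype: torch.randn(nodes_per_graph[ntype], dim)
     for ntype, dim in feat_dims.items()}
    for _ in range(batch_size)
]

# Batched feature dict: concatenate along the node dimension (dim 0),
# in the same graph order passed to dgl.batch
batched_feats = {
    ntype: torch.cat([fd[ntype] for fd in per_graph_feats], dim=0)
    for ntype in feat_dims
}

for ntype, t in batched_feats.items():
    print(ntype, tuple(t.shape))
# r (100, 4), v (50, 3), j (70, 5)
```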

Thanks in advance!

Just to give a code snippet, this is how I have constructed the heterogeneous GCN:

import torch
import torch.nn as nn
import torch.nn.functional as F
import dgl.nn as dglnn

class HetGCN(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim, fc_dim):
        super(HetGCN, self).__init__()

        self.conv_1 = dglnn.HeteroGraphConv({
            'observe': dglnn.GraphConv(in_dim['j'], hid_dim['r'], activation=F.leaky_relu),
            'communicate': dglnn.GraphConv(in_dim['r'], hid_dim['r'], activation=F.leaky_relu),
            'near': dglnn.GraphConv(in_dim['j'], hid_dim['v'], activation=F.leaky_relu),
            'visit': dglnn.GraphConv(in_dim['v'], hid_dim['r'], activation=F.leaky_relu),
            'depends': dglnn.GraphConv(in_dim['j'], hid_dim['j'], activation=F.leaky_relu)
        }, aggregate='sum')

        self.conv_2 = dglnn.HeteroGraphConv({
            'observe': dglnn.GraphConv(hid_dim['j'], out_dim['r'], activation=F.leaky_relu),
            'communicate': dglnn.GraphConv(hid_dim['r'], out_dim['r'], activation=F.leaky_relu),
            'near': dglnn.GraphConv(hid_dim['j'], out_dim['v'], activation=F.leaky_relu),
            'visit': dglnn.GraphConv(hid_dim['v'], out_dim['r'], activation=F.leaky_relu),
            'depends': dglnn.GraphConv(hid_dim['j'], out_dim['j'], activation=F.leaky_relu)
        }, aggregate='sum')

        self.fc_1 = nn.Linear(fc_dim['in'], fc_dim['hidden'])
        self.fc_2 = nn.Linear(fc_dim['hidden'], fc_dim['hidden'])
        self.fc_3 = nn.Linear(fc_dim['hidden'], fc_dim['out'])
        self.intermediate = torch.empty(fc_dim['in'], requires_grad=True)

    def forward(self, g, feat_dict):
        # print(feat_dict)
        h1 = self.conv_1(g, feat_dict)
        h2 = self.conv_2(g, h1)
        self.intermediate = torch.cat([h2['r'], h2['j'], h2['v']])

        h4 = F.softplus(self.fc_1(self.intermediate.transpose(0,1)))
        h5 = F.softplus(self.fc_2(h4))
        h6 = F.softplus(self.fc_3(h5))
        return h6

This is the format of the feature dictionary. For the batched heterogeneous graph, I concatenated the features of all graphs in the batch belonging to the same node type into the corresponding tensor in the dictionary.
{'r': tensor([[ 0.0000, 1...='cuda:0'), 'v': tensor([[ 0.0000, 0...='cuda:0'), 'j': tensor([[ 1.0000, 3...='cuda:0')}

Alright. As it turned out, this is the correct way to create the feature dictionary for the batched graph. The error is actually thrown from the fully connected layer, because the output size of the batched graph no longer matches the input size the FC layer was built for.
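In case it helps others hitting the same error: the `(1x220 and 22x32)` message fits an FC layer sized for a single graph's 22 nodes being fed the flattened 220-node batched output. A minimal sketch (placeholder sizes, one output feature per node for simplicity) of reshaping the batched readout so each graph becomes one row of the FC input:

```python
import torch
import torch.nn as nn

batch_size, nodes_per_graph, out_feat = 10, 22, 1
fc_in = nodes_per_graph * out_feat          # FC sized for a single graph (22)

# Batched node-level readout: (220, 1), all graphs stacked along dim 0
h = torch.randn(batch_size * nodes_per_graph, out_feat)

fc = nn.Linear(fc_in, 32)

# Flattening the whole batch into one 220-dim vector triggers the shape error;
# instead, reshape so each graph contributes one fc_in-sized row
h_per_graph = h.view(batch_size, fc_in)     # (10, 22)
out = fc(h_per_graph)                       # (10, 32): one output row per graph
print(tuple(out.shape))
```

(If the graphs had different node counts, a readout such as dgl.mean_nodes would be needed instead of a fixed reshape.)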

Glad that you’ve figured out the problem. Any suggestions you want to share for the team to improve the overall experience?
