Utilizing node features in GCN

Dear DGL community,

I’m trying to classify two groups of whole graphs. Previously I ran graph classification with a GCN classifier:

import dgl
import torch.nn as nn
import torch.nn.functional as F
from dgl.nn import GraphConv

class GCNClassifier(nn.Module):
    def __init__(self, in_dim, hidden_dim, n_classes, hidden_layers, allow_zero_in_degree):
        super(GCNClassifier, self).__init__()
        self.layers = nn.ModuleList()
        # input layer
        self.layers.append(GraphConv(in_dim, hidden_dim, allow_zero_in_degree=allow_zero_in_degree))
        # hidden layers
        for _ in range(1, hidden_layers):
            self.layers.append(GraphConv(hidden_dim, hidden_dim, allow_zero_in_degree=allow_zero_in_degree))
        self.classify = nn.Linear(hidden_dim, n_classes)

    def forward(self, g):
        # Use the node degree as the initial node feature. For undirected graphs,
        # the in-degree equals the out-degree.
        h = g.in_degrees().view(-1, 1).float()
        # Perform graph convolution and apply the activation function.
        for gnn in self.layers:
            h = F.relu(gnn(g, h))
        g.ndata['h'] = h
        # Compute the graph representation by averaging all node representations.
        hg = dgl.mean_nodes(g, 'h')
        return self.classify(hg)
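For context, here is a minimal usage sketch of the classifier above (the toy graphs and all sizes are hypothetical, not from the original post). Self-loops are added so no node has zero in-degree, and in_dim=1 matches the one-dimensional degree feature:

    import dgl

    # Two toy 4-node graphs; self-loops avoid zero-in-degree nodes.
    g1 = dgl.add_self_loop(dgl.graph(([0, 1, 2], [1, 2, 3]), num_nodes=4))
    g2 = dgl.add_self_loop(dgl.graph(([0, 2], [1, 3]), num_nodes=4))
    bg = dgl.batch([g1, g2])

    model = GCNClassifier(in_dim=1, hidden_dim=16, n_classes=2,
                          hidden_layers=2, allow_zero_in_degree=False)
    logits = model(bg)  # shape (2, n_classes), one row per graph in the batch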

Now I would like to utilize node features stored in g.ndata['feat']. I add the node features beforehand by running g.ndata['feat'] = torch.tensor(graphs[i, :, :, 1]), where this tensor is a 25×25 feature matrix (25 nodes with 25 features each).

Changing h to h = g.ndata['feat'] leads to the error Expected object of scalar type Double but got scalar type Float for argument #3 'mat2' in call to _th_addmm_out. It’s been a long time since I last used DGL. Do you have any suggestions as to where my error might be?
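For reference, the modified forward would look something like this (a sketch; note that the in_dim passed to the first GraphConv must now be 25 to match the feature width):

    def forward(self, g):
        # Use the precomputed node features instead of node degrees.
        h = g.ndata['feat']
        for gnn in self.layers:
            h = F.relu(gnn(g, h))
        g.ndata['h'] = h
        hg = dgl.mean_nodes(g, 'h')
        return self.classify(hg)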

Try with g.ndata['feat'] = torch.tensor(graphs[i, :, :, 1]).double()?

I tried g.ndata['feat'] = torch.tensor(graphs[i, :, :, 1]).double(), but it still gives the same error: Expected object of scalar type Double but got scalar type Float for argument #3 'mat2' in call to _th_addmm_out. I stepped through with the debugger and saw that it fails on the very first call in the GNN layers loop:

        for gnn in self.layers:
            h = F.relu(gnn(g, h))

Got the solution: I had to call g.ndata['feat'] = torch.tensor(graphs[i, :, :, 1]).float() instead of g.ndata['feat'] = torch.tensor(graphs[i, :, :, 1]).double(). PyTorch layer weights default to float32, so the input features must be float32 as well; casting the features to double only reproduced the dtype mismatch.
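A quick way to spot this kind of mismatch is to compare the dtypes directly (a sketch, assuming a model instance as in the usage example above):

    # PyTorch parameters default to torch.float32, so the features must match.
    print(g.ndata['feat'].dtype)           # should be torch.float32
    print(next(model.parameters()).dtype)  # torch.float32 by default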
