Applying Batch Graph Classification Tutorial Code to DD dataset

Hi,

I’ve been trying to adapt the graph classification tutorial code, which was written for the MiniGCDataset, to a simple binary classification task on the DD dataset. I didn’t modify any of the code other than setting the classifier’s output dimension to 2, but the model only predicts class 0 and the training loss hovers around 0.69 (roughly ln 2, i.e. the loss of a constant 50/50 prediction) after one epoch, no matter how many epochs I train for. The network weights also barely change after the first epoch. Any suggestions as to what the problem might be? The code is below. Thanks in advance!

# Sends a message of node feature h.
msg = fn.copy_src(src='h', out='m')

def reduce(nodes):
    """Take an average over all neighbor node features hu and use it to
    overwrite the original node feature."""
    accum = torch.mean(nodes.mailbox['m'], 1)
    return {'h': accum}

class NodeApplyModule(nn.Module):
    """Update the node feature hv with ReLU(Whv+b)."""
    def __init__(self, in_feats, out_feats, activation):
        super(NodeApplyModule, self).__init__()
        self.linear = nn.Linear(in_feats, out_feats)
        self.activation = activation

    def forward(self, node):
        h = self.linear(node.data['h'])
        h = self.activation(h)
        return {'h' : h}

class GCN(nn.Module):
    def __init__(self, in_feats, out_feats, activation):
        super(GCN, self).__init__()
        self.apply_mod = NodeApplyModule(in_feats, out_feats, activation)

    def forward(self, g, feature):
        # Initialize the node features with h.
        g.ndata['h'] = feature
        g.update_all(msg, reduce)
        g.apply_nodes(func=self.apply_mod)
        return g.ndata.pop('h')

class Classifier(nn.Module):
    def __init__(self, in_dim, hidden_dim, n_classes):
        super(Classifier, self).__init__()

        self.layers = nn.ModuleList([
            GCN(in_dim, hidden_dim, F.relu),
            GCN(hidden_dim, hidden_dim, F.relu)
            ])
        self.classify = nn.Linear(hidden_dim, n_classes)

    def forward(self, g):
        # For undirected graphs, in_degree is the same as
        # out_degree.
        h = g.in_degrees().view(-1, 1).float()
        for conv in self.layers:
            h = conv(g, h)
        g.ndata['h'] = h
        hg = dgl.mean_nodes(g, 'h')
        return self.classify(hg)

# Create training and test sets.
dataset = TUDataset('DD')
trainset, testset = random_split(dataset, [942, 236])
# Use PyTorch's DataLoader and the collate function
# defined before.
data_loader = DataLoader(trainset, batch_size=32, shuffle=True,
                         collate_fn=collate)

#Create model
model = Classifier(1, 256, 2)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
model.train()

epoch_losses = []
for epoch in range(80):
    epoch_loss = 0
    for batch_idx, (bg, label) in enumerate(data_loader):
        prediction = model(bg)
        loss = loss_func(prediction, label.squeeze())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        epoch_loss += loss.detach().item()
    epoch_loss /= (batch_idx + 1)
    print('Epoch {}, loss {:.4f}'.format(epoch, epoch_loss))
    epoch_losses.append(epoch_loss)

1. Does this dataset come with node/edge features? The tutorial used node degrees as initial node features, but this should be avoided when real features are available, since degree-based features can easily fail on highly regular graphs.
2. Try replacing torch.mean with torch.sum and dgl.mean_nodes with dgl.sum_nodes. It’s possible that your graphs are relatively regular, so with mean aggregation the representations of many different nodes become identical. A sketch follows this list.
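
For point 2, here is a minimal sketch of the sum-based variants; everything else in the code above stays the same, only the aggregation changes:

def reduce(nodes):
    """Sum over all neighbor node features instead of averaging, so
    neighborhood size is reflected in the node representation."""
    accum = torch.sum(nodes.mailbox['m'], 1)
    return {'h': accum}

# ... and in Classifier.forward, use a sum readout over each graph:
hg = dgl.sum_nodes(g, 'h')  # instead of dgl.mean_nodes(g, 'h')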

The DD dataset doesn’t have node/edge features. In this case, do you have any recommendations as to how I should construct the GCN module?

Hi, I just checked the dataset and it seems that the graphs have node labels with g.ndata['node_labels']. Since the task is graph classification, I presume these node labels can be used as node features for GCN?

In addition, you may find this paper helpful; it benchmarks graph kernel methods on various datasets, including D&D.

g.ndata['node_labels'] turned out to have shape torch.Size([7045, 1]). However, I get an error in my GCN module when I call g.update_all if I use g.ndata['node_labels'] as my initial features. Any suggestions on what the gcn_reduce function should be in this case?

gcn_msg = fn.copy_src(src='h', out='m')
gcn_reduce = fn.sum(msg='m', out='h')

class GCN(nn.Module):
    def __init__(self, in_feats, out_feats, activation):
        super(GCN, self).__init__()
        self.apply_mod = NodeApplyModule(in_feats, out_feats, activation)

    def forward(self, g, feature):
        # Initialize the node features with h.
        g.ndata['h'] = feature
        g.update_all(gcn_msg, gcn_reduce)
        g.apply_nodes(func=self.apply_mod)
        return g.ndata.pop('h')

class Classifier(nn.Module):
    def __init__(self, in_dim, hidden_dim, n_classes):
        super(Classifier, self).__init__()

        self.layers = nn.ModuleList([
            GCN(in_dim, hidden_dim, F.relu),
            GCN(hidden_dim, hidden_dim, F.relu)
            ])
        self.classify = nn.Linear(hidden_dim, n_classes)

    def forward(self, g):
        # Use the node labels instead of node degrees as the initial features.
        # h = g.in_degrees().view(-1, 1).float()
        h = g.ndata['node_labels']
        for conv in self.layers:
            h = conv(g, h)
        g.ndata['h'] = h
        hg = dgl.mean_nodes(g, 'h')
        return self.classify(hg)

What’s the error message?

Here is the traceback

Traceback (most recent call last):
  File "./runner/classifier.py", line 442, in <module>
    prediction = model(bg)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "./runner/classifier.py", line 69, in forward
    h = conv(g, h)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "./runner/classifier.py", line 48, in forward
    g.update_all(gcn_msg, gcn_reduce)
  File "/usr/local/lib/python3.6/dist-packages/dgl/graph.py", line 2747, in update_all
    Runtime.run(prog)
  File "/usr/local/lib/python3.6/dist-packages/dgl/runtime/runtime.py", line 11, in run
    exe.run()
  File "/usr/local/lib/python3.6/dist-packages/dgl/runtime/ir/executor.py", line 1200, in run
    out_map)
  File "/usr/local/lib/python3.6/dist-packages/dgl/backend/pytorch/tensor.py", line 429, in copy_reduce
    return CopyReduce.apply(reducer, graph, target, in_data, out_data, out_size, in_map, out_map)
  File "/usr/local/lib/python3.6/dist-packages/dgl/backend/pytorch/tensor.py", line 387, in forward
    graph, target, in_data_nd, out_data_nd, in_map[0], out_map[0])
  File "/usr/local/lib/python3.6/dist-packages/dgl/kernel.py", line 372, in copy_reduce
    X, out, X_rows, out_rows)
  File "/usr/local/lib/python3.6/dist-packages/dgl/_ffi/_ctypes/function.py", line 190, in __call__
    ctypes.byref(ret_val), ctypes.byref(ret_tcode)))
  File "/usr/local/lib/python3.6/dist-packages/dgl/_ffi/base.py", line 62, in check_call
    raise DGLError(py_str(_LIB.DGLGetLastError()))
dgl._ffi.base.DGLError: [15:09:50] /opt/dgl/src/kernel/cpu/…/binary_reduce_impl.h:112: Unsupported dtype:

I guess g.ndata['node_labels'] does not give you a float32 tensor? If this is the case, try g.ndata['node_labels'].float()
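
For example, the cast would go where the features are first read in Classifier.forward; a sketch of the one-line change:

def forward(self, g):
    # Cast the integer node labels to float32 before message passing;
    # the "Unsupported dtype" error suggests the kernel only accepts
    # floating-point features.
    h = g.ndata['node_labels'].float()
    for conv in self.layers:
        h = conv(g, h)
    g.ndata['h'] = h
    hg = dgl.mean_nodes(g, 'h')
    return self.classify(hg)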

The results weren’t very good using node_labels as the node features. Do you happen to know how I can extract identity features (i.e. an identity matrix) from the graph to use as node features?

I don’t think an identity matrix will make a good feature, since there are many possible orderings of the nodes. If the node labels are discrete, you can try embedding them with nn.Embedding; a sketch follows.
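
Here is a minimal sketch of that idea, reusing the GCN modules defined above. It assumes the labels are integers in [0, num_labels); num_labels and embed_dim are placeholders you would set for your data (e.g. num_labels = number of distinct node labels in DD):

class Classifier(nn.Module):
    def __init__(self, num_labels, embed_dim, hidden_dim, n_classes):
        super(Classifier, self).__init__()
        # Learnable lookup table mapping each discrete label to a dense vector.
        self.embed = nn.Embedding(num_labels, embed_dim)
        self.layers = nn.ModuleList([
            GCN(embed_dim, hidden_dim, F.relu),
            GCN(hidden_dim, hidden_dim, F.relu)
            ])
        self.classify = nn.Linear(hidden_dim, n_classes)

    def forward(self, g):
        # node_labels has shape (N, 1); nn.Embedding expects int64 indices of shape (N,).
        labels = g.ndata['node_labels'].squeeze(-1).long()
        h = self.embed(labels)  # (N, embed_dim) float features
        for conv in self.layers:
            h = conv(g, h)
        g.ndata['h'] = h
        hg = dgl.mean_nodes(g, 'h')
        return self.classify(hg)

Since the embedding weights are trained jointly with the rest of the model, this also sidesteps the float-cast issue above: the features entering the GCN layers are already float32.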