This is my first time working with GNNs. My principal investigator has asked me to look into using GNNs for our classification task. For the past few hours I have been bouncing between two errors: when I fix one, the other appears, and vice versa.
Below is the corresponding code:
""" Full Code: """
import dgl
import torch
import torch.nn as nn
import torch.nn.functional as F
from dgl.nn import GraphConv

class GCN(nn.Module):
    def __init__(self, in_feats, out_feats_1, out_feats_2, num_classes):
        super(GCN, self).__init__()
        self.conv1 = GraphConv(in_feats, out_feats_1)
        self.conv2 = GraphConv(out_feats_1, num_classes)
        self.classify = nn.Linear(out_feats_2, num_classes)  # currently unused in forward

    def forward(self, g, cond_traces, dist_values):  # conductance traces and distance values
        print("(g, cond_traces):", (g, cond_traces))  # debug print of the graph and feature tensor
        h = self.conv1(cond_traces, g)
        print("m")  # debug marker
        h = F.relu(h)
        h = self.conv2(cond_traces, g)
        g.ndata['h'] = h
        return dgl.mean_nodes(g, 'h')  # graph-level readout: mean over node features
model = GCN(in_feats=1, out_feats_1=16, out_feats_2=8, num_classes=4)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(20):
    for batched_graph, labels in train_loader:
        pred = model(batched_graph, batched_graph.ndata['cond'].float(), batched_graph.ndata['dist'].float())
        loss = F.cross_entropy(pred, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
num_correct = 0
num_tests = 0
for batched_graph, labels in test_loader:
    pred = model(batched_graph, batched_graph.ndata['attr'].float())  # note: only two arguments here, and the 'attr' field rather than 'cond'/'dist'
    num_correct += (pred.argmax(1) == labels).sum().item()
    num_tests += len(labels)

print('Test accuracy:', num_correct / num_tests)
When I run this, I get the following error:
AttributeError: 'Tensor' object has no attribute 'local_scope'
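From what I understand of the DGL docs, GraphConv.forward takes the graph as its first argument and the node feature tensor as its second. Here is a minimal sketch of what I believe the expected call looks like (toy graph and random features, not my actual data):

import dgl
import torch
from dgl.nn import GraphConv

g = dgl.graph(([0, 1, 2], [1, 2, 0]))    # toy 3-node cycle so every node has an in-edge
feat = torch.randn(3, 1)                 # one scalar feature per node, matching in_feats=1
conv = GraphConv(1, 16)
h = conv(g, feat)                        # graph first, then features; h has shape (3, 16)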
I found that switching the positions of cond_traces and g, i.e. calling h = self.conv1(g, cond_traces), resolves the first error. However, that change produces another error:
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x4151 and 1x16)
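The 1x4151 in the message makes me suspect that my node feature tensor is not shaped (num_nodes, 1) the way in_feats=1 seems to require, but I am not sure how to confirm that. Below is the sanity check I intend to run on one batch (a hypothetical snippet that assumes my existing train_loader and the 'cond'/'dist' node fields from the code above):

# Hypothetical shape check on a single batch from the existing train_loader.
batched_graph, labels = next(iter(train_loader))
print(batched_graph)                        # node/edge counts of the batched graph
print(batched_graph.ndata['cond'].shape)    # expecting (num_nodes, 1) to match in_feats=1
print(batched_graph.ndata['dist'].shape)
print(labels.shape)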
Any help would be greatly appreciated!