AttributeError: 'Tensor' object has no attribute 'local_scope'

This is my first time working with GNNs; my principal investigator has asked me to investigate using them for our classification task. For the past few hours I've been stuck on two errors: when I fix one, the other appears, and vice versa.

Below is the corresponding code:

""" Full Code: """

from dgl.nn import GraphConv

class GCN(nn.Module):

    def __init__(self, in_feats, out_feats_1, out_feats_2, num_classes):

        super(GCN, self).__init__()

        self.conv1 = GraphConv(in_feats, out_feats_1)

        self.conv2 = GraphConv(out_feats_1, num_classes)

        self.classify = nn.Linear(out_feats_2, num_classes)

   

    def forward(self, g, cond_traces, dist_values): # pass in conductance traces and distance values.

        print("Shape of (g, cond_traces):", (g, cond_traces))

        h = self.conv1(cond_traces, g)

        print("m")

        h = F.relu(h)

        h = self.conv2(cond_traces, g)

        g.ndata['h'] = h

        return dgl.mean_nodes(g, 'h')

model = GCN(in_feats=1, out_feats_1=16, out_feats_2=8, num_classes=4)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(20):
    for batched_graph, labels in train_loader:
        pred = model(batched_graph, batched_graph.ndata['cond'].float(), batched_graph.ndata['dist'].float())
        loss = F.cross_entropy(pred, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

num_correct = 0
num_tests = 0
for batched_graph, labels in test_loader:
    pred = model(batched_graph, batched_graph.ndata['attr'].float())
    num_correct += (pred.argmax(1) == labels).sum().item()
    num_tests += len(labels)

print('Test accuracy:', num_correct / num_tests)

When I run this, I get the following error:

AttributeError: 'Tensor' object has no attribute 'local_scope'

I’ve found that switching the positions of cond_traces and g resolves this, i.e. h = self.conv1(g, cond_traces). However, this results in another error:

RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x4151 and 1x16)

Any help would be greatly appreciated!

You should call it like self.conv(g, feat): the graph first, then the node features.
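
For reference, here is a minimal sketch of the expected calling convention (a toy graph with random features, purely for illustration):

    import dgl
    import torch
    from dgl.nn import GraphConv

    # Toy graph with 3 nodes; self-loops avoid GraphConv's zero-in-degree error.
    g = dgl.graph((torch.tensor([0, 1]), torch.tensor([1, 2])), num_nodes=3)
    g = dgl.add_self_loop(g)

    feat = torch.randn(3, 1)   # shape (num_nodes, in_feats)
    conv = GraphConv(1, 16)

    h = conv(g, feat)          # graph first, then node features
    print(h.shape)             # torch.Size([3, 16])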

As for the matrix shape mismatch, could you share the full call stack so we can see which forward it failed in? Did it fail on self.conv1()? Could you also print out the shape of feat?
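
For example (using your variable names), something like:

    print(cond_traces.shape)                    # inside forward
    print(batched_graph.ndata['cond'].shape)    # or before calling the model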

Full call stack, with self.conv(g, feat):

RuntimeError                              Traceback (most recent call last)
Cell In[27], line 8
      6 for epoch in range(20):
      7     for batched_graph, labels in train_loader:
----> 8         pred = model(batched_graph, batched_graph.ndata['cond'].float())
      9         loss = F.cross_entropy(pred, labels)
     10         optimizer.zero_grad()

File c:\Users\sapat\anaconda3\envs\stranddnaid\lib\site-packages\torch\nn\modules\module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
   1516     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1517 else:
-> 1518     return self._call_impl(*args, **kwargs)

File c:\Users\sapat\anaconda3\envs\stranddnaid\lib\site-packages\torch\nn\modules\module.py:1527, in Module._call_impl(self, *args, **kwargs)
   1522 # If we don't have any hooks, we want to skip the rest of the logic in
   1523 # this function, and just call forward.
   1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1525         or _global_backward_pre_hooks or _global_backward_hooks
   1526         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527     return forward_call(*args, **kwargs)
   1529 try:
   1530     result = None

Cell In[26], line 12
     10 def forward(self, g, cond_traces): # pass in conductance traces and distance values.
     11     print("Shape of (g, cond_traces):", (g, cond_traces))
---> 12     h = self.conv1(g, cond_traces)
     13     print("m")
     14     h = F.relu(h)

File c:\Users\sapat\anaconda3\envs\stranddnaid\lib\site-packages\torch\nn\modules\module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
   1516     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1517 else:
-> 1518     return self._call_impl(*args, **kwargs)

File c:\Users\sapat\anaconda3\envs\stranddnaid\lib\site-packages\torch\nn\modules\module.py:1527, in Module._call_impl(self, *args, **kwargs)
   1522 # If we don't have any hooks, we want to skip the rest of the logic in
   1523 # this function, and just call forward.
   1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1525         or _global_backward_pre_hooks or _global_backward_hooks
   1526         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527     return forward_call(*args, **kwargs)
   1529 try:
   1530     result = None

File c:\Users\sapat\anaconda3\envs\stranddnaid\lib\site-packages\dgl\nn\pytorch\conv\graphconv.py:463, in GraphConv.forward(self, graph, feat, weight, edge_weight)
    461     rst = graph.dstdata["h"]
    462     if weight is not None:
--> 463         rst = th.matmul(weight, rst)
    465 if self._norm in ["right", "both"]:
    466     degs = graph.in_degrees().to(feat_dst).clamp(min=1)

RuntimeError: size mismatch, got input (1), mat (1x16), vec (4859)

DGL Version: 2.2.1 (installed with pip)
Torch Version: 2.1.0

Feat Shape: 1x4581
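
Since in_feats=1, I assume conv1's weight is 1x16, so it looks like my (1, 4581) feature tensor may have the wrong orientation. If each of the 4581 values is meant to be the feature of a different node, I'd presumably need something like:

    cond = batched_graph.ndata['cond'].float().reshape(-1, 1)  # hypothetical: (num_nodes, 1)

(or, if the whole trace is one node's feature vector, in_feats would need to be 4581 instead of 1). This is just a guess on my part.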

Thank you for the quick response, I hope this additional information helps!
