KeyError: 'accum' during backward

Hi there,

I hit a problem when using GAT with DGL. The forward pass works fine, but when I call loss.backward(), the model raises the following error:
File "", line 148, in train
    loss.backward()
File "/mnt/cephfs2/asr/users/ming.tu/anaconda3/envs/py36/lib/python3.6/site-packages/torch/", line 118, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/mnt/cephfs2/asr/users/ming.tu/anaconda3/envs/py36/lib/python3.6/site-packages/torch/autograd/", line 93, in backward
    allow_unreachable=True)  # allow_unreachable flag
File "/mnt/cephfs2/asr/users/ming.tu/anaconda3/envs/py36/lib/python3.6/site-packages/torch/autograd/", line 77, in apply
    return self._forward_cls.backward(self, *args)
File "/mnt/cephfs2/asr/users/ming.tu/anaconda3/envs/py36/lib/python3.6/site-packages/dgl/nn/pytorch/", line 74, in backward
    g.ndata.pop(accum_name)
File "/mnt/cephfs2/asr/users/ming.tu/anaconda3/envs/py36/lib/python3.6/", line 795, in pop
    value = self[key]
File "/mnt/cephfs2/asr/users/ming.tu/anaconda3/envs/py36/lib/python3.6/site-packages/dgl/", line 57, in __getitem__
    return self._graph.get_n_repr(self._nodes)[key]
KeyError: 'accum'

Which code example and which version of DGL are you using?

I’m using DGL v0.3 with Python 3.6. The code example is this one:

If you are using the code in that link, it’s a fork of our repo and is probably quite outdated. Could you please install the latest version of DGL (0.4.1) and git clone our latest repo here?

Some APIs (local_var, local_scope) used in the current edge_softmax implementation were introduced in DGL 0.3.1, so please upgrade your DGL and see if the problem persists.
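For context, local_scope is a context manager that discards any mutations made to the graph's node/edge feature frames when the block exits, so temporary buffers (such as the 'accum' field that edge_softmax uses internally) never leak into, or go missing from, the graph. Conceptually it behaves like the following plain-Python sketch; the Graph class here is a hypothetical stand-in for illustration, not DGL's actual implementation:

```python
from contextlib import contextmanager

class Graph:
    """Hypothetical stand-in for a DGL graph's node-feature storage."""
    def __init__(self):
        self.ndata = {}  # node feature frame

    @contextmanager
    def local_scope(self):
        # Snapshot the current feature frame; any mutations inside the
        # `with` block (e.g. a temporary 'accum' buffer) are discarded
        # when the block exits, restoring the original frame.
        saved = dict(self.ndata)
        try:
            yield self
        finally:
            self.ndata = saved

g = Graph()
g.ndata["h"] = [1, 2, 3]
with g.local_scope():
    g.ndata["accum"] = [0, 0, 0]   # temporary buffer, visible only inside
    assert "accum" in g.ndata
assert "accum" not in g.ndata      # discarded outside the scope
assert g.ndata["h"] == [1, 2, 3]   # original features untouched
```

In the actual DGL 0.3.1+ API the same pattern is written as `with g.local_scope(): ...` on a DGLGraph, which is why the backward-pass bookkeeping for edge_softmax depends on upgrading.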

Thanks for the reply. Do you mean I should update to the latest DGL (0.4, I believe) and use the new GATConv API?