How to do only one forward propagation per epoch and multiple backward propagations on graph?

I want to compute the vector representations by running several graph neural network layers once,
then run the backward pass multiple times, once per batch, to optimize the loss.

What I want to do is shown in code B below.
The main difference is the position of 'user_embedding, item_embedding = model(G)', but code B raises an error:

TypeError: cannot unpack non-iterable NoneType object

Is there any way to do this?
The same error is reported here: link

Thank you!

#code A

model = Model(G, 8, 8, 8)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

for epoch in range(2):
    for batch_data in dataset:
        optimizer.zero_grad()
        # forward pass runs once per batch here
        user_embedding, item_embedding = model(G)
        loss = model.getloss(batch_data)
        loss.backward()
        optimizer.step()

#code B

model = Model(G, 8, 8, 8)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
# forward pass runs only once, before the training loop
user_embedding, item_embedding = model(G)

for epoch in range(2):
    for batch_data in dataset:
        optimizer.zero_grad()
        loss = model.getloss(batch_data)
        loss.backward()
        optimizer.step()

It seems that the model does not return anything?

It is just sample code.
model.getloss(batch_data) returns the loss for each batch of data.
The error does not occur in code A.
But if I write it like code A, the forward graph convolutions run many times just to recompute user_embedding and item_embedding.
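(The internals of Model are not shown in the thread, so here is a minimal, runnable guess at the structure being described. ToyModel, its feature parameters, and the dot-product scoring in getloss are all assumptions; a single linear layer stands in for the real graph convolutions. The point is the control flow: forward() caches the embeddings on the module, and getloss() reuses them for every batch.)

import torch
import torch.nn as nn

class ToyModel(nn.Module):
    # Hypothetical stand-in for the poster's Model: a single linear layer
    # replaces the real graph convolutions, just to make the control flow
    # concrete and runnable.
    def __init__(self, num_users, num_items, dim):
        super().__init__()
        self.user_feat = nn.Parameter(torch.randn(num_users, dim))
        self.item_feat = nn.Parameter(torch.randn(num_items, dim))
        self.conv = nn.Linear(dim, dim)

    def forward(self):
        # Cache the embeddings on the module so getloss() can reuse them
        # for every batch without another forward pass.
        self.user_embedding = self.conv(self.user_feat)
        self.item_embedding = self.conv(self.item_feat)
        return self.user_embedding, self.item_embedding

    def getloss(self, batch_data):
        users, items, labels = batch_data
        # Score each (user, item) pair with the cached embeddings.
        scores = (self.user_embedding[users] * self.item_embedding[items]).sum(dim=1)
        return nn.functional.binary_cross_entropy_with_logits(scores, labels)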

Could you please replace loss.backward() with loss.backward(retain_graph=True) and see if the issue still exists?
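(Background on this suggestion: by default PyTorch frees the autograd graph after the first backward() call, so reusing one forward pass for several backward passes normally fails. A minimal, self-contained toy showing the pattern, unrelated to DGL:)

import torch

w = torch.randn(3, requires_grad=True)
opt = torch.optim.Adam([w], lr=0.01)

# One forward computation, shared by all the batches below.
embedding = w * 2

for batch in range(3):
    opt.zero_grad()
    loss = (embedding - batch).pow(2).sum()
    # Without retain_graph=True, the second iteration would raise
    # "Trying to backward through the graph a second time".
    loss.backward(retain_graph=True)
    opt.step()
    # Note: `embedding` still reflects the value of w at forward time;
    # it is not recomputed after the update. That staleness is the
    # trade-off of reusing one forward pass.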

Thank you, mufei, but the error still exists.

The code now looks like this.

What’s the error message printed?

Traceback (most recent call last):
  File "/Users/cc/Downloads/dgl-examples-pytorch/code1211/hgnn/hgnn.py", line 778, in <module>
    main()
  File "/Users/cc/Downloads/dgl-examples-pytorch/code1211/hgnn/hgnn.py", line 699, in main
    loss.backward(retain_graph=True)
  File "/Users/cc/PycharmProjects/testfolder1127/venvpy3/lib/python3.7/site-packages/torch/tensor.py", line 166, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/Users/cc/PycharmProjects/testfolder1127/venvpy3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 99, in backward
    allow_unreachable=True)  # allow_unreachable flag
  File "/Users/cc/PycharmProjects/testfolder1127/venvpy3/lib/python3.7/site-packages/torch/autograd/function.py", line 77, in apply
    return self._forward_cls.backward(self, *args)
  File "/Users/cc/PycharmProjects/testfolder1127/venvpy3/lib/python3.7/site-packages/dgl/backend/pytorch/tensor.py", line 336, in backward
    = ctx.backward_cache
TypeError: cannot unpack non-iterable NoneType object

Thank you, I will check it.
The first batch gets the right loss,
but the error occurs at the second loss.backward(retain_graph=True).

Could you please give us a minimal snippet of code for reproducing the bug?

OK, I will simplify the code for you. Thank you, mufei.

Hi,

This issue is related to https://github.com/dmlc/dgl/issues/1046.

It should be fixed in the master branch. Could you please try the nightly build if you are using Linux? Currently Windows does not have a nightly build:

pip install --pre dgl-cu100
(if you are using CUDA 10.0)
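(Putting the thread together: with the fix in place, code B plus loss.backward(retain_graph=True) should give one forward pass per epoch and one backward pass per batch. This is a sketch under the same assumptions as the original snippets, i.e. Model, G, and dataset defined as in the question; the forward call sits inside the epoch loop here to match the title's "one forward propagation per epoch".)

model = Model(G, 8, 8, 8)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

for epoch in range(2):
    # Single forward pass per epoch; every batch below reuses these embeddings.
    user_embedding, item_embedding = model(G)
    for batch_data in dataset:
        optimizer.zero_grad()
        loss = model.getloss(batch_data)
        # Keep the autograd graph alive so the next batch can backpropagate
        # through the same forward pass.
        loss.backward(retain_graph=True)
        optimizer.step()

Note that after each optimizer.step() the cached embeddings are stale until the next epoch's forward pass; that staleness is the trade-off this pattern accepts in exchange for fewer forward passes.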