RuntimeError: the derivative for 'unique_dim' is not implemented

Hi! After I added a graph network module to someone else's code, code that previously ran fine now raises this error. The forward pass seems to produce output, but the failure happens in the loss.backward() call. The complete traceback is:

Traceback (most recent call last):
  File "D:\PyCharm 2020.1.1\plugins\python\helpers\pydev\pydevd.py", line 1438, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "D:\PyCharm 2020.1.1\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "F:/杂七杂八的代码/EntityMatcher-master/EntityMatcher-master/run.py", line 74, in <module>
    run_experiment(model_name, dataset_dir, embedding_dir)
  File "F:/杂七杂八的代码/EntityMatcher-master/EntityMatcher-master/run.py", line 32, in run_experiment
    model.run_train(train,
  File "D:\anaconda3\envs\dm\lib\site-packages\deepmatcher\models\core.py", line 183, in run_train
    return Runner.train(self, *args, **kwargs)
  File "D:\anaconda3\envs\dm\lib\site-packages\deepmatcher\runner.py", line 338, in train
    Runner._run(
  File "D:\anaconda3\envs\dm\lib\site-packages\deepmatcher\runner.py", line 249, in _run
    loss.backward()
  File "D:\anaconda3\envs\dm\lib\site-packages\torch\_tensor.py", line 255, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "D:\anaconda3\envs\dm\lib\site-packages\torch\autograd\__init__.py", line 147, in backward
    Variable._execution_engine.run_backward(
RuntimeError: the derivative for 'unique_dim' is not implemented.

My environment is PyTorch 1.9.1.

It seems you asked the same question in RuntimeError: the derivative for 'unique_dim' is not implemented. - PyTorch Forums, and a response has been posted there?

This looks more like a PyTorch issue than a DGL one. +1 for seeking an answer on the PyTorch forum.
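In the meantime, a common workaround is possible if the failure comes from a `torch.unique(..., dim=...)` call in the model: `unique` with `dim` has no backward in PyTorch 1.9, but you can run it on a detached tensor, recover the first-occurrence indices via `return_inverse`, and then index the original tensor so gradients still flow. A minimal sketch of that idea (the tensor names and the duplicated-row setup here are illustrative, not from your code):

```python
import torch

x = torch.randn(5, 3, requires_grad=True)
data = torch.cat([x, x[:2]], dim=0)  # duplicate two rows so unique has work to do

# torch.unique(..., dim=0) has no backward in PyTorch 1.9, so compute the
# unique structure without autograd, then select rows of the original tensor.
with torch.no_grad():
    _, inverse = torch.unique(data, dim=0, return_inverse=True)
    num_unique = int(inverse.max()) + 1
    perm = torch.arange(inverse.size(0))
    first_idx = torch.zeros(num_unique, dtype=torch.long)
    # Scatter positions in reverse order so the earliest occurrence wins,
    # giving the index of the first row for each unique value.
    first_idx.scatter_(0, inverse.flip(0), perm.flip(0))

unique_rows = data[first_idx]  # plain indexing, which is differentiable
unique_rows.sum().backward()   # no "derivative for 'unique_dim'" error
```

This keeps the deduplication in the forward pass while routing gradients through the differentiable indexing op instead of through `unique` itself.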

Thank you for your reminder!