I have a question about the results of RGCN

Why can't I get the results reported in the README? For example, the README lists 75% accuracy on MUTAG, but I only get 67%.

I think it's because of the change in https://github.com/dmlc/dgl/pull/1217; we have not tuned the hyper-parameters accordingly.
You can refer to the older version of the code (https://github.com/dmlc/dgl/pull/1217/files) to reproduce the results we reported.

Thank you for the response. I will try it.

I have changed the code to the older version, but I still cannot reproduce your results.

By the way, if I want to tune the hyper-parameters of the current version of the code, is there anything I can refer to?

Could you try this?

python3 entity_classify.py -d mutag --l2norm 5e-4 --n-bases 30 --testing --gpu 0 --dropout 0.25 -e 80
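If you want to explore the hyper-parameters yourself, a small grid search over the same flags is easy to script. Below is a minimal sketch (the value grids are arbitrary placeholders, not tuned recommendations, and it assumes entity_classify.py is invoked the same way as in the command above):

import itertools
import subprocess

# Example value grids; placeholders only, not tuned recommendations.
dropouts = [0.0, 0.25, 0.5]
l2norms = ["5e-4", "1e-3"]
epochs = [50, 80]

for dropout, l2norm, n_epochs in itertools.product(dropouts, l2norms, epochs):
    cmd = [
        "python3", "entity_classify.py",
        "-d", "mutag",
        "--l2norm", l2norm,
        "--n-bases", "30",
        "--testing",
        "--gpu", "0",
        "--dropout", str(dropout),
        "-e", str(n_epochs),
    ]
    print("Running:", " ".join(cmd))
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    # Print only the tail of the output, where the test accuracy is reported.
    print("\n".join(out.splitlines()[-2:]))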

The test accuracy is 0.6912.

The code I ran is the latest version.

Hi @yutaoming, I also observed a huge accuracy variance on the MUTAG dataset. That's why in the latest example I report both the best and the average accuracy, while previously only the best result was reported. Here are the results of 10 runs:

$ for i in {1..10} ; do python3 examples/pytorch/rgcn-hetero/entity_classify.py -d mutag --l2norm 5e-4 --n-bases 30 --testing --gpu 0 2>&1 | tail -n 2 ; done
Test Acc: 0.7353 | Test loss: 0.5109
Test Acc: 0.6912 | Test loss: 0.6367
Test Acc: 0.6324 | Test loss: 0.6233
Test Acc: 0.7206 | Test loss: 0.5425
Test Acc: 0.7500 | Test loss: 0.6188
Test Acc: 0.7059 | Test loss: 0.5718
Test Acc: 0.7206 | Test loss: 0.6435
Test Acc: 0.6324 | Test loss: 0.5736
Test Acc: 0.6765 | Test loss: 0.6560
Test Acc: 0.7794 | Test loss: 0.5721
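For what it's worth, summarizing a batch of runs like this only takes a few lines of Python; the list below is just the ten accuracies printed above:

from statistics import mean, stdev

# The ten test accuracies from the runs above.
accs = [0.7353, 0.6912, 0.6324, 0.7206, 0.7500,
        0.7059, 0.7206, 0.6324, 0.6765, 0.7794]

print(f"best: {max(accs):.4f}")   # 0.7794
print(f"mean: {mean(accs):.4f}")  # about 0.7044
print(f"std:  {stdev(accs):.4f}")

With single-run accuracies ranging from roughly 0.63 to 0.78, any single run can sit several points away from the average, which is why both the best and the average are listed now.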

Thank you for the response. I want to know whether the training set can be modified without changing the number of training samples. I tried modifying the training set of aifb, and the accuracy increased from 86% to 98%.

It is strange that the accuracy on aifb had been stuck around 86% before I modified the dataset.
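In case it helps, here is roughly how the aifb split can be inspected before changing it; this is a minimal sketch, assuming a recent DGL where dgl.data.rdf.AIFBDataset is available and the masks are stored on the predicted node type, as in the rgcn-hetero entity_classify.py example:

# Sketch: inspect the aifb split before modifying the training set.
# Assumes a recent DGL where dgl.data.rdf.AIFBDataset exists and stores
# train/test masks and labels on the predicted node type.
from dgl.data.rdf import AIFBDataset

dataset = AIFBDataset()
g = dataset[0]
category = dataset.predict_category

train_mask = g.nodes[category].data['train_mask'].bool()
test_mask = g.nodes[category].data['test_mask'].bool()
labels = g.nodes[category].data['labels']

print(f"category node type: {category}")
print(f"train nodes: {int(train_mask.sum())}")
print(f"test nodes:  {int(test_mask.sum())}")
print(f"train/test overlap: {int((train_mask & test_mask).sum())}")
print(f"classes: {dataset.num_classes}")

When re-sampling the training mask with the same size, it is worth checking that no test node ends up in it, since any overlap between the two masks would inflate the reported test accuracy.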