Test accuracy changes heavily after saving and loading weights

Hi, I trained a GraphSAGE model for link classification and evaluated it on the test graph, where it achieved very high validation accuracy. However, after I used th.save(model.state_dict(), "model.pt") to save the weights, reinitialized the model, and ran model.load_state_dict(th.load('model.pt')) followed by model.eval() to evaluate the test graph again, the validation accuracy dropped significantly.
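Roughly, the flow is the sketch below. The model here is only a dummy stand-in for my actual GraphSAGE link classifier (which takes a DGL graph and node features), but the save and reload steps are exactly the ones I use:

```python
import torch as th
import torch.nn as nn

# Dummy stand-in for the real GraphSAGE link-classification model;
# only the save/load mechanics matter for this sketch.
class DummyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 2)

    def forward(self, x):
        return self.linear(x)

model = DummyModel()
# ... training happens here ...
model.eval()
# first evaluation on the test graph -> very high accuracy

# save only the weights
th.save(model.state_dict(), "model.pt")

# after restarting the runtime:
model = DummyModel()                          # re-initialize the same architecture
model.load_state_dict(th.load("model.pt"))    # load the saved weights
model.eval()
# second evaluation on the same test graph -> accuracy has dropped
```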

I have no idea what’s going on. Do you have any suggestions? Thanks in advance.

For a sanity check,

  1. Did you call model.eval() the first time you evaluated your model on the test set before saving weights? (See the sketch after this list.)
  2. Any intrinsic randomness in your model?
  3. Any sampling components in test evaluation?
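For point 1, something like the helper below keeps the test pass deterministic: eval() switches off dropout and batch-norm updates, and no_grad() avoids building the autograd graph. The model(graph, features) call and the variable names are only placeholders; adapt them to your link-classification forward pass.

```python
import torch as th

def evaluate(model, graph, features, labels):
    """Evaluate with dropout disabled and no gradient tracking."""
    model.eval()                          # point 1: disable dropout / batch-norm updates
    with th.no_grad():
        logits = model(graph, features)   # placeholder forward signature
        pred = logits.argmax(dim=1)
        return (pred == labels).float().mean().item()

# Points 2 and 3: if anything in the model or the evaluation is random
# (e.g. negative sampling), fix the seed before each evaluation pass.
th.manual_seed(0)
```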

I called model.eval() late. After saving the weights, I restarted the runtime and then used model.load_state_dict() and then model.eval().

  2. Any intrinsic randomness in your model? Nope.
  3. Any sampling components in test evaluation? Nope.

Cheers

What’s the version of DGL?

The latest version, v0.5.3.

Can you open a GitHub issue and provide a minimal code snippet for reproducing the issue?
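In the meantime, one quick check worth including in that snippet is verifying that the parameters actually survive the save/load round trip. A small helper along these lines (hypothetical; call it with the original model and the reloaded one) would tell us whether the weights themselves or the evaluation code are to blame:

```python
import torch as th

def compare_state_dicts(model_a, model_b):
    """Print any parameter or buffer that differs between two models."""
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    assert sd_a.keys() == sd_b.keys(), "models have different parameter names"
    mismatched = [name for name in sd_a if not th.equal(sd_a[name], sd_b[name])]
    if mismatched:
        print("mismatched entries:", mismatched)
    else:
        print("all parameters and buffers match")

# usage: compare_state_dicts(original_model, reloaded_model)
```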
