Training with randomly generated synthetic features

Thanks mufeili,

But the loss decreases, so it's reasonable to expect the accuracy to increase, right? In which cases can the loss decrease while the accuracy does not increase?

Moreover, when I exclude the softmax function (since I use BCEWithLogitsLoss as the loss function), the computed accuracy (both test and train) jumps above 1.0, which brings me back to my initial problem. That's when I started to suspect that the shape of the labels causes the wrong accuracy calculation.
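An accuracy above 1.0 is a classic symptom of a shape mismatch in the correctness check. This is a minimal sketch (the tensor names are illustrative, not taken from your code) of how comparing a `(N, 1)` prediction tensor against a `(N,)` label tensor silently broadcasts to `(N, N)`:

```python
import torch

N = 4
pred = torch.ones(N, 1)    # shape (N, 1)
labels = torch.ones(N)     # shape (N,)

# (pred == labels) broadcasts to shape (N, N), so the "correct"
# count can far exceed N, and accuracy = correct / N exceeds 1.0.
correct = torch.sum(pred == labels)
print(correct.item())      # 16, i.e. accuracy 16 / 4 = 4.0

# Making the shapes match restores the expected count.
correct = torch.sum(pred.squeeze(1) == labels)
print(correct.item())      # 4, i.e. accuracy 1.0
```

If your labels and logits have different numbers of dimensions, aligning them (e.g. with `squeeze`/`unsqueeze`) before comparing would explain and fix the >1.0 accuracy.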

Ok. I just realized that you are doing binary cross entropy with BCEWithLogitsLoss. In that case, your evaluation function is not correct. It should be something like:

import torch

def evaluate(model, g, nfeats, efeats, labels, mask):
    model.eval()
    with torch.no_grad():
        logits = model(g, nfeats, efeats)
        logits = logits[mask]
        labels = labels[mask]
        # Assume that logits and labels both have shape (N, 1).
        # With BCEWithLogitsLoss the model outputs raw logits, so the
        # decision threshold is logit >= 0 (equivalent to prob >= 0.5).
        pred = torch.zeros_like(labels)
        pred[logits >= 0.] = 1
        correct = torch.sum(pred == labels)
        return correct.item() * 1.0 / len(labels)
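The reason the threshold is 0 rather than 0.5 is that `BCEWithLogitsLoss` takes raw logits, and `sigmoid(0) = 0.5`, so `logit >= 0` is exactly `probability >= 0.5`. A quick standalone check (the example logits are made up for illustration):

```python
import torch

# Hand-picked logits covering negative, zero, and positive values.
logits = torch.tensor([[-2.0], [-0.5], [0.0], [0.5], [2.0]])
probs = torch.sigmoid(logits)

# Thresholding logits at 0 gives the same predictions as
# thresholding sigmoid probabilities at 0.5.
pred_from_logits = (logits >= 0.).float()
pred_from_probs = (probs >= 0.5).float()
print(torch.equal(pred_from_logits, pred_from_probs))  # True
```

This also means you should not apply softmax (or sigmoid) before thresholding at 0 inside `evaluate`, since the comparison already operates on logits.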

Thanks @mufeili,

I couldn’t thank you more for the support. Thanks a lot.
