Bug report: the evaluation function of GAT on PPI

Hi,

Thank you for the great demo code for GAT. I have a question about the following code block from the evaluation function for GAT on the PPI dataset (https://github.com/dmlc/dgl/blob/master/examples/pytorch/gat/train_ppi.py):

import numpy as np
import torch
from sklearn.metrics import f1_score

def evaluate(feats, model, subgraph, labels, loss_fcn):
    with torch.no_grad():
        model.eval()
        # Point the model and every GAT layer at the evaluation subgraph.
        model.g = subgraph
        for layer in model.gat_layers:
            layer.g = subgraph
        output = model(feats.float())
        loss_data = loss_fcn(output, labels.float())
        # Threshold the raw outputs at 0.5 to get binary predictions.
        predict = np.where(output.data.cpu().numpy() >= 0.5, 1, 0)
        score = f1_score(labels.data.cpu().numpy(),
                         predict, average='micro')
        return score, loss_data.item()

I think that the line “predict = np.where(output.data.cpu().numpy() >= 0.5, 1, 0)” should be “predict = np.where(output.data.cpu().numpy() >= 0.0, 1, 0)”, since the output variable comes directly from the GAT output layer without a sigmoid applied: the outputs are logits, and sigmoid(x) >= 0.5 exactly when x >= 0. I would appreciate it if anyone could confirm. Thanks in advance.
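For reference, here is a minimal sketch of that equivalence (the tensor values are just illustrative):

import torch

# Raw GAT outputs are logits; no sigmoid has been applied.
logits = torch.tensor([-2.0, -0.1, 0.0, 0.3, 1.5])

# Thresholding the logits at 0.0 ...
pred_from_logits = (logits >= 0.0).int()

# ... matches thresholding the sigmoid probabilities at 0.5,
# since sigmoid(x) >= 0.5 exactly when x >= 0.
pred_from_probs = (torch.sigmoid(logits) >= 0.5).int()

assert torch.equal(pred_from_logits, pred_from_probs)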

Best,

Thank you for the report; I confirm this is a bug. After replacing 0.5 with 0.0, the test micro-F1 improves from 0.9793 to 0.9836. I will fix it.
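For anyone following along, the fix is a one-line change in evaluate; a sketch of the corrected line:

# Corrected thresholding: the raw outputs are logits, so compare
# against 0.0, which is equivalent to sigmoid(output) >= 0.5.
predict = np.where(output.data.cpu().numpy() >= 0.0, 1, 0)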

Thank you for the response! I appreciate it.

Thank you again for the report. This should be fixed by https://github.com/dmlc/dgl/pull/1534.