@VoVAllen @mufeili I’m having trouble understanding how to use softmax to obtain class probabilities for my binary classification problem.
Here is the code I have:
output = gcn_model(single_test_graphs)
print(type(output), output.shape, output)
This is the output:
<class 'torch.Tensor'>
torch.Size([18, 2])
tensor([[ 0.0212, -0.0212],
[ 0.0545, -0.0615],
[ 0.0212, -0.0212],
[ 0.0545, -0.0615],
[ 0.0212, -0.0212],
[ 0.0212, -0.0212],
[ 0.0212, -0.0212],
[ 0.0153, -0.0150],
[ 0.0212, -0.0212],
[ 0.0212, -0.0212],
[ 0.0212, -0.0212],
[ 0.0212, -0.0212],
[-0.1575, 0.0535],
[ 0.0212, -0.0212],
[ 0.0545, -0.0615],
[ 0.1218, -0.2458],
[-0.5680, 0.3171],
[ 0.0212, -0.0212]], grad_fn=<AddBackward0>)
As you can see, the model output has shape 18x2 (the first dimension is not fixed; it changes across executions). What I need is just 2 probabilities, one for each of the 2 classes.
So I tried to use softmax on the model output like this:
softmax_output = torch.softmax(output, dim=1)
print(type(softmax_output), softmax_output.shape, softmax_output)
And here is the output:
<class 'torch.Tensor'>
torch.Size([18, 2])
tensor([[0.5078, 0.4922],
[0.5535, 0.4465],
[0.5078, 0.4922],
[0.5535, 0.4465],
[0.5078, 0.4922],
[0.5078, 0.4922],
[0.5078, 0.4922],
[0.5078, 0.4922],
[0.5078, 0.4922],
[0.5078, 0.4922],
[0.4781, 0.5219],
[0.5078, 0.4922],
[0.5078, 0.4922],
[0.5078, 0.4922],
[0.5535, 0.4465],
[0.5078, 0.4922],
[0.3002, 0.6998],
[0.5078, 0.4922]], grad_fn=<SoftmaxBackward>)
The output is still a tensor of shape 18x2; the values have changed and each pair now sums to 1, as expected after applying softmax.
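(A quick sanity check confirms the per-row normalization; this is just standard softmax behavior, nothing specific to my model:)

# softmax over dim=1 makes every row sum to 1,
# so this prints a length-18 tensor of ones (up to float precision)
print(softmax_output.sum(dim=1))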
But I still don’t know how to get from this to my desired output: a single pair of probabilities, one for each class label.
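In case it helps to show concretely what I’m after: my guess (and it is only a guess, please correct me if this is the wrong approach) is that the 18 rows are per-node logits, so some graph-level readout is needed before the softmax. A minimal sketch, assuming a mean readout over the node dimension is appropriate:

# collapse the per-node logits into one logit pair for the whole graph
graph_logits = output.mean(dim=0, keepdim=True)   # shape [1, 2]
# softmax over the class dimension then gives one probability per class
graph_probs = torch.softmax(graph_logits, dim=1)  # shape [1, 2], sums to 1
print(graph_probs)

Is that the right way to do it, or should the readout happen inside the model (e.g. with dgl.mean_nodes) before the final layer?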