Thanks @mufeili,
I tried printing the intermediate results, which are shown below:
logits: tensor([[0.0356],
[0.4634],
[0.0000],
...,
[0.0084],
[0.0000],
[0.3191]], grad_fn=<ReluBackward0>)
logp: tensor([0., 0., 0., ..., 0., 0., 0.], grad_fn=<ViewBackward>) torch.Size([5707])
loss tensor(0.6931, grad_fn=<BinaryCrossEntropyWithLogitsBackward>)
Correct: tensor(947) | Labels: tensor([1., 0., 1., ..., 1., 1., 0.]) | Indices: tensor([0, 0, 0, ..., 0, 0, 0])
test_acc: 0.4937434827945777
Correct: tensor(993) | Labels: tensor([0., 1., 1., ..., 1., 1., 1.]) | Indices: tensor([0, 0, 0, ..., 0, 0, 0])
train_acc: 0.5105398457583548
tst_loss tensor(0.6931, grad_fn=<BinaryCrossEntropyWithLogitsBackward>)
Epoch 00000 | Loss 0.6931 | Train Acc 0.5105 | Test Acc 0.4937 | Time(s) 0.0491
***************************************************************************
logits: tensor([[0.1085],
[0.2853],
[0.0793],
...,
[0.5197],
[0.1882],
[0.2970]], grad_fn=<ReluBackward0>)
logp: tensor([0., 0., 0., ..., 0., 0., 0.], grad_fn=<ViewBackward>) torch.Size([5707])
loss tensor(0.6931, grad_fn=<BinaryCrossEntropyWithLogitsBackward>)
Correct: tensor(947) | Labels: tensor([1., 0., 1., ..., 1., 1., 0.]) | Indices: tensor([0, 0, 0, ..., 0, 0, 0])
test_acc: 0.4937434827945777
Correct: tensor(993) | Labels: tensor([0., 1., 1., ..., 1., 1., 1.]) | Indices: tensor([0, 0, 0, ..., 0, 0, 0])
train_acc: 0.5105398457583548
tst_loss tensor(0.6931, grad_fn=<BinaryCrossEntropyWithLogitsBackward>)
Epoch 00001 | Loss 0.6931 | Train Acc 0.5105 | Test Acc 0.4937 | Time(s) 0.0918
***************************************************************************
logits: tensor([[0.0543],
[0.4011],
[0.0465],
...,
[0.2958],
[0.0811],
[0.2602]], grad_fn=<ReluBackward0>)
logp: tensor([0., 0., 0., ..., 0., 0., 0.], grad_fn=<ViewBackward>) torch.Size([5707])
loss tensor(0.6931, grad_fn=<BinaryCrossEntropyWithLogitsBackward>)
Correct: tensor(947) | Labels: tensor([1., 0., 1., ..., 1., 1., 0.]) | Indices: tensor([0, 0, 0, ..., 0, 0, 0])
test_acc: 0.4937434827945777
Correct: tensor(993) | Labels: tensor([0., 1., 1., ..., 1., 1., 1.]) | Indices: tensor([0, 0, 0, ..., 0, 0, 0])
train_acc: 0.5105398457583548
tst_loss tensor(0.6931, grad_fn=<BinaryCrossEntropyWithLogitsBackward>)
Epoch 00002 | Loss 0.6931 | Train Acc 0.5105 | Test Acc 0.4937 | Time(s) 0.1351
***************************************************************************
logits: tensor([[0.0834],
[0.3571],
[0.0044],
...,
[0.0896],
[0.3438],
[0.2082]], grad_fn=<ReluBackward0>)
logp: tensor([0., 0., 0., ..., 0., 0., 0.], grad_fn=<ViewBackward>) torch.Size([5707])
loss tensor(0.6931, grad_fn=<BinaryCrossEntropyWithLogitsBackward>)
Correct: tensor(947) | Labels: tensor([1., 0., 1., ..., 1., 1., 0.]) | Indices: tensor([0, 0, 0, ..., 0, 0, 0])
test_acc: 0.4937434827945777
Correct: tensor(993) | Labels: tensor([0., 1., 1., ..., 1., 1., 1.]) | Indices: tensor([0, 0, 0, ..., 0, 0, 0])
train_acc: 0.5105398457583548
tst_loss tensor(0.6931, grad_fn=<BinaryCrossEntropyWithLogitsBackward>)
Epoch 00003 | Loss 0.6931 | Train Acc 0.5105 | Test Acc 0.4937 | Time(s) 0.1768
Although the accuracy is now between 0 and 1, the loss and the accuracy (both train and test) do not change at all, even when I set the number of epochs to 100. Note that the loss is frozen at exactly 0.6931, and `logp` is all zeros in every epoch. So I still think something is going wrong, but I could not figure out what causes it.
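One observation that may help (a diagnostic sketch, not a claim about your exact code): 0.6931 is exactly ln 2, which is what `BCEWithLogitsLoss` returns for any labels when every input logit is 0, because sigmoid(0) = 0.5. Since your printed `logp` is all zeros, the loss is likely never seeing a real signal. A quick check:

```python
import math

import torch
import torch.nn as nn

# If every value fed to BCEWithLogitsLoss is exactly 0, then sigmoid(0) = 0.5
# and each element contributes -log(0.5) = ln 2 to the mean, regardless of
# the label. The labels here are arbitrary example values.
loss_fn = nn.BCEWithLogitsLoss()
logits = torch.zeros(8)
labels = torch.tensor([1., 0., 1., 0., 1., 1., 0., 0.])
loss = loss_fn(logits, labels)
print(round(loss.item(), 4))  # 0.6931, i.e. ln 2

assert abs(loss.item() - math.log(2.0)) < 1e-6
```

So the question becomes why `logp` ends up all zero; one common cause is a ReLU (visible in your `grad_fn=<ReluBackward0>`) applied to the final layer before the loss, which both clips logits to be non-negative and can kill gradients.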
Thanks all for helping me out.
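One more possible issue, sketched under the assumption that the printed `Indices` come from `torch.max` over the model output: your logits have shape [N, 1], and argmax over `dim=1` of a single-column tensor can only ever return 0, so an accuracy computed from those indices compares the constant 0 against the labels rather than a real prediction. For a single-logit binary classifier, thresholding the sigmoid is the usual alternative:

```python
import torch

# Hypothetical example values; the shape [N, 1] matches the printed logits.
logits = torch.tensor([[-2.0], [0.5], [3.0]])

# Argmax over dim=1 of a single-column tensor is always index 0.
_, indices = torch.max(logits, dim=1)
print(indices)  # tensor([0, 0, 0])

# Threshold the sigmoid of the single logit instead.
preds = (torch.sigmoid(logits.view(-1)) > 0.5).float()
print(preds)  # tensor([0., 1., 1.])
```

If the accuracy is computed from `indices`, it would equal the fraction of 0-labels in the split, which would also explain why it never changes across epochs.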