# Understanding Margin Loss / Log Loss in Link Prediction

I’m following this tutorial for link prediction. In the example, it uses margin loss on `pos_graph` (the graph with true edges) and `neg_graph` (a graph built from randomly sampled edges):

```python
margin_loss = (1 - neg_score.view(n_edges, -1) + pos_score.unsqueeze(1)).clamp(min=0).mean()
```

`neg_score`/`pos_score` contain the dot products of the two adjacent nodes of each edge in `neg_graph` and `pos_graph`, respectively.

Why do we have to unsqueeze `pos_score` to 3D and force the broadcasting?
Can we just add the two 2D tensors together, say
`(1 - neg_score.view(n_edges, -1) + pos_score).clamp(min=0).mean()`?

I also want to try cross-entropy loss. Is this correct?
`(-torch.log(torch.sigmoid(pos_score_train.unsqueeze(1))) - torch.log(1-torch.sigmoid(neg_score_train.view(n_edges_train, -1)))).mean()`

Thank you!

`pos_score` is 1D with shape `(n_edges,)` while `neg_score` is 1D with shape `(n_edges * k,)` where `k` is the number of negative examples to sample per positive example. I reshaped both to 2D so I can compare the difference of every negative example and its corresponding positive example.
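To illustrate the reshaping described above, here is a minimal sketch with random scores (the shapes `n_edges` and `k` are placeholders, not values from the tutorial):

```python
import torch

n_edges, k = 4, 3                      # 4 positive edges, 3 negatives sampled per positive
pos_score = torch.randn(n_edges)       # 1-D, shape (n_edges,)
neg_score = torch.randn(n_edges * k)   # 1-D, shape (n_edges * k,)

# view() groups the k negatives of each positive into one row: (n_edges, k).
# unsqueeze(1) turns pos_score into (n_edges, 1) so it broadcasts across
# that row, pairing every negative with its corresponding positive.
diff = 1 - neg_score.view(n_edges, -1) + pos_score.unsqueeze(1)
margin_loss = diff.clamp(min=0).mean()
```

Without the `unsqueeze`, adding a `(n_edges,)` tensor to a `(n_edges, k)` tensor would only broadcast correctly by accident when `k == n_edges`.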

Yes. Or you can also use `torch.nn.functional.binary_cross_entropy_with_logits`.
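A sketch of how that built-in could be used here, assuming `(E, 1)`-shaped score tensors and made-up sizes; it averages the same per-example log-loss terms as the manual formula, but in a numerically stabler way:

```python
import torch
import torch.nn.functional as F

n_edges, k = 4, 3
pos_score = torch.randn(n_edges, 1)        # logits for positive edges
neg_score = torch.randn(n_edges * k, 1)    # logits for negative edges

# Label positives 1 and negatives 0, then let the built-in compute
# the mean binary cross-entropy directly from the logits.
scores = torch.cat([pos_score, neg_score])
labels = torch.cat([torch.ones_like(pos_score), torch.zeros_like(neg_score)])
loss = F.binary_cross_entropy_with_logits(scores, labels)
```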

Thank you for the quick reply.
I think they’re already 2D when I run the code from the tutorial:

```
pos_score.shape               torch.Size([6877, 1])
neg_score.shape               torch.Size([34385, 1])
pos_score.unsqueeze(1).shape  torch.Size([6877, 1, 1])
```

Can you double-check the dimensions of `pos_score` and `neg_score`?
I put my code here

Thank you!

Sorry, you are right. The `DotProductPredictor` would yield a tensor with shape `(E, 1)`. Your code works in this case.

Thanks.
If `neg_score` and `pos_score` are 2D matrices, then we do not need to unsqueeze `pos_score`, am I right?

I should replace

`margin_loss = (1 - neg_score.view(n_edges, -1) + pos_score.unsqueeze(1)).clamp(min=0).mean()`

with

`margin_loss = (1 - neg_score.view(n_edges, -1) + pos_score).clamp(min=0).mean()`

Is that correct?

I’m a bit confused about the unsqueeze part.

Thank you!

Yes. We should fix this.
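To make the fix concrete, a minimal sketch with `(E, 1)`-shaped scores (shapes are made up; the key point is that broadcasting already lines up each positive with its `k` negatives without any `unsqueeze`):

```python
import torch

n_edges, k = 4, 3
pos_score = torch.randn(n_edges, 1)        # (E, 1), as produced by DotProductPredictor
neg_score = torch.randn(n_edges * k, 1)    # (E * k, 1)

# view() gives (n_edges, k); pos_score is already (n_edges, 1), so the
# addition broadcasts row-wise with no extra unsqueeze needed.
margin_loss = (1 - neg_score.view(n_edges, -1) + pos_score).clamp(min=0).mean()
```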


Thank you for your help!

@pd2020: Have you tried this loss?
`margin_loss = (1 - neg_score.view(n_edges, -1) + pos_score).clamp(min=0).mean()`
I wanted to verify whether this implementation is correct. The documentation still seems to have the old implementation (cc: @BarclayII).