For link prediction inference, how can I score every pair of nodes from new unseen graphs with no positive edges?

Mmm, sorry, but I don't understand. Does that mean my implementation is wrong? It follows the guidelines, but I understand we have updated some things.

In the doc you sent me, calling the conv model's forward function takes G and some node-type feats and updates them. But since I'm using my Model class, and not just my RGCN one like in that doc, and my model's forward function uses both G and neg_G, in a new graph I'm calling just sage, and not forward, right? So it should take G and the initial feats of a given node type. Since I have just one, I guess h = model.sage(G, ent_feats) should be ok.

Am I wrong?

The update of node representations with self.sage(g, x) should be correct. However, you did not use neg_g at all in forward.

Yeah, as I told you, my new graphs have just 1 of the 3 edge types. Should I also use the forward function and the negative graph with that one? For link1 and link3 I cannot build neg_g.

neg_g is only meaningful during training. For applying a trained model to new graphs, you do not need negative samples.
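In other words (a sketch using the method names from this thread; the exact forward signature and the graph names train_g / new_g are assumptions):

# Training: the forward pass scores both the real graph and the negative graph.
pos_score, neg_score = model(train_g, neg_g, node_feats, etype)

# Inference on a new graph: only the representation update is needed, no neg_g.
h = model.sage(new_g, {'ent': new_g.ndata['Feats']})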


Thanks!!!

And sorry, in the code of my model that I shared I missed the second part of the return statement, which, as you said, was the neg_score. Sorry for the inconvenience! Now I understand your correction. The right code was:

return self.pred(g, h, etype), self.pred(neg_g, h, etype)
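For context, the surrounding Model class roughly looks like this (the encoder name and the constructor arguments are assumptions based on this thread; RGCN and HeteroDotProductPredictor are defined elsewhere in my code):

class Model(nn.Module):
    def __init__(self, in_feats, hidden_feats, out_feats):
        super().__init__()
        self.sage = RGCN(in_feats, hidden_feats, out_feats)        # heterograph encoder
        self.pred = HeteroDotProductPredictor(out_feats)           # edge scorer

    def forward(self, g, neg_g, x, etype):
        h = self.sage(g, x)                                        # update node representations
        return self.pred(g, h, etype), self.pred(neg_g, h, etype)  # pos_score, neg_score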

In that case, given the corrections to that part of my model class, is it correct to do the following at prediction time, or should I make any more changes to my forward function first?

# heterograph with one node type
model = Model(in_feats, hidden_feats, out_feats)
feats = G.ndata['Feats']
updated_h = model.sage(G, feats)

I guess that's fine.


I was finally able to put everything together and everything works perfectly. However, the predictions are not really accurate compared to the annotated ones used during training.

For link3, which is symmetric, I got the following after 5 epochs:

Train Loss: 7.385879099369049
Eval AUC: 0.8863544534424268

Then, as I told you, I take a new annotated graph with, for example, this shape:

Graph(num_nodes={'ent': 70},
      num_edges={('ent', 'link1', 'ent'): 74, ('ent', 'link2', 'ent'): 310, ('ent', 'link3', 'ent'): 40})

I save the original links in a list and then remove them from the graph, so I get the following:

link3 = [[0, 61], [38, 63], [25, 40], [41, 42], [42, 43], [44, 64], [44, 45], [50, 66],...]

and

Graph(num_nodes={'ent': 70},
      num_edges={('ent', 'link1', 'ent'): 0, ('ent', 'link2', 'ent'): 310, ('ent', 'link3', 'ent'): 0})

Then I do what you told me and successfully update the node embeddings with:

feats = grafo.ndata['Feats']
updated_feats = model.sage(grafo, {'ent': feats})

and generate the edge-specific embeddings, which in fact are different for every type of link. I store them as tuples with their indices:

nodos_embeddings = []
embeddings = model.pred.etype_project[('ent', 'link3', 'ent')](updated_feats)
for i in range(len(embeddings)):
  nodos_embeddings.append((i, embeddings[i]))

This is when it gets weird. I calculate the similarity among all pairs of embeddings

scores = []
for x in nodos_embeddings:
  for y in nodos_embeddings:
    if x[0] != y[0]:
      score = torch.dot(x[1], y[1])
      scores.append([x[0], y[0], score])

and take those over a threshold:

threshold = 4
final = []

for tupla in scores:
  if tupla[2] > threshold:
    final.append(tupla)
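The same all-pairs scoring and thresholding can also be done without the double loop; a minimal vectorized sketch, assuming embeddings is the (N, d) tensor of link3-specific embeddings computed above:

# score_matrix[u, v] == torch.dot(embeddings[u], embeddings[v]) for every node pair
score_matrix = embeddings @ embeddings.t()
# keep pairs above the threshold, excluding self-pairs on the diagonal
mask = (score_matrix > threshold) & ~torch.eye(len(embeddings), dtype=torch.bool)
pairs = mask.nonzero()  # tensor of [u, v] index pairs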

To check the quality of the predictions I compare them with the list of original annotated links I saved at the beginning (I order them so the first element of each tuple is always smaller than the second one, in order to avoid confusion).
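A minimal sketch of that comparison, assuming final holds the predicted [u, v, score] triples from above and link3 the annotated pairs saved at the beginning (each pair is normalized so the smaller index comes first, since link3 is symmetric):

predicted = {tuple(sorted((u, v))) for u, v, _ in final}
tagged = {tuple(sorted(pair)) for pair in link3}
intersection = predicted & tagged

precision = len(intersection) / len(predicted) if predicted else 0.0
recall = len(intersection) / len(tagged) if tagged else 0.0
f_score = 2 * precision * recall / (precision + recall) if intersection else 0.0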

I have tried different thresholds, but when comparing these final high-similarity pairs with the originally annotated pairs, the precision and recall are always low.

In theory my model's predictions were good, since the AUC for this link was around 90%, meaning that positive edges had higher similarity scores, but now it looks like those connected edges end up dissimilar.

For example, if I use 4.0 as the threshold:

Total tagged: 46
Total predicted: 10
Intersection: 2
Precision: 0.2
Recall: 0.04
F-score: 0.07

If I lower the threshold, recall obviously gets better, but precision doesn't improve, so I guess the real pairs are completely dispersed over the similarity range.

I must be doing something wrong, since those pairs' embeddings were for sure similar during training.

and generate the edge-specific embeddings, which in fact are different for every type of link

Did you also use edge-specific embeddings during training?

I must be doing something wrong, since those pairs' embeddings were for sure similar during training.

It's likely that the model performs poorly because you have all three edge types during training for the node representation update, while you only have one edge type at test time. Using only edges of type link2 during training might help.
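One way to do that (a sketch with hypothetical names train_g and neg_g for the training and negative graphs): run the encoder on a subgraph that only contains link2 edges, while still scoring the link3 edges, so that training and test use the same connectivity for message passing.

mp_g = dgl.edge_type_subgraph(train_g, [('ent', 'link2', 'ent')])  # message passing on link2 only
h = model.sage(mp_g, {'ent': train_g.ndata['Feats']})              # node IDs match the full graph
pos_score = model.pred(train_g, h, ('ent', 'link3', 'ent'))        # score the real link3 edges
neg_score = model.pred(neg_g, h, ('ent', 'link3', 'ent'))          # and the sampled negative ones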

I used this Predictor during training, so I assume I did use edge-specific embeddings, right?

class HeteroDotProductPredictor(nn.Module):

    def __init__(self, out_feats):
        super().__init__()
        self.etype_project = {('ent', 'link1', 'ent'): MLP(out_feats), 
                              ('ent', 'link2', 'ent'): MLP(out_feats), 
                              ('ent', 'link3', 'ent'): MLP(out_feats)}

    def forward(self, graph, h, etype):
        with graph.local_scope():
            graph.ndata['h'] = self.etype_project[etype](h['ent'])
            graph.apply_edges(fn.u_dot_v('h', 'h', 'score'), etype=etype)
            return graph.edges[etype].data['score']

I have just tried training the model on link2 only. After 20 epochs trained on link2 I reach an AUC of 70% on link3. Then, in a graph like this

Graph(num_nodes={'ent': 41},
      num_edges={('ent', 'link1', 'ent'): 0, ('ent', 'link2', 'ent'): 156, ('ent', 'link3', 'ent'): 0})

link3 performance is still poor:

Total tagged: 29
Total predicted: 59
Intersection: 1
Precision: 0.01694
Recall: 0.03448
F-score: 0.02272

Could the problem be somewhere else?

I have no clue then. Perhaps it's just difficult to predict edges of link1 and link3 based on edges of link2.


I don't know; I'm afraid I'm computing something wrong. To check it I've also tried the following. I train the model on the 3 types of links; for link3, for example, the AUC is 90%. Then I take a graph that was also used in the training set, in its initial state without updated feats, and that contains all 3 types of edges. I update the node feats and calculate the dot-product similarity as I did before with the graphs with no link1 or link3 edges. Even in this case, with a model trained on the 3 edge types and a graph containing all of them, when calculating the similarity scores of the link3-specific embeddings among all nodes, the metrics are equally poor, meaning that real edges end up dissimilar. Given that these same edges have 90% AUC, this doesn't make any sense, right? Maybe I'm updating embeddings differently during training and prediction times, or calculating the dot product the wrong way?

Maybe I'm updating embeddings differently during training and prediction times, or calculating the dot product the wrong way?

Yes, probably you are doing something wrong.


Please, could you help me with one last thing? I think I figured it out.

During training, I'm using 'element_wise_dot_product' to score pairs of node embeddings in the message function. Is this a regular dot product between node embeddings? However, during prediction I'm using just the PyTorch dot product. Could that 'element wise' part be the difference?

I'm saying this because, for the same updated and link-specific node embeddings, if I use the element_wise_dot_product through the Predictor:

model.pred(grafo, {'ent': embeddings}, ('ent', 'link3', 'ent'))

the scores for the link3 edges look like this:

tensor([[0.5722],
        [0.5723],
        [0.7552],
        [0.5876],
        [0.5945],
        [0.6044],
        [0.6195],
        [0.6652],
        [0.6525],
        [0.7719]], grad_fn=<GSDDMMBackward>)

However, if I do a regular dot product, torch.dot(emb_u, emb_v), between those same embeddings, I get:

tensor(5.7470, grad_fn=<DotBackward>)
tensor(3.1959, grad_fn=<DotBackward>)
tensor(2.3977, grad_fn=<DotBackward>)
tensor(3.1823, grad_fn=<DotBackward>)
tensor(2.5632, grad_fn=<DotBackward>)
tensor(3.0046, grad_fn=<DotBackward>)
tensor(1.7340, grad_fn=<DotBackward>)
tensor(1.8927, grad_fn=<DotBackward>)
tensor(2.7095, grad_fn=<DotBackward>)
tensor(4.4812, grad_fn=<DotBackward>)

I think this is messing it up. The embeddings are the same, but the similarity scores are different. What makes those dot products different?

How could I do this ‘element_wise_dot_product’ during prediction?

I've tried to use dgl.function.u_dot_v, but it requires an etype (fn.u_dot_v('h', 'h', 'score'), etype=etype) when there are multiple edge types, so I think it cannot be used if I need to score link3 in a graph without link3 edges…

I’ve also tried, following this: dgl.ops — DGL 0.6.1 documentation

dgl.ops.u_dot_v(g, embeddings_u, embeddings_v)

which takes just the graph and the node embeddings of u and v, but I get:

DGLError: We only support gsddmm on graph with one edge type

So I guess these can only be used for existing edges or for homogeneous graphs. Maybe both dot products are the same but the one in the Predictor also uses edge information? In that case, could I add it for prediction?

If that is not the case, could I do something similar to this element-wise dot product, but with every pair of nodes in the new graph, even if they are not connected by an edge?

I've tried to find out how the dot product is done in the u_dot_v function, but couldn't find it.

Thanks again.

How was element_wise_dot_product defined/implemented?

Mmm, it wasn't my function; it's DGL's message passing function. The function is:

graph.apply_edges(fn.u_dot_v('h', 'h', 'score'), etype=etype)

It’s the same as torch.dot. See the example below.

import dgl
import dgl.function as fn
import torch

g = dgl.graph(([0, 1], [1, 2]))
g.ndata['h'] = torch.tensor([[1., 2.], [3., 4.], [5., 6.]])
g.apply_edges(fn.u_dot_v('h', 'h', 'dot_out'))
print(g.edata['dot_out'])
# tensor([[11.],
#         [39.]])

print(torch.dot(g.ndata['h'][0, :], g.ndata['h'][1, :]))
# tensor(11.)
print(torch.dot(g.ndata['h'][1, :], g.ndata['h'][2, :]))
# tensor(39.)

Thanks, now I understand it. Then it must be something else. Do you see something wrong with the following code? Maybe I'm using different embeddings in each case?

feats = grafo.ndata['Feats']
updated_feats = model.sage(grafo, {'ent': feats})
updated_feats = updated_feats.values()
updated_feats = list(updated_feats)[0]
embeddings = model.pred.etype_project[('ent', 'link3', 'ent')](updated_feats)
pred = model.pred(grafo, {'ent': embeddings}, ('ent', 'link3', 'ent'))
print(pred)

This gives me the pos_score for the link3 edges, and it looks like this:

tensor([[213.6073],
        [ 15.9547],
        [ 25.2864],
        [ 16.5195],
        [  9.9416]], grad_fn=<GSDDMMBackward>)

This is used during training. For prediction, however, I calculate the same dot product between link3 edges with the following:

src, dst = grafo.edges(etype=('ent', 'link3', 'ent'))
for i in range(len(src)):
  print(torch.dot(embeddings[src[i], :], embeddings[dst[i], :]))

I get:

tensor(3274.8743, grad_fn=<DotBackward>)
tensor(240.6167, grad_fn=<DotBackward>)
tensor(412.9496, grad_fn=<DotBackward>)
tensor(238.0706, grad_fn=<DotBackward>)
tensor(128.2873, grad_fn=<DotBackward>)

Should these dot values be the same? I really don't see where I could be using the wrong embeddings…

Perhaps model.pred also calls model.pred.etype_project before computing the dot product, so the embeddings you pass to it get projected twice.
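If that is the case, the fix would be to project only once at prediction time; a minimal sketch, reusing the names from your snippets (the (N, N) matrix scores every node pair, connected or not):

updated_feats = list(model.sage(grafo, {'ent': grafo.ndata['Feats']}).values())[0]

# Option 1: let the predictor do the projection (do not project beforehand).
pred = model.pred(grafo, {'ent': updated_feats}, ('ent', 'link3', 'ent'))

# Option 2: project exactly once yourself; plain dot products then match the predictor.
proj = model.pred.etype_project[('ent', 'link3', 'ent')](updated_feats)
all_pair_scores = proj @ proj.t()  # all_pair_scores[u, v] == torch.dot(proj[u], proj[v])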


That was the problem, thanks!
