Score function for link prediction on partially directed graphs

Hi everyone.

I have trained a link prediction model following this tutorial: 5.3 Link Prediction — DGL 0.6.1 documentation

My training heterograph has one node type and 3 edge types. Two of those edge types are symmetric. However, the third edge type is asymmetric, so a score function based on computing the similarity between node embeddings isn't useful, since it doesn't tell me the direction of the link.

Is there any other score function/approach for asymmetric link prediction that could help me add/get information about which is the source node and which the destination node when predicting a link? This is crucial in my case, because if a link of this type exists from node1 to node2, a link of this type cannot possibly exist from node2 to node1.

Thank you all.

You could make the scoring function asymmetric with either an MLP or a bilinear transform with a learnable weight. Specifically, for the latter you could multiply the source node embeddings with a learnable weight matrix:

def forward(self, graph, x):
    # x: (N, d) node embeddings; self.W: (d, d) learnable weight matrix
    graph.ndata['x'] = x @ self.W   # transformed source-side embeddings
    graph.ndata['y'] = x
    # score(u -> v) = (x_u @ W) . x_v, which is asymmetric in general
    graph.apply_edges(fn.u_dot_v('x', 'y', 'score'))
    return graph.edata['score']

In general, if the matrix is asymmetric then the score will also differ by direction.
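
As an aside, the MLP alternative mentioned above could look something like the sketch below: concatenating source and destination embeddings in a fixed order makes the score direction-dependent by construction. The module name and the two-layer architecture are illustrative assumptions, not something from the tutorial:

import torch
import torch.nn as nn

class MLPPredictor(nn.Module):
    def __init__(self, in_feats):
        super().__init__()
        # illustrative two-layer MLP producing a scalar score per edge
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_feats, in_feats), nn.ReLU(),
            nn.Linear(in_feats, 1))

    def score(self, edges):
        # the (src, dst) concatenation order makes the score asymmetric
        h = torch.cat([edges.src['h'], edges.dst['h']], dim=-1)
        return {'score': self.mlp(h)}

    def forward(self, graph, x):
        with graph.local_scope():
            graph.ndata['h'] = x
            graph.apply_edges(self.score)
            return graph.edata['score']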


Thank you for your answer.

Since I have both symmetric and asymmetric edges, could I use both score functions at the same time in my model?

True. For symmetric edge types you can use the normal dot product. For asymmetric edge types you can use the bilinear score.

Thank you so much.

And how should I define self.W in my init function? Also, what does the ‘@’ mean here?

Also, how could I implement both predictors? My model class looks like this:

class Model(nn.Module):
    def __init__(self, in_features, hidden_features, out_features, rel_names):
        super().__init__()
        self.sage = RGCN(in_features, hidden_features, out_features, rel_names)
        self.pred = HeteroDotProductPredictor(out_feats=out_features)
    def forward(self, g, neg_g, x, etype):
        h = self.sage(g, x)
        return self.pred(g, h, etype), self.pred(neg_g, h, etype)

How could I add the second predictor and then use the appropriate one? Would an ‘if’ clause be enough?

Also, my other predictor was edge-type-specific, since my graph is a heterograph with 1 node type and 3 edge types:

class HeteroDotProductPredictor(nn.Module):

    def __init__(self, out_feats):
        super().__init__()
        # NOTE: a plain dict does not register the MLPs' parameters with the
        # module; an nn.ModuleDict (keyed by edge-type name) is needed for that
        self.etype_project = {('ent', 'link1', 'ent'): MLP(out_feats),
                              ('ent', 'link2', 'ent'): MLP(out_feats),
                              ('ent', 'link3', 'ent'): MLP(out_feats)}

    def forward(self, graph, h, etype):
        with graph.local_scope():
            graph.ndata['h'] = self.etype_project[etype](h['ent'])
            graph.apply_edges(fn.u_dot_v('h', 'h', 'score'), etype=etype)
            return graph.edges[etype].data['score']

How could I use the ‘etype_project’ in this second asymmetric Predictor?

Thank you so much.

@BarclayII Really sorry to bother. Could you please help me a bit with this implementation? Thank you so much!

You initialize it as an nn.Parameter.

That is the matrix multiplication operator.
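
Putting those two answers together, a minimal sketch (the dimension d, the module name, and the random initialization are illustrative assumptions, not prescribed by the thread):

import torch
import torch.nn as nn

class BilinearScorer(nn.Module):
    def __init__(self, d):
        super().__init__()
        # learnable (d, d) weight; nn.Parameter registers it with the module
        self.W = nn.Parameter(torch.randn(d, d))

    def transform(self, x):
        # '@' is Python's matrix multiplication operator (PEP 465)
        return x @ self.W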

Yes. Essentially for the undirected edges you use HeteroDotProductPredictor and for directed edges you use the other.

Since you have already initialized an individual MLP for every edge type, you could probably just make HeteroDotProductPredictor edge-type-specific. For instance, your model will look something like:

class Model(nn.Module):
    def __init__(self, ...):
        super().__init__()
        # ...
        self.pred = nn.ModuleDict({
            'link1': HeteroDotProductPredictor(out_feats, 'link1'),
            'link2': HeteroDotProductPredictor(out_feats, 'link2'),
            'link3': HeteroDirectedDotProductPredictor(out_feats, 'link3')})
    def forward(self, g, neg_g, x, etype):
        # ...
        return self.pred[etype](g, h), self.pred[etype](neg_g, h)

# This module becomes edge-type-specific; you create one instance per edge type.
class HeteroDotProductPredictor(nn.Module):
    def __init__(self, out_feats, etype):
        super().__init__()
        self.project = MLP(out_feats)
        self.etype = etype
    def forward(self, graph, h):
        with graph.local_scope():
            graph.ndata['h'] = self.project(h['ent'])
            graph.apply_edges(fn.u_dot_v('h', 'h', 'score'), etype=self.etype)
            return graph.edges[self.etype].data['score']

# Similar, but with a learnable weight matrix that makes the score direction-aware
class HeteroDirectedDotProductPredictor(nn.Module):
    def __init__(self, out_feats, etype):
        super().__init__()
        self.project = MLP(out_feats)
        self.etype = etype
        self.W = nn.Parameter(torch.randn(out_feats, out_feats))
    def forward(self, graph, h):
        with graph.local_scope():
            graph.ndata['h'] = self.project(h['ent'])
            # bilinear transform: score(u -> v) = h_u . (h_v @ W), asymmetric in general
            graph.ndata['x'] = graph.ndata['h'] @ self.W
            graph.apply_edges(fn.u_dot_v('h', 'x', 'score'), etype=self.etype)
            return graph.edges[self.etype].data['score']
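
For completeness, a short usage sketch, assuming the training loop already provides a positive graph g, a negative graph neg_g and input features x as in the tutorial:

# score the directed edge type with its dedicated predictor
pos_score, neg_score = model(g, neg_g, x, 'link3')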

Thank you so much for your help so far. It's really useful. Exactly what I was looking for.

Couple of questions.

During training I’m using the mean of the losses from every batch (every batch is a batched graph where the negative graph is also batched, so no problem here) and AUC for evaluation, also on a batched test graph. With this new directed predictor the loss decreases more or less continuously during training, but AUC is always around 97% after 75 epochs, no matter whether the training loss is 1000 or 3 on that epoch. Could there be a problem here? In the case of link3, for example, with the regular undirected predictor, the training loss always decreases and the eval AUC always increases using the same training method.

And one last thing. My main goal is to predict links in new unseen graphs, so I was using edge-specific embeddings. At prediction time on new graphs (once the model is trained) I was passing these new graphs through the model to update their features and projecting the edge-specific embeddings:

feats = new_graph.ndata['Feats']
# note: the graph passed to the RGCN should be the new graph itself
updated_feats = model.sage(new_graph, {'ent': feats})
updated_feats = list(updated_feats.values())[0]
embeddings = model.pred.etype_project[('ent', 'link1', 'ent')](updated_feats)

Then I computed the similarity among these edge-specific embeddings and took the pairs with the highest scores as predicted edges. The problem is that now the new project method

embeddings = model.pred.project(updated_feats)

is not edge-type-specific in the same way, so I cannot use it with new graphs at prediction time (after training) like before. How could I do something like the above to generate edge-specific embeddings for a new graph in order to calculate the similarity? Or maybe the approach is different now? I just want to score every pair of node embeddings for a given link type. Something like pred = model.pred['link3'](graph, {'ent': updated_feats}) obviously just gives the POS_score, but I need to score every pair even if they are not connected.

Thank you so much.

It may or may not be a problem depending on your actual scenario. For instance, if your ultimate scenario is recommendation, where given one node, you would like to predict the most likely other node, then you probably want to use ranking-based metrics such as MRR.
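
For reference, a minimal MRR sketch. It assumes the negative graph holds k negative samples per positive edge so the scores can be reshaped accordingly; the function name and that layout are assumptions, not part of the original answer:

import torch

def compute_mrr(pos_score, neg_score, k):
    pos = pos_score.view(-1, 1)     # (E, 1): one score per positive edge
    neg = neg_score.view(-1, k)     # (E, k): k negatives per positive edge
    # rank of each positive among its own negatives (1 = ranked first)
    rank = (neg >= pos).sum(dim=1) + 1
    return (1.0 / rank.float()).mean().item()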

97% AUC is usually good enough. However, if your dataset is extremely unbalanced, then even AUC may not be a good metric, and you might want to try F1 score, precision/recall or AUPRC.

If you want to evaluate every pair, then I think the most practical solution would be to compute all the node embeddings first, store them, and then compute the pairwise scores in a dense fashion.

Say that your model uses HeteroDirectedDotProductPredictor where the score computation is bilinear:

class Model(nn.Module):
    def __init__(self, in_features, hidden_features, out_features, rel_names):
        super().__init__()
        self.sage = RGCN(in_features, hidden_features, out_features, rel_names)
        self.pred = HeteroDirectedDotProductPredictor(out_features, 'link3')
    def forward(self, g, neg_g, x):
        # the predictor is already bound to its edge type, so no etype argument
        h = self.sage(g, x)
        return self.pred(g, h), self.pred(neg_g, h)

Then h['ent'] will be the node embeddings; project them with self.pred.project first, since the predictor scores the projected embeddings. Writing h for the projected embeddings, the scores between a single source node u and all destination nodes are

h[u] @ self.pred.W.T @ h.T

(the transpose of W makes this match fn.u_dot_v('h', 'x', 'score') above).
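
Spelled out as a dense computation (a sketch, reusing the updated_feats from your earlier inference snippet; note the projection step):

import torch

with torch.no_grad():
    h = model.pred.project(updated_feats)   # (N, d) projected embeddings
    x = h @ model.pred.W                    # (N, d) destination-side transform
    # scores[u, v] = h[u] . (h[v] @ W), matching fn.u_dot_v('h', 'x', ...)
    scores = h @ x.T                        # (N, N) dense score matrix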

Thank you so much!!!

Right now my AUC is:

def compute_auc(pos_score, neg_score):
    scores = torch.cat([pos_score, neg_score]).numpy()
    labels = torch.cat(
        [torch.ones(pos_score.shape[0]), torch.zeros(neg_score.shape[0])]).numpy()
    return roc_auc_score(labels, scores)

How could I use F1 in a similar way in order to use pos_score and neg_score?

What does ‘T’ represent here?

For the first question, both sklearn and torchmetrics have a function for F1 score. You can look it up.

For the second question, T means transpose.
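
Returning to the F1 question, a minimal sketch with sklearn that mirrors compute_auc above; the 0.5 threshold on sigmoid probabilities is an assumption and should really be tuned on a validation set:

import torch
from sklearn.metrics import f1_score

def compute_f1(pos_score, neg_score, threshold=0.5):
    scores = torch.cat([pos_score, neg_score]).view(-1)
    labels = torch.cat(
        [torch.ones(pos_score.shape[0]), torch.zeros(neg_score.shape[0])])
    # F1 needs hard predictions, so threshold the (sigmoid) probabilities
    preds = (torch.sigmoid(scores) > threshold).long()
    return f1_score(labels.numpy(), preds.numpy())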

But how could I generate it?

Sorry, @BarclayII, one last thing.

In this thread For link prediction inference, how can I score every pair of nodes from new unseen graphs with no positive edges? - #33 by ogggcar I was told that if my inference graph has just link2, I should maybe train just on link2 (and not on link1 and link3). However, now my asymmetric link1 contains this W matrix, which must be updated during training. Any tips on what I should do now in order to get quality predictions for link1?
