Link prediction for new users

Hello,

I am currently building a heterogeneous graph with nodes for users and job listings. An edge is drawn between a user and a job listing whenever an interview has been scheduled between them.

Training on the training data went fine. The problem I'm facing now is that I don't know how to use the trained model to predict edges between a new user and all job listings, and to compute their probabilities.

For a single new user's node, I create a graph with edges connecting this user to all job listings (user: 1, work: 59039), resulting in a graph like this:


import dgl
import torch

# node counts
num_users = 1
num_works = 59039

# edge endpoints: the single user (ID 0) is connected to every work node
src_nodes = torch.zeros(num_works, dtype=torch.long)
dst_nodes = torch.arange(num_works)

# define edge type
edge_type_forward = ('user', 'mendan', 'work')
edge_type_backward = ('work', 'mendan', 'user')

# make graph
graph = dgl.heterograph({
    edge_type_forward: (src_nodes, dst_nodes),
    edge_type_backward: (dst_nodes, src_nodes),  
})


Graph(num_nodes={'user': 1, 'work': 59039},
      num_edges={('user', 'mendan', 'work'): 59039, ('work', 'mendan', 'user'): 59039},
      metagraph=[('user', 'work', 'mendan'), ('work', 'user', 'mendan')])

I then fed this graph into the trained model as follows.

node_features_eval = {'user': new_user_feature, 'work': work_features}
model.eval()
k = 1
with torch.no_grad():  # deactivate gradients during inference
    # negative_graph_eval = construct_negative_graph(graph, k, ('user', 'mendan', 'work'))
    # note: the same graph is passed as both the positive and the negative graph
    pos_score_eval, neg_score_eval = model(graph, graph, node_features_eval, ('user', 'mendan', 'work'))
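
For reference, the construct_negative_graph in the commented-out line is the helper from the DGL user guide on link prediction; roughly, it corrupts each positive edge by sampling k random destination nodes:

import dgl
import torch

def construct_negative_graph(graph, k, etype):
    utype, _, vtype = etype
    src, dst = graph.edges(etype=etype)
    # repeat each source node k times and pair it with random destinations
    neg_src = src.repeat_interleave(k)
    neg_dst = torch.randint(0, graph.num_nodes(vtype), (len(src) * k,))
    return dgl.heterograph(
        {etype: (neg_src, neg_dst)},
        num_nodes_dict={ntype: graph.num_nodes(ntype) for ntype in graph.ntypes})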

However, the prediction scores for all the edges turned out to be identical, as shown below.

tensor([[-18.2616],
[-18.2616],
[-18.2616],
…,
[-18.2616],
[-18.2616],
[-18.2616]])

My question is: is my approach correct?
Or is there another way to predict edge scores for new users?

Thank you in advance.

It seems that you are constructing a brand-new graph for the given new user? When predicting properties of new nodes, GNNs work best if the node has some known connections to existing nodes; otherwise the computation of the user representation usually degrades to an MLP. So, if the user already has some connections to existing nodes, we usually add the user node to the original graph to make predictions.
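
For example, if a new user did already have a few known interviews, attaching them to the existing graph before the forward pass could look roughly like this (train_graph and known_works are placeholder names):

import dgl
import torch

# hypothetical: works the new user already has interviews with
known_works = torch.tensor([10, 42])

# append one new 'user' node to the training graph
g = dgl.add_nodes(train_graph, 1, ntype='user')
new_uid = g.num_nodes('user') - 1

# connect the new user in both directions so messages can reach the user node
src = torch.full_like(known_works, new_uid)
g = dgl.add_edges(g, src, known_works, etype=('user', 'mendan', 'work'))
g = dgl.add_edges(g, known_works, src, etype=('work', 'mendan', 'user'))

# remember to also append the new user's raw features to the 'user' feature tensor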

If your task is indeed predicting links for new users without any connections, then I would recommend computing the positive/negative scores from the users' own features only during training. Say your current model computes the scores like this:

user_repr, work_repr = model(graph, {'user': user_features, 'work': work_features})
score_between_user_i_and_work_j = user_repr[i] @ work_repr[j]

I would do something like

_, work_repr = model(graph, {'user': user_features, 'work': work_features})
score_between_user_i_and_work_j = MLP(user_features[i]) @ work_repr[j]
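
As a rough sketch of that idea (all names here are made up), the user-side MLP could be a small module trained jointly with the rest of the model:

import torch
import torch.nn as nn

class ColdStartScorer(nn.Module):
    """Scores users against works from raw user features only, so it also
    applies to users that have no edges at inference time."""
    def __init__(self, user_in_feats, out_feats, hid_feats=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(user_in_feats, hid_feats),
            nn.ReLU(),
            nn.Linear(hid_feats, out_feats),
        )

    def forward(self, user_feats, work_repr):
        # user_feats: (num_users, user_in_feats) raw user features
        # work_repr:  (num_works, out_feats) GNN representations of the works
        user_repr = self.mlp(user_feats)  # (num_users, out_feats)
        return user_repr @ work_repr.T    # (num_users, num_works) scores

Since the MLP never sees the graph structure, there is no train/test mismatch for users whose degree is 0.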

Thank you for replying!

Yes, my task is predicting links for new users without any connections.
As you mentioned, I tried to predict a new user's connections as follows:

① Train the GNN model as usual.
② Add the new user to the graph used for training, then feed the graph into the RGCN and run a training iteration ⇒ this yields the embedding for the new user.
③ Take the product of the new user's embedding and the embeddings of all the jobs to determine which work is most likely to be associated with the user.

Is this approach acceptable?

What do you mean by “training iteration”? If you meant a forward pass of your model, then that sounds OK. But if your new user doesn't have any connections, while during training your users have connections and get their embeddings computed with a GNN, then there will be a distribution shift between training and testing: the degrees of test users are always 0 while those of training users are not. In this case, I would still recommend computing the embeddings of the users directly from their own features, rather than passing the users through the GNN as well.

OK, I understand now.
Instead of running training, you mean to execute the forward pass once, right?
And then pick out the new user's embedding.

Thank you so much!
The issue I’ve been struggling with has been resolved, and I feel relieved!

Sorry again, I have another question.
I've created the “get_features” function to obtain the embedding of a new user. Through this function, we obtain h_user and h_work. As the new user is positioned at the end, the computation is done using:

h_user[-1] @ h_work.T

Is this the right approach?
After passing through the conv1 and conv2 layers, because the new node does not have any edges, the new user's features always become the same very small number, like 0.0004.

import torch
import torch.nn as nn
import torch.nn.functional as F
import dgl.nn as dglnn

class RGCN(nn.Module):
    def __init__(self, in_feats, hid_feats, out_feats, rel_names):
        super().__init__()

        torch.manual_seed(0)

        # per-node-type feature encoders (input dims are hard-coded:
        # 305 for user features, 300 for work features; in_feats is unused)
        net_seq_user = nn.Sequential(
            nn.Linear(305, hid_feats),
            nn.ReLU(),
            nn.Dropout(0.4),
            nn.Linear(hid_feats, hid_feats)
        )
        
        net_seq_work = nn.Sequential(
            nn.Linear(300, hid_feats),
            nn.ReLU(),
            nn.Dropout(0.4),
            nn.Linear(hid_feats, hid_feats)
        )

        self.encoder_user = net_seq_user
        self.encoder_work = net_seq_work
        
        # NOTE: the relation names passed to HeteroGraphConv must match the
        # graph's canonical edge types; relations without a module are
        # silently skipped during message passing
        self.conv1 = dglnn.HeteroGraphConv({
            rel: dglnn.GraphConv(hid_feats, hid_feats)
            for rel in rel_names}, aggregate='mean')
        
        self.conv2 = dglnn.HeteroGraphConv({
            rel: dglnn.GraphConv(hid_feats, out_feats)
            for rel in rel_names}, aggregate='mean')
    
    def forward(self, graph, inputs):
        # inputs are features of nodes
        h_user = inputs['user']
        h_work = inputs['work']
        
        h_user = h_user.float()
        
        h_user = self.encoder_user(h_user)
        h_work = self.encoder_work(h_work)
        
        h_dict = {'user': h_user, 'work': h_work}  # keys must match node type names
        h = self.conv1(graph, h_dict)
        h = {k: F.relu(v) for k, v in h.items()}
        h = self.conv2(graph, h)
        return h
    
    def get_features(self, graph, inputs):
        # encoder-only path: computes representations from raw features
        # without message passing, so it also works for a new user with no edges
        h_user = inputs['user']
        h_work = inputs['work']

        h_user = h_user.float()

        h_user = self.encoder_user(h_user)
        h_work = self.encoder_work(h_work)

        return h_user, h_work
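
For completeness, this is roughly how I use it at inference time (a sketch; new_user_feature is assumed to be a single row with the new user's raw features):

model.eval()
with torch.no_grad():
    # the new user's raw features are appended as the last 'user' row
    feats = {'user': torch.cat([user_features, new_user_feature], dim=0),
             'work': work_features}
    h_user, h_work = model.get_features(graph, feats)
    scores = h_user[-1] @ h_work.T                    # one score per work
    top_scores, top_works = torch.topk(scores, k=10)  # e.g. the 10 best matches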

Thank you in advance.

How was your loss function computed?
