Creating computational blocks that ignore future edges (Sequential graph)

Hi,

I am building a recommender system using DGL. Currently, there is little temporality involved in generating recommendations for users. However, I would like to model the task in a sequential fashion, i.e. use all K previous items at time t to predict the next items that a user might interact with, in the fashion of an RNN.

To do so, I would like all edges that are more recent than a given training edge to be excluded when I create the computational blocks for that edge (the excluded future edges would serve as the ground truth).

Currently, even if I incorporate sequential message-passing techniques like LSTM aggregation, "future edges" will still be included in the message passing, since they are part of the sampling graph.
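To make the intent concrete, here is a plain-Python sketch (not DGL API calls; the `(src, dst, ts)` edge representation is an assumption for illustration — in DGL the timestamps would live in `g.edata`) of the filtering I would like the sampler to perform per training edge:

```python
# Hypothetical sketch of per-edge temporal filtering: before building the
# computational blocks for a training edge at time t, keep only edges that
# are strictly older than t. Edges at or after t are excluded and serve as
# ground truth.

def past_edges(edges, t):
    """Return the edges whose timestamp is strictly before the cutoff t.

    edges: iterable of (src, dst, ts) tuples
    t: timestamp of the training edge being predicted
    """
    return [(u, v, ts) for (u, v, ts) in edges if ts < t]

# Toy interaction history: user 0 interacts with items 10, 11, 12 over time.
edges = [(0, 10, 1), (0, 11, 2), (0, 12, 3)]

# When training on the edge at t=3, only earlier interactions may
# participate in message passing; the t=3 edge itself is the label.
print(past_edges(edges, 3))  # → [(0, 10, 1), (0, 11, 2)]
```

The difficulty is that this cutoff differs for every training edge in a minibatch, so a single static subgraph shared by the whole batch cannot express it.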

From browsing the various DGL functionalities, I gather that this may not be a trivial task. Would anyone have advice on how to do this?

Thanks in advance!

Hi,

It seems that you are doing minibatch training with neighbor sampling, where the set of neighbors allowed to be sampled depends on a feature of each example in the minibatch (here, its timestamp). To my knowledge, there is no easy and efficient solution in DGL, and this sounds like a reasonable feature request.

Could you give a pointer to a paper your model will be based on? I could then evaluate the best way to approach this.

Thanks!

Indeed, this is a great summary of the situation. My model is inspired by a few different papers.
Notably, the “sequential graph” idea is inspired by Graph Neural Networks in Recommender Systems: A Survey, section 5.

I did not actually implement all of the ideas presented there, but these papers use sequential graphs in some way: Memory Augmented Graph Neural Networks for Sequential Recommendation; Personalizing Graph Neural Networks with Attention Mechanism for Session-based Recommendation.

Thanks!

