Node dropout in contrastive learning

I am trying to implement the node dropout described in the paper "Self-supervised Graph Learning for Recommendation". Should I modify the message passing in the conv module (e.g., GATConv), or should I create a new graph with the nodes dropped before training? Any advice is appreciated!

Does DGL provide any support for data augmentation or contrastive learning?

Thank you all in advance.

For the graph-level approach, you can explicitly remove the nodes with remove_nodes. For the module-level approach, depending on the GNN you are using, you may either multiply the input node features by a binary mask or need to modify the module directly.
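As a rough sketch of the two approaches, here is a library-agnostic version in plain Python (the function names are illustrative, not DGL APIs; in DGL, the graph-level variant corresponds to calling `dgl.remove_nodes` on a copy of the graph):

```python
import random

def drop_nodes(num_nodes, edges, drop_prob, seed=0):
    """Graph-level augmentation: sample nodes to drop and
    remove all edges incident to a dropped node."""
    rng = random.Random(seed)
    kept = [v for v in range(num_nodes) if rng.random() >= drop_prob]
    kept_set = set(kept)
    # keep only edges whose endpoints both survive
    new_edges = [(u, v) for (u, v) in edges
                 if u in kept_set and v in kept_set]
    return kept, new_edges

def mask_node_features(features, drop_prob, seed=0):
    """Feature-level augmentation: zero out the whole feature row
    of each dropped node, leaving the graph structure intact."""
    rng = random.Random(seed)
    return [row if rng.random() >= drop_prob else [0.0] * len(row)
            for row in features]
```

For contrastive learning you would typically call one of these twice with different seeds to produce two augmented views of the same graph.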

It might be a good idea to have some built-in data augmentation functions in DGL. Thank you for the suggestion!


Thank you so much for your advice. This is really helpful!
