Considering the tutorial reference and my code, how should I specify the `graph_sampler` for the `DataLoader` so that only non-self-loop edges are sampled as candidate (seed) edges, while the self-loops for every node output by the dataloader are always kept?
Is training with supervision edges helpful for the link prediction task, as shown here and here? I haven't seen this in any DGL examples.
So a set of supervision edges are the edges used to compute the loss, and the only edges used for backpropagation during training in the transductive setting. But what are the supervision edges in the inductive val/test splits for? Are those the edges used for accuracy calculation on those splits?
What would the `graph_sampler` look like for a transductive link prediction split with supervision edges for train/val/test? A code example would be helpful.
Has anyone seen a time-based split in any papers or implementations, where the train edges are included in the val graph and the train+val edges in the test graph, for transductive link prediction as in the above reference?
Thanks!