DistDGL SAGE doesn't seem to be training inductively

Some time back I asked a question about inductive vs. transductive training on DistDGL, and ultimately the answer was that DistDGL does inductive training.

That doesn’t seem to be the case, however, at least judging from the printout of the sampled blocks for a run of ogbn-products:

0 [Block(num_src_nodes=1824360, num_dst_nodes=829139, num_edges=20727375), Block(num_src_nodes=829139, num_dst_nodes=196615, num_edges=1965710)]
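
For context, a printout like this typically comes from printing the list of blocks yielded by a neighbor-sampling data loader. Below is a minimal single-process sketch of that pattern; the actual run uses DistDGL with a DistGraph and its distributed loader, and the fanouts and batch size here are illustrative assumptions, not the values from the run above.

```python
import dgl
from ogb.nodeproppred import DglNodePropPredDataset

dataset = DglNodePropPredDataset('ogbn-products')
g, labels = dataset[0]
train_nids = dataset.get_idx_split()['train']

# Two-layer neighbor sampling; fanouts and batch size are illustrative.
sampler = dgl.dataloading.MultiLayerNeighborSampler([25, 10])
dataloader = dgl.dataloading.NodeDataLoader(
    g, train_nids, sampler, batch_size=1024, shuffle=True, drop_last=False)

for step, (input_nodes, seeds, blocks) in enumerate(dataloader):
    # The seeds are all training nodes, but each block's source nodes also
    # contain the sampled neighbors, which can be validation/test nodes.
    print(step, blocks)
    break
```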

ogbn-products only has around 200k nodes that are designated as training nodes.
This may come down to which definition of “inductive” I’m working with, but my
understanding is that in inductive training, the graph used for training should
contain only the training nodes. Here, the first block has 1.8 million source
nodes, which means it must include non-training nodes. By that definition,
DistDGL SAGE training is not inductive but rather transductive.
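
For what it’s worth, if one wants training to be inductive in this strict sense, a common recipe (not what the DistDGL example does) is to train on the subgraph induced by the training nodes and only bring the full graph back at inference time. A sketch, reusing `g` and `train_nids` from the snippet above:

```python
import dgl

# Strictly inductive setup: training only ever sees the training-node-induced
# subgraph; validation/test nodes are reintroduced at inference time.
train_g = dgl.node_subgraph(g, train_nids)

# ...sample from train_g and train the model as usual...

# At evaluation time, run the trained model on the full graph g, so that
# validation/test nodes can aggregate messages from all of their neighbors.
```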

Is is correct, or is there a misunderstanding somewhere? Thank you.

This depends on your definition of inductive. “Inductive” usually refers to whether the trained model can be applied to new nodes; it doesn’t mean that non-training nodes cannot be used during training. GNNs are a good method for semi-supervised learning, and in the semi-supervised setting, training takes advantage of data points from the validation and test sets.
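
To make the distinction concrete: features of validation/test nodes do flow into message passing as sampled neighbors, but the loss is computed only on the training seeds. A rough sketch, assuming `model`, `opt`, `labels`, and the `dataloader` from the earlier snippet:

```python
import torch.nn.functional as F

for input_nodes, seeds, blocks in dataloader:
    x = g.ndata['feat'][input_nodes]        # may include val/test node features
    y = labels[seeds].squeeze().long()      # labels of the training seeds only
    y_hat = model(blocks, x)                # forward pass over the sampled blocks
    loss = F.cross_entropy(y_hat, y)        # loss touches training labels only
    opt.zero_grad()
    loss.backward()
    opt.step()
```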
