I am writing my graduate project with DGL and I find it amazing.
I noticed that in the GAT tutorial, the training process is the same as in the GCN tutorial, i.e. transductive:
# F is torch.nn.functional; net is the GAT model, features/labels/mask cover the whole graph
logits = net(features)                       # forward pass over all nodes
logp = F.log_softmax(logits, 1)              # per-node log-probabilities
loss = F.nll_loss(logp[mask], labels[mask])  # loss restricted to masked (training) nodes
in which, if I understand correctly, the whole graph is fed into the network but the loss only comes from the training nodes.
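Just to make sure I read this right, here is a tiny numpy-only illustration of what I think the mask does (the numbers are made up for the example): every node gets logits, but only the masked rows contribute to the loss.

```python
import numpy as np

# Toy data: 4 nodes, 2 classes. In the tutorial these would come from net(features).
logits = np.array([[2.0, 0.5],
                   [0.1, 1.5],
                   [1.0, 1.0],
                   [0.2, 2.2]])
labels = np.array([0, 1, 0, 1])
train_mask = np.array([True, True, False, False])  # only nodes 0 and 1 are training nodes

# log-softmax over the class dimension (same as F.log_softmax(logits, 1))
logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

# negative log-likelihood averaged over the masked rows only (like F.nll_loss)
loss = -logp[train_mask, labels[train_mask]].mean()
```

So nodes 2 and 3 influence the forward pass (through message passing in the real model) but never the loss.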
However, if I want to train in an inductive way, that is, to slice the nodes into training/dev/test sets and feed each split into the model separately (using masks, without generating a separate graph for each split), is there a way I could do that? P.S. I want to operate on a DGLHeteroGraph.
Thanks to anyone who can give me any advice!