How to do inductive learning with the graph?

I am writing my graduate project with DGL and I find it very amazing.
I notice that in the GAT tutorial, the training process is the same as in the GCN tutorial, i.e. transductive:

logits = net(features)
logp = F.log_softmax(logits, 1)
loss = F.nll_loss(logp[mask], labels[mask])

in which, if I get this right, the whole graph is fed into the network but the loss only comes from the training nodes.
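That masking idea is independent of DGL itself. Here is a small NumPy sketch (toy logits, labels, and mask invented for illustration) showing that the loss is the negative log-likelihood averaged over only the masked training nodes, even though the forward pass produced logits for every node:

```python
import numpy as np

def log_softmax(logits):
    # subtract the row max for numerical stability
    z = logits - logits.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def masked_nll_loss(logits, labels, mask):
    # negative log-likelihood averaged over the masked (training) nodes only
    logp = log_softmax(logits)
    return -logp[mask, labels[mask]].mean()

# toy example: 4 nodes, 3 classes; only nodes 0 and 2 are training nodes
logits = np.array([[2.0, 0.0, 0.0],
                   [0.0, 2.0, 0.0],
                   [0.0, 0.0, 2.0],
                   [2.0, 0.0, 0.0]])
labels = np.array([0, 1, 2, 0])
mask = np.array([True, False, True, False])
print(float(masked_nll_loss(logits, labels, mask)))
```

Nodes 1 and 3 contribute nothing to the loss, which is exactly what `F.nll_loss(logp[mask], labels[mask])` does in the tutorial snippet.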
However, if I want to go in an inductive way, meaning slicing the nodes into training/dev/test sets and feeding them into the model separately (using masks, without generating a single graph for each training/dev/test set), is there a way I could do that? P.S. I want to operate on a DGLHeteroGraph.
Thanks to anyone who can give me any advice!

In transductive learning, we have access to both the node features and the topology of the test nodes, while inductive learning requires testing on graphs unseen during training. GAT can be used for both transductive and inductive learning. The tutorial mostly works with transductive learning. For inductive learning, i.e. when the test graphs are unseen in training, see the example here.
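The reason a GNN can be inductive at all is that its learned parameters are shared across nodes and do not depend on the graph's size or topology. A minimal NumPy sketch (a toy mean-aggregation layer and a made-up weight matrix standing in for trained parameters) of applying the same weights to an unseen, differently-sized graph:

```python
import numpy as np

def gcn_layer(adj, feats, W):
    # mean-aggregate neighbor features, then apply the shared weight W;
    # W is per-feature, not per-node, which is what makes inductive use possible
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0
    return (adj @ feats) / deg @ W

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))  # toy stand-in for trained parameters

# "training" graph: 3 nodes on a path
adj_train = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
h_train = gcn_layer(adj_train, rng.normal(size=(3, 4)), W)

# unseen "test" graph: 5 fully connected nodes; the same W applies unchanged
adj_test = np.ones((5, 5)) - np.eye(5)
h_test = gcn_layer(adj_test, rng.normal(size=(5, 4)), W)
print(h_train.shape, h_test.shape)
```

In DGL terms, you would call the same trained model on a different `DGLGraph` at test time; nothing in the parameters ties you to the training graph.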

Thanks for your answer! I have since found the subgraph() API, which lets me slice out nodes using masks.
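For anyone landing here later: what subgraph extraction does under the hood is build the induced subgraph on the masked nodes and relabel them to consecutive IDs. A NumPy sketch of that operation (toy edge lists; DGL's own `subgraph()` does this for you, including for heterographs):

```python
import numpy as np

def induced_subgraph(src, dst, mask):
    # keep only edges whose endpoints are both in the masked node set,
    # and relabel the kept nodes to consecutive ids
    keep_nodes = np.flatnonzero(mask)
    relabel = -np.ones(mask.size, dtype=int)
    relabel[keep_nodes] = np.arange(keep_nodes.size)
    edge_keep = mask[src] & mask[dst]
    return relabel[src[edge_keep]], relabel[dst[edge_keep]], keep_nodes

# toy graph with 5 nodes; train on nodes {0, 2, 3}
src = np.array([0, 1, 2, 3, 4])
dst = np.array([2, 2, 3, 4, 0])
train_mask = np.array([True, False, True, True, False])
sub_src, sub_dst, orig_ids = induced_subgraph(src, dst, train_mask)
print(sub_src, sub_dst, orig_ids)
```

Note the caveat this implies: edges between a kept node and a dropped node disappear, so the training subgraph's neighborhoods can differ from those in the full graph.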