In the code snippet at lines 216-226 of /examples/pytorch/gcmc/train_sampling.py, I don't understand why the following function is needed for multi-process training:
def prepare_mp(g):
    """
    Explicitly materialize the CSR, CSC and COO representation of the given graph
    so that they could be shared via copy-on-write to sampler workers and GPU
    trainers.

    This is a workaround before full shared memory support on heterogeneous graphs.
    """
    for etype in g.canonical_etypes:
        g.in_degree(0, etype=etype)
        g.out_degree(0, etype=etype)
        g.find_edges([0], etype=etype)
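
For reference, here is a minimal sketch of how I understand this function is meant to be used (the toy heterograph and the run_trainer worker are made up for illustration, and it assumes a fork-based start method such as the Linux default): the sparse formats are materialized once in the parent process, so the forked sampler workers and GPU trainers inherit the cached CSR/CSC/COO arrays via copy-on-write instead of each rebuilding them.

import dgl
import torch
import torch.multiprocessing as mp

def run_trainer(proc_id, g):
    # The forked worker sees the formats already materialized by prepare_mp,
    # so reading degrees/edges here does not trigger a per-process rebuild.
    print(proc_id, g.in_degrees(etype='rates'))

if __name__ == '__main__':
    # Toy heterograph just for illustration.
    g = dgl.heterograph({
        ('user', 'rates', 'movie'): (torch.tensor([0, 1, 2]), torch.tensor([1, 0, 1])),
        ('movie', 'rated-by', 'user'): (torch.tensor([1, 0, 1]), torch.tensor([0, 1, 2])),
    })
    prepare_mp(g)  # materialize CSR/CSC/COO once, in the parent process
    procs = [mp.Process(target=run_trainer, args=(i, g)) for i in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()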