transition_prob in NeighborSampler causes Segmentation fault (core dumped) when using GPU

When I move a graph to the GPU, the following code causes a segmentation fault. Here is a minimal example:

```python
import dgl
import torch
from dgl.contrib.sampling.sampler import NeighborSampler

g1 = dgl.DGLGraph()
g1.add_nodes(10)

for i in range(10):
    for j in range(10):
        g1.add_edge(i, j)
        g1.add_edge(j, i)

g = dgl.DGLGraph(g1, readonly=True)
g.readonly()
g.edata['w'] = torch.randn(g1.number_of_edges())
g.to(torch.device('cuda:1'))
for nf in NeighborSampler(g, 2, 2, shuffle=True, num_hops=1, transition_prob='w'):
    print(nf)
    input()
```

If I remove `transition_prob='w'` from the NeighborSampler call, or delete `g.to(torch.device('cuda:1'))`, everything works fine.
Could someone help me?
Thanks in advance!

The sampling is performed on the CPU, so you should probably remove `g.to(torch.device('cuda:1'))` and move the features of the NodeFlow to the GPU once it is constructed.

Thanks for your prompt reply, it really helps me!