transition_prob in NeighborSampler causes "Segmentation fault (core dumped)" when using GPU

When I move the graph's edge data to the GPU, the code below causes a segmentation fault. Here is a minimal example.

import dgl
import torch
from dgl.contrib.sampling.sampler import NeighborSampler

g1 = dgl.DGLGraph()
g1.add_nodes(10)
for i in range(10):
    for j in range(10):
        g1.add_edge(i, j)
        g1.add_edge(j, i)

g = dgl.DGLGraph(g1, readonly=True)
g.edata['w'] = torch.randn(g1.number_of_edges()).to(torch.device('cuda:1'))
for nf in NeighborSampler(g, 2, 2, shuffle=True, num_hops=1, transition_prob='w'):
    pass

If I remove transition_prob='w' from the NeighborSampler call, or drop the .to(torch.device('cuda:1')) so the weights stay on CPU, everything works fine.
Could someone help me?
Thanks in advance!

Sampling is performed on the CPU, so you should remove the .to(torch.device('cuda:1')) call, keep the edge weights on the CPU, and move the features of the NodeFlow to the GPU after it has been constructed.
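The pattern suggested above can be sketched as follows. This is a minimal illustration of where the tensors should live, not the actual sampler: the NeighborSampler call is replaced by a stand-in index batch, and the device falls back to CPU when a second GPU is unavailable.

```python
import torch

# Fall back to CPU if 'cuda:1' is not available (hypothetical setup).
device = torch.device('cuda:1' if torch.cuda.device_count() > 1 else 'cpu')

num_edges = 200
w = torch.randn(num_edges)  # transition probabilities stay on the CPU for sampling

# Stand-in for the sampler's output: indices of the edges drawn for one batch.
# In real code these would come from the NodeFlow produced by NeighborSampler.
sampled_edges = torch.randint(0, num_edges, (16,))

# Only after sampling, move the features of the sampled subgraph to the GPU.
batch_w = w[sampled_edges].to(device)
```

The key point is that everything the sampler reads (the graph structure and the 'w' edge data) remains on the CPU; only the per-batch features are transferred to the GPU for training.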

Thanks for your prompt reply, it really helps!