Using distributed RGCN produces an error

I tried the distributed program under example/pytorch/rgcn/experimental several times, following the instructions in the README step by step, but an error occurred while the program was running. I'm not sure what caused it, especially since the distributed GraphSAGE program runs correctly on the same setup.

Traceback (most recent call last):
File "/root/miniconda3/lib/python3.8/multiprocessing/queues.py", line 239, in _feed
obj = _ForkingPickler.dumps(obj)
File "/root/miniconda3/lib/python3.8/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/reductions.py", line 367, in reduce_storage
df = multiprocessing.reduction.DupFd(fd)
File "/root/miniconda3/lib/python3.8/multiprocessing/reduction.py", line 198, in DupFd
return resource_sharer.DupFd(fd)
File "/root/miniconda3/lib/python3.8/multiprocessing/resource_sharer.py", line 48, in __init__
new_fd = os.dup(fd)
OSError: [Errno 9] Bad file descriptor
node paper has data feat
Traceback (most recent call last):
File "entity_classify_dist.py", line 644, in <module>
main(args)
File "entity_classify_dist.py", line 584, in main
run(args, device, (g, n_classes, train_nid, val_nid, test_nid, labels, all_val_nid, all_test_nid))
File "entity_classify_dist.py", line 411, in run
model = model.to(device)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 987, in to
return self._apply(convert)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 639, in _apply
module._apply(fn)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 639, in _apply
module._apply(fn)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 639, in _apply
module._apply(fn)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 662, in _apply
param_applied = fn(param)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 985, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "/root/miniconda3/lib/python3.8/site-packages/torch/cuda/__init__.py", line 221, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

It seems that you have installed PyTorch without CUDA support. Could you verify this by checking the value of torch.cuda.is_available()?
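A quick way to check this, and to avoid the crash at model.to(device) in the meantime, is to select the device conditionally (a minimal sketch; the print output will depend on your local PyTorch build):

```python
import torch

# True only if this PyTorch build was compiled with CUDA support
# and a GPU is visible to the process.
print(torch.__version__)
print(torch.cuda.is_available())

# Common fallback pattern: use the GPU when available, otherwise CPU,
# so model.to(device) does not raise "Torch not compiled with CUDA enabled".
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
```

If torch.cuda.is_available() returns False, reinstall a CUDA-enabled PyTorch build matching your driver version (the CPU-only wheel is the usual culprit).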

Thanks, I overlooked that when setting up the environment.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.