Error when running DGL and torch on GPU

I am working on a model using torch and DGL on GPU, but this error appears after 1 epoch.
I reduced the batch size to 4 and set os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512", but it does not help.
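One thing worth checking (a sketch, not a fix guaranteed to resolve this): `PYTORCH_CUDA_ALLOC_CONF` is read when PyTorch's CUDA caching allocator initializes, i.e. at the first CUDA call, so setting it inside the script after torch has already touched the GPU has no effect. Setting it before `import torch` (or in the shell that launches the script) is the reliable way:

```python
import os

# Must be set before the CUDA caching allocator initializes --
# most reliably before `import torch`, or exported in the shell:
#   PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 python train.py
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

# import torch  # only import torch after the variable is set
```

Even with this set correctly, the error above shows 4.78 GiB already allocated on a 6 GiB card, so fragmentation tuning alone may not be enough.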

RuntimeError: CUDA out of memory. Tried to allocate 352.00 MiB (GPU 0; 6.00 GiB total capacity; 4.78 GiB already allocated; 0 bytes free; 5.19 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Hi @MMamdouh, I think your code has run out of GPU memory. You can check whether the graph you used and its data are on the GPU and whether they actually fit into GPU memory. You could also provide a minimal code snippet so we can reproduce the error.
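To make the "does it fit" check concrete, here is a back-of-envelope sketch (the graph sizes below are hypothetical, not taken from the question) for estimating the raw size of a node-feature tensor; activations, gradients, and optimizer state typically multiply this several times over:

```python
def feature_mib(num_nodes: int, feat_dim: int, bytes_per_el: int = 4) -> float:
    """Approximate size of a dense feature tensor in MiB.

    bytes_per_el defaults to 4 for float32 features.
    """
    return num_nodes * feat_dim * bytes_per_el / 2**20

# Hypothetical graph: 1M nodes with 256-dim float32 features.
print(feature_mib(1_000_000, 256))  # -> 976.5625 MiB of raw features
```

On a 6 GiB card, features of this rough size plus intermediate activations from message passing can already exhaust memory, which is why moving only the necessary tensors to the GPU (or sampling subgraphs) is usually the first thing to try.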

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.