I am working on a model using torch and dgl on a GPU, but this error appears after 1 epoch:
I reduced the number of batches to 4 and set os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512", but it does not help.
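For reference, this is roughly how I set the allocator config (a minimal sketch; as far as I understand, the variable must be set before the CUDA allocator is first initialized, or it is ignored):

```python
import os

# Allocator tuning: set this before torch touches CUDA (i.e. before the
# first CUDA allocation), otherwise the setting has no effect.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

# Note: a common cause of OOM only *after* the first epoch is
# accumulating the loss tensor itself, which keeps the whole autograd
# graph alive across iterations, e.g.:
#   epoch_loss += loss        # holds the graph -> memory grows
#   epoch_loss += loss.item() # detached Python float -> safe
```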
RuntimeError: CUDA out of memory. Tried to allocate 352.00 MiB (GPU 0; 6.00 GiB total capacity; 4.78 GiB already allocated; 0 bytes free; 5.19 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF