Does GPU help with IO bottleneck

I read that DGL leverages the GPU to accelerate computation, but it looks like data sampling happens on the CPU and does not use the GPU. Does that mean we will not see any speedup in data loading when applying a GPU? We are currently experiencing an IO bottleneck when loading data in.


You might want to try GPU-based neighbor sampling, which happens when you put the graph structure on the GPU and set the output device of NodeDataLoader to the GPU as well. If your graph structure cannot fit on the GPU, then your options are limited. Using unified memory could help, but I guess that would require changing a lot of the underlying code.

Thanks for the suggestion! Will experiment more with the data.