dgl.batch single process

Hi,

When I batch graphs using dgl.batch(), it seems to use all available cores for batching. This also happens when I use the PyTorch DataLoader with num_workers=0. However, when I combine it with the PyTorch DataLoader with num_workers>0, CPU usage is constrained by the specified num_workers. I was wondering if I can also get this CPU-constrained behavior with num_workers=0, so batching doesn't overuse the CPU?

Thanks

Hi, @woodcutter, you can use DGL_PARALLEL_FOR_GRAIN_SIZE to set the grain size, which in turn limits the number of threads used.
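A minimal sketch of how this could look, assuming the environment variable is read when DGL initializes its thread pool (so it must be set before the import). The grain-size value 32768 is just an illustrative choice, and capping OMP_NUM_THREADS is an alternative knob, since DGL's parallel loops run on OpenMP threads:

```python
import os

# A larger grain size means each thread gets a bigger chunk of work,
# so fewer threads are spawned for small batching jobs.
# The value below is illustrative; tune it for your workload.
os.environ["DGL_PARALLEL_FOR_GRAIN_SIZE"] = "32768"

# Alternatively, cap the intra-op thread count directly.
os.environ.setdefault("OMP_NUM_THREADS", "4")

# Import DGL only after the environment variables are set:
# import dgl
```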

Sorry for the late response. Thanks a lot!
