Hi,
When I batch graphs using dgl.batch(), it seems to use all available cores. This also happens when I use a PyTorch DataLoader with num_workers=0. However, when I use a PyTorch DataLoader with num_workers>0, the CPU usage is constrained by the num_workers value I specify. Is there a way to get the same CPU-constrained behavior with num_workers=0, so batching doesn't overuse the CPU? A minimal sketch of my setup is below.
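For reference, here is a rough sketch of the kind of pipeline I mean (the random graphs and the collate function are just illustrative, not my actual code):

```python
import dgl
import torch
from torch.utils.data import DataLoader

# Illustrative dataset: a list of DGLGraphs works as a map-style dataset.
graphs = [dgl.rand_graph(100, 500) for _ in range(1000)]

def collate(samples):
    # dgl.batch merges a list of graphs into one batched graph;
    # this is the call that appears to use all available cores.
    return dgl.batch(samples)

# With num_workers=0, collation runs in the main process and CPU usage
# is not constrained; with num_workers>0 it stays bounded by the workers.
loader = DataLoader(graphs, batch_size=32, collate_fn=collate, num_workers=0)

for batched_graph in loader:
    pass  # training step would go here
```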
Thanks