Training a GNN takes too much CPU

I found that while training a GNN on CUDA with mini-batches, CPU usage is very high (~1900%).
How can I fix this?

Usually this is because DGL uses OpenMP in places other than model propagation (neighbor sampling is a common case).
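
If the extra CPU work is unwanted, a minimal sketch of one workaround (assuming the CPU time really does come from OpenMP-backed sampling) is to cap the thread pools before DGL and PyTorch are imported; the cap of 4 below is just an illustrative value:

```python
# Cap the OpenMP pool that DGL's CPU kernels (e.g. neighbor sampling) use.
# This must be set before dgl/torch are imported to take effect.
import os
os.environ["OMP_NUM_THREADS"] = "4"  # hypothetical cap; tune for your machine

import torch
import dgl  # imported after the env var so the cap applies

# PyTorch's own intra-op CPU thread pool can be capped separately.
torch.set_num_threads(4)
```

Lower thread counts trade lower CPU utilization for slower sampling, so this only makes sense if the CPU is contended by other jobs.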

What is your use case? If you are indeed training a node classification model with NodeDataLoader, it supports GPU-based neighbor sampling by placing the graph on the GPU.
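
For reference, here is a minimal sketch of what GPU-based sampling can look like, assuming a DGL version (0.7 or later) whose NodeDataLoader supports it; the toy graph and node IDs are placeholders for your own data:

```python
import torch
import dgl

device = torch.device("cuda")

# Toy graph and training node IDs stand in for the user's dataset.
g = dgl.rand_graph(10_000, 100_000).to(device)
train_nids = torch.arange(1_000, device=device)

sampler = dgl.dataloading.MultiLayerNeighborSampler([15, 10])
dataloader = dgl.dataloading.NodeDataLoader(
    g, train_nids, sampler,
    device=device,      # with the graph on the GPU, sampling also runs on the GPU
    batch_size=1024,
    shuffle=True,
    drop_last=False,
    num_workers=0,      # GPU sampling uses no CPU worker processes
)

for input_nodes, output_nodes, blocks in dataloader:
    pass  # forward/backward pass goes here
```

With sampling on the GPU, the heavy OpenMP CPU usage from CPU-side sampling should largely disappear.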

High utilization is by design. Why would you prefer lower utilization?

Thanks for your reply.
I'm using DGL 0.5.3, where neighbor sampling requires the graph to be on the CPU.

Thanks for your reply.
I thought the high CPU usage was caused by a bug in my code.
