Wondering about GPU performance

Hi, I’m new to DGL and it’s great for geometric deep learning. I have a question about the GCN benchmark posted at https://github.com/dglai/dgl-benchmark/tree/master/gcn

I found that GPU utilization stays at about 20%, even after I increased the number of layers from 1 to 10.
Although it is faster than running on CPU, I wonder whether it can be made faster still.
(For reference, there is a repository named st-gcn, https://github.com/yysijie/st-gcn , which uses ordinary conv2d and conv1d operations to simulate graph convolution and achieves impressive accuracy and training speed.)

Thanks for any suggestions. :slight_smile:

I eventually found that data.cuda() takes a lot of time. Is there any solution to this?

When profiling GPU code with PyTorch, be sure to call torch.cuda.synchronize() before and after each CUDA operation, since PyTorch executes CUDA kernels asynchronously. Otherwise, the time you measure for data.cuda() may include one-time initialization cost, allocation time, etc.
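As a rough sketch of that measurement pattern (the tensor here is a hypothetical stand-in for the `data` mentioned above, not the benchmark's actual input):

```python
import time
import torch

# Hypothetical stand-in for the graph features ("data" in the post above).
data = torch.randn(10000, 1000)

if torch.cuda.is_available():
    # Warm-up: the very first CUDA call pays one-time context-initialization cost,
    # which would otherwise be charged to data.cuda() in the timing below.
    _ = torch.zeros(1, device="cuda")

    torch.cuda.synchronize()          # drain any pending kernels before starting the clock
    start = time.time()
    data_gpu = data.cuda()            # host-to-device copy being measured
    torch.cuda.synchronize()          # make sure the copy has actually finished
    print(f"data.cuda() took {(time.time() - start) * 1000:.2f} ms")
```

Without the two synchronize() calls, time.time() can return before the asynchronous copy completes, making the transfer look either faster or slower than it really is.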

This should not be the case. Could you please tell me which version of DGL you are using?

Sorry for the late reply. I was using DGL version 0.2.