I started using DGL for NLP research, but I ran into a problem.
When I train my model on my PC, loss.backward() takes far too long. The same code runs fine on Google Colab, so I suspect that GPU/CUDA operations in DGL are not working properly on my PC. Other models that don't use a DGLGraph train normally on this machine.
My PC setup (where it is not working):
GPU: RTX 3090
CUDA version: 11.1
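In case it helps, here is a minimal sketch of how I move everything to the GPU and time the backward pass. The graph construction, feature sizes, and GraphConv layer below are just placeholders, not my actual model or data:

```python
import time

import torch
import torch.nn as nn
import dgl
from dgl.nn import GraphConv

# Confirm PyTorch actually sees the RTX 3090 and the CUDA build it was compiled with
print(torch.cuda.is_available(), torch.version.cuda, torch.cuda.get_device_name(0))

device = torch.device("cuda")

# Placeholder random graph and features (my real graph comes from NLP data)
src = torch.randint(0, 1000, (5000,))
dst = torch.randint(0, 1000, (5000,))
g = dgl.add_self_loop(dgl.graph((src, dst), num_nodes=1000)).to(device)
feat = torch.randn(1000, 128, device=device)
labels = torch.randint(0, 10, (1000,), device=device)

model = GraphConv(128, 10).to(device)
loss_fn = nn.CrossEntropyLoss()

print(g.device, feat.device)  # both should report cuda:0

# Time one forward + backward pass with explicit synchronization
torch.cuda.synchronize()
start = time.time()
logits = model(g, feat)
loss = loss_fn(logits, labels)
loss.backward()
torch.cuda.synchronize()
print("forward + backward:", time.time() - start, "seconds")
```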
Could you help me figure out where the problem is occurring?