Deploy a DGL model in C++ for inference

Hi DGL developers,

I’m working on a customized model using DGL with the PyTorch backend. For our research, I’m very interested in loading the pre-trained model into C++ for inference and obtaining gradients, as one can do for native PyTorch, to speed up our data processing pipeline.

I noticed that there is currently no official DGL C++ frontend. Is it possible to deploy a pretrained model in C++ without extensive interfacing work? Thanks a lot in advance!

Thanks for your interest, but we don’t have plans for a C++ frontend. However, we are actively investigating integration with TorchScript. If we could integrate with TorchScript, would that solve your problem? Currently we use a custom Python autograd function. If you find any other solution, we would be happy to look into it!

Integrating it with TorchScript would be very helpful! If you are already working on such a plan, do you have an expected timeline for implementing it? I’d also be happy to use and contribute to ‘beta’ versions of this feature.

Hello! Is there a way to deploy a DGL model in C++ for inference now? It seems that there is still no TorchScript support. Is there any other way to achieve this?

Bumping this ^, I am also very interested.

The new sparse matrix API is compatible with TorchScript, so any GNN implemented with dgl.sparse should be suitable for C++ inference.
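
For anyone landing here later, below is a minimal sketch of the C++ side, assuming the dgl.sparse model has already been scripted and saved from Python (e.g. `torch.jit.script(model).save("model.pt")`), and assuming for simplicity that the scripted `forward` takes a single dense feature tensor (a real model may also take the graph/adjacency as input). The file name, tensor shapes, and forward signature here are placeholders; only the standard LibTorch API (`torch::jit::load`) is used, no DGL-specific C++ API.

```cpp
#include <torch/script.h>
#include <torch/torch.h>

#include <iostream>
#include <vector>

int main() {
  // Load the TorchScript module exported from Python.
  // Assumes: torch.jit.script(model).save("model.pt") was run beforehand.
  torch::jit::script::Module module;
  try {
    module = torch::jit::load("model.pt");
  } catch (const c10::Error& e) {
    std::cerr << "Failed to load model.pt: " << e.what() << std::endl;
    return 1;
  }
  module.eval();

  // Hypothetical node-feature tensor; shapes depend on your model.
  // requires_grad lets us obtain gradients w.r.t. the inputs, as asked above.
  torch::Tensor features =
      torch::randn({2708, 1433}, torch::TensorOptions().requires_grad(true));

  std::vector<torch::jit::IValue> inputs;
  inputs.emplace_back(features);

  // Forward pass (inference).
  torch::Tensor output = module.forward(inputs).toTensor();

  // Backward pass: gradients of a scalar loss w.r.t. the input features.
  output.sum().backward();
  std::cout << "grad shape: " << features.grad().sizes() << std::endl;

  return 0;
}
```

Build this against LibTorch (e.g. `find_package(Torch REQUIRED)` in CMake); under these assumptions no DGL headers are needed on the C++ side, since the DGL-specific part is the Python-side export.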