DGL Serving Best Practices

Hi team, I'd like to know your best practices for DGL model deployment & serving.

I only know the standard way to save a model, e.g. `th.save(model.state_dict(), "**.pt")`.
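For reference, the standard PyTorch save/load pattern applies to DGL models since they are ordinary `torch.nn.Module`s. A minimal sketch (the `TinyModel` class here is an illustrative stand-in, not a real DGL model):

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):  # hypothetical stand-in for a DGL model
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(4, 2)

    def forward(self, x):
        return self.lin(x)

model = TinyModel()
# Save only the parameters (state_dict), not the whole pickled object.
torch.save(model.state_dict(), "model.pt")

# At serving time, rebuild the architecture in code and load the weights.
restored = TinyModel()
restored.load_state_dict(torch.load("model.pt"))
restored.eval()  # switch to inference mode
```

Saving the `state_dict` rather than the whole module keeps the checkpoint decoupled from the training code's file layout, which matters once the model is shipped to a serving environment.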

I found a previous discussion mentioning that DGL models do not support export to the ONNX format:
Exporting GraphSAGE model into onnx format : train_sampling.py.

I would like to know your best practices for deploying DGL in industry.

Hi,

Actually, this depends on your whole pipeline. We don't have dedicated serving support yet, so Flask + Python could be one choice.
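A minimal sketch of the Flask + Python option, wrapping a loaded model behind an HTTP endpoint (the `TinyModel` class, the `/predict` route, and the JSON payload shape are all illustrative assumptions, not a DGL API):

```python
import torch
import torch.nn as nn
from flask import Flask, jsonify, request

class TinyModel(nn.Module):  # hypothetical stand-in for a trained DGL model
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(4, 2)

    def forward(self, x):
        return self.lin(x)

app = Flask(__name__)
model = TinyModel()  # in practice: load trained weights here
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON payload like {"features": [[...], [...], ...]}
    feats = torch.tensor(request.get_json()["features"], dtype=torch.float32)
    with torch.no_grad():
        out = model(feats)
    return jsonify({"predictions": out.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

For production you would typically put this behind a WSGI server such as gunicorn rather than the Flask development server.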

Another option is the PinSAGE approach, which pretrains the embeddings and uses them for downstream tasks. This means the graph needs to be fixed for inference.
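The PinSAGE-style option can be sketched as follows: embeddings are computed offline on the fixed graph, and serving reduces to a table lookup plus a similarity score, with no GNN or graph needed online (the random embeddings and `top_k_similar` helper here are illustrative assumptions):

```python
import torch

num_nodes, dim = 100, 16
# Offline step: pretend these embeddings came from a trained GNN
# run over the fixed graph (random here for illustration).
embeddings = torch.randn(num_nodes, dim)
torch.save(embeddings, "embeddings.pt")

# Online step: no GNN, no graph -- just load the table and look up.
table = torch.load("embeddings.pt")

def top_k_similar(node_id: int, k: int = 5):
    """Return the k nodes most similar to node_id by cosine similarity."""
    query = table[node_id]
    sims = torch.nn.functional.cosine_similarity(query.unsqueeze(0), table)
    sims[node_id] = -1.0  # exclude the query node itself
    return sims.topk(k).indices.tolist()
```

The trade-off is exactly the one noted above: if nodes are added or edges change, the offline embedding job has to be rerun before the new structure is reflected at serving time.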

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.