I would like to know if it’s possible to save the model I train using DGL as a TensorFlow model, in a way that makes it possible to use the TensorFlow Serving APIs to deploy an online predictor.
Sorry, but you cannot use TensorFlow Serving to serve DGL models, because some sparse kernels implemented in DGL have not been incorporated into TF Serving.
@zihao sorry for bringing up this old topic.
Your answer to this question dates back to April 20th, when DGL was still at v0.4.1. Since then, DGL has been updated to v0.5.0 and v0.6.0. With the current state of the library, is it now possible to use TensorFlow Serving to serve DGL models?
Hi @zinzinhust96 , we do not have enough bandwidth to support TF-based serving. We are currently focusing more on distributed training and inference based on PyTorch.
You are welcome to contribute to this part if you are interested.
@zihao thanks for the update
I would also like to know about the possibility of using TorchServe with DGL. Can TorchServe be used to serve DGL models?
@zinzinhust96 If my understanding is correct, TorchServe supports customizing a server-side handler to serve a model, so technically you could write some server-side logic that runs inference with DGL.
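To make the custom-handler idea concrete, here is a minimal sketch of the handler structure TorchServe expects (`initialize` / `preprocess` / `inference` / `postprocess` / `handle`). This is not a definitive implementation: in a real deployment you would subclass `ts.torch_handler.base_handler.BaseHandler`, load a trained DGL/PyTorch model in `initialize()`, and build a `dgl.graph` in `preprocess()`; here the model is stubbed with a plain function (an assumption for illustration) so the control flow runs without TorchServe or DGL installed.

```python
import json


class DGLGraphHandler:
    """Sketch of a TorchServe-style custom handler for a DGL model.

    The class name, request format, and the stub model are hypothetical;
    only the initialize/preprocess/inference/postprocess/handle shape
    mirrors the TorchServe custom-handler convention.
    """

    def initialize(self, context=None):
        # Real handler: load model weights here, e.g.
        #   self.model = MyGNN()
        #   self.model.load_state_dict(torch.load(model_path))
        #   self.model.eval()
        self.model = lambda edges: len(edges)  # stub "model": counts edges
        self.initialized = True

    def preprocess(self, data):
        # TorchServe passes a list of requests; assume each body carries
        # an edge list. Real handler: build a graph, e.g.
        #   g = dgl.graph((src_tensor, dst_tensor))
        body = data[0].get("body")
        if isinstance(body, (bytes, bytearray)):
            body = json.loads(body)
        return body["edges"]

    def inference(self, edges):
        # Real handler: forward pass, e.g.
        #   with torch.no_grad():
        #       return self.model(g, g.ndata["feat"])
        return self.model(edges)

    def postprocess(self, output):
        # TorchServe expects a list with one entry per request.
        return [{"prediction": output}]

    def handle(self, data, context=None):
        if not getattr(self, "initialized", False):
            self.initialize(context)
        return self.postprocess(self.inference(self.preprocess(data)))


handler = DGLGraphHandler()
result = handler.handle([{"body": {"edges": [[0, 1], [1, 2], [2, 0]]}}])
print(result)  # [{'prediction': 3}]
```

The key point is that all DGL-specific graph construction and message passing stays inside the handler's own logic, so TorchServe itself never needs to know about DGL's sparse kernels.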