Does 0.5.2 support deploying the server and trainer separately on different machines?

In the newest distributed training setup, I see that the graph server and the PyTorch distributed trainer are deployed on the same machine.

Is deploying them separately supported or not?

For now we cannot do that, but we may add it in the future. The reason we put the server and trainer together is that we can use local shared memory to speed up data communication between them.
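
This is not the library's actual implementation, just a minimal sketch of why co-location helps: when the server and trainer run on the same host, data can be handed over through a shared-memory buffer with no serialization or network hop. The process names, array shape, and random "features" below are all hypothetical.

```python
import numpy as np
from multiprocessing import Process, shared_memory


def server(shm_name, shape, dtype):
    # Hypothetical "graph server": writes sampled node features
    # directly into the shared-memory buffer (zero network copy).
    shm = shared_memory.SharedMemory(name=shm_name)
    buf = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    buf[:] = np.random.rand(*shape).astype(dtype)
    shm.close()


def main():
    shape, dtype = (1024, 128), np.float32
    nbytes = int(np.prod(shape)) * np.dtype(dtype).itemsize
    shm = shared_memory.SharedMemory(create=True, size=nbytes)

    # Server process fills the buffer in place.
    p = Process(target=server, args=(shm.name, shape, dtype))
    p.start()
    p.join()

    # "Trainer" reads the same memory without any transfer step;
    # with separate machines this hand-off would need RPC/serialization.
    features = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    print(features.mean())

    shm.close()
    shm.unlink()


if __name__ == "__main__":
    main()
```

Once the server moves to a different machine, this fast path disappears and every batch has to cross the network, which is presumably why separate deployment needs extra work.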

Thanks for the reply; looking forward to the next version.