Scalability of PyTorch version

Hello! Congratulations on the impressive library. I notice that the main example you give of training on a very large graph is an MXNet implementation of stochastic steady-state embedding, but this example is not implemented in PyTorch. Is this because the PyTorch version is not as scalable as the MXNet version? Or would it be possible to run a PyTorch version of stochastic steady-state embedding on a similarly large graph and get performance of the same order of magnitude?

That’s a good question. We are working on an example of large-graph training using a graph sampling method, which we think is more useful for real-world scenarios. For the upcoming example, we will release both PyTorch and MXNet versions.
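To give a rough idea of what sampling-based minibatch training looks like in PyTorch, here is a minimal sketch. It assumes the `dgl.dataloading` neighbor-sampling API (available in recent DGL releases, 0.8+); the toy graph, the `feat`/`label` field names, and the two-layer GraphSAGE model are placeholders for illustration, not the forthcoming example itself:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import dgl
import dgl.nn as dglnn

# Toy random graph standing in for a giant one (assumption: node
# features stored under 'feat' and labels under 'label').
g = dgl.rand_graph(1000, 5000)
g.ndata['feat'] = torch.randn(1000, 16)
g.ndata['label'] = torch.randint(0, 3, (1000,))
train_nids = torch.arange(1000)

class SAGE(nn.Module):
    def __init__(self, in_feats, hidden_feats, num_classes):
        super().__init__()
        self.conv1 = dglnn.SAGEConv(in_feats, hidden_feats, 'mean')
        self.conv2 = dglnn.SAGEConv(hidden_feats, num_classes, 'mean')

    def forward(self, blocks, x):
        # Each block is a bipartite subgraph produced by the sampler,
        # one per GNN layer.
        h = F.relu(self.conv1(blocks[0], x))
        return self.conv2(blocks[1], h)

# Sample 10 neighbors per node at each of the two layers.
sampler = dgl.dataloading.NeighborSampler([10, 10])
dataloader = dgl.dataloading.DataLoader(
    g, train_nids, sampler,
    batch_size=1024, shuffle=True, drop_last=False)

model = SAGE(16, 128, 3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for input_nodes, output_nodes, blocks in dataloader:
    x = blocks[0].srcdata['feat']    # features of sampled input nodes
    y = blocks[-1].dstdata['label']  # labels of the minibatch seeds
    loss = F.cross_entropy(model(blocks, x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the sampling approach is that each minibatch only materializes the sampled blocks, so memory usage is bounded by the batch size and fanout rather than by the full graph.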

Thanks. Looking forward to seeing the new example.

Any updates on this? If you need help creating a tutorial for giant graphs using PyTorch, what would be a good place to start?

cc @zhengda1936 for visibility.