How can I estimate required hardware in order to use DGL?

If we want to apply node prediction to a graph with 100,000 nodes (50 attributes each) and 3 million edges (2 attributes each), how can we estimate how many GPUs are required to get the node prediction results within a day?

Are there any hardware estimation guidelines or benchmarks available?

It’s hard to say, because you can use either full-graph training or mini-batch training, and their hardware requirements are quite different.
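To make the distinction concrete, here is a minimal sketch of the two modes in DGL with PyTorch. It is illustrative only: exact class names vary across DGL releases (e.g. `NodeDataLoader` in 0.7.x was later replaced by `dgl.dataloading.DataLoader`), and the fan-outs, batch size, and feature name `"feat"` are assumptions.

```python
# Sketch: full-graph vs. mini-batch training in DGL (assumes DGL >= 0.7, PyTorch).
import torch
import torch.nn.functional as F
import dgl
import dgl.nn as dglnn

class SAGE(torch.nn.Module):
    def __init__(self, in_feats, hidden, n_classes):
        super().__init__()
        self.layers = torch.nn.ModuleList([
            dglnn.SAGEConv(in_feats, hidden, "mean"),
            dglnn.SAGEConv(hidden, n_classes, "mean"),
        ])

    def forward(self, graph_or_blocks, x):
        for i, layer in enumerate(self.layers):
            # Full-graph: the same graph at every layer.
            # Mini-batch: one sampled message-flow graph ("block") per layer.
            g_i = graph_or_blocks[i] if isinstance(graph_or_blocks, list) else graph_or_blocks
            x = layer(g_i, x)
            if i != len(self.layers) - 1:
                x = F.relu(x)
        return x

# Full-graph training step: the entire graph and feature matrix in one pass,
# so the whole thing must fit in (CPU or GPU) memory at once.
#   logits = model(g, g.ndata["feat"])

# Mini-batch training: sample a fixed neighbour fan-out per layer so each step
# only touches a small subgraph; this is what lets very large graphs train on
# a single GPU or on CPU.
#   sampler = dgl.dataloading.MultiLayerNeighborSampler([10, 10])
#   loader = dgl.dataloading.NodeDataLoader(
#       g, train_nids, sampler, batch_size=1024, shuffle=True)
#   for input_nodes, output_nodes, blocks in loader:
#       logits = model(blocks, blocks[0].srcdata["feat"])
```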

I would suggest starting with an instance that has relatively large CPU memory. You may not even need a GPU at first: as long as CPU memory can hold the full graph and carry out the computation, that should be fine.
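For the graph described in the question, a rough back-of-envelope check suggests the raw data is small. The sketch below assumes float32 features and int64 edge indices; optimizer state, activations, and any framework overhead come on top of this.

```python
# Rough memory estimate for 100k nodes x 50 attrs, 3M edges x 2 attrs.
num_nodes, node_feat_dim = 100_000, 50
num_edges, edge_feat_dim = 3_000_000, 2

bytes_node_feats = num_nodes * node_feat_dim * 4   # float32 -> ~20 MB
bytes_edge_feats = num_edges * edge_feat_dim * 4   # float32 -> ~24 MB
bytes_edge_index = num_edges * 2 * 8               # src/dst as int64 -> ~48 MB

total_gb = (bytes_node_feats + bytes_edge_feats + bytes_edge_index) / 1e9
print(f"~{total_gb:.2f} GB for raw graph + features")  # well under 0.1 GB
```

So the full graph fits comfortably in ordinary CPU memory, which is why a GPU may not be necessary at the starting point.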

Thank you!
I found this page; this is what I was looking for:
Performance Benchmarks — DGL 0.7.2 documentation
