Memory consumption by GraphSage Model

I am using torch.cuda.memory_allocated() to measure the memory allocated to the model on the GPU, by calling it immediately before and after model creation. The difference I get is very small. Is this because the model's memory footprint depends only on the number of layers and hidden units, and not on the input graph? Or does it depend on the input dataset as well?
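A back-of-envelope parameter count shows why the delta is small. This sketch assumes a mean-aggregator GraphSAGE where each layer holds separate self and neighbor weight matrices plus a bias (as in DGL's SAGEConv); the exact constants may differ in your model, but note that no term involves the number of nodes or edges:

```python
def sage_param_bytes(in_feats, hidden_feats, out_feats, num_layers,
                     bytes_per_param=4):
    """Rough parameter-memory estimate for a GraphSAGE model.

    Assumes each mean-aggregator layer has a self weight matrix, a
    neighbor weight matrix, and a bias, all float32. The estimate
    depends only on layer widths and depth, not on the input graph.
    """
    dims = [in_feats] + [hidden_feats] * (num_layers - 1) + [out_feats]
    n_params = 0
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        n_params += 2 * d_in * d_out + d_out  # self + neighbor weights, bias
    return n_params * bytes_per_param

# Hypothetical sizes: 3-layer model, 602-dim input, 256 hidden, 47 classes
print(sage_param_bytes(602, 256, 47, 3))
```

Even for a large hidden size, this comes out to a few megabytes, which matches the small difference you observed.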

In DNN training, the model itself consumes only a small part of the GPU memory used during training; most of it goes to storing activations (for backpropagation). For GNN training, we also store graphs on the GPU, which is not reflected in torch.cuda.memory_allocated().
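A minimal sketch of the measurement being described, guarded so it degrades gracefully on machines without a GPU. A plain MLP stands in for the GraphSAGE model here; the delta captures only parameter (and buffer) storage, which is why it is small and graph-independent:

```python
import torch
import torch.nn as nn

def measure_alloc(make_model):
    """Report the change in torch.cuda.memory_allocated() across building
    a model on the GPU. Returns the delta in bytes, or None without CUDA.
    """
    if not torch.cuda.is_available():
        return None
    torch.cuda.empty_cache()
    before = torch.cuda.memory_allocated()
    model = make_model().cuda()
    after = torch.cuda.memory_allocated()
    return after - before

# Stand-in model: the delta is just the weights, a few MB at most
delta = measure_alloc(lambda: nn.Sequential(nn.Linear(602, 256),
                                            nn.Linear(256, 47)))
print(delta)
```

To see activation memory as well, you would call torch.cuda.memory_allocated() again after a forward pass (before backward), when the saved activations are still alive.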


Thanks for the answer. What is the way to measure the memory consumed by storing the graph? And if you could also help me calculate the memory used for storing activations, that would be great.
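Until such an API exists, you can estimate the graph's footprint by hand. This sketch assumes the graph structure is stored in COO form as two int64 endpoint arrays plus float32 node features; the exact layout in your framework may differ (e.g. DGL may additionally keep a CSR/CSC copy, roughly doubling the structure term):

```python
def graph_bytes(num_nodes, num_edges, feat_dim,
                idx_bytes=8, feat_bytes=4):
    """Rough GPU-memory estimate for a graph stored in COO form:
    two int64 endpoint arrays (src, dst) plus float32 node features.
    A cached CSR/CSC copy would roughly double the structure term.
    """
    structure = 2 * num_edges * idx_bytes
    features = num_nodes * feat_dim * feat_bytes
    return structure + features

# Hypothetical sizes: 2.4M nodes, 123M edges, 100-dim features
print(graph_bytes(2_400_000, 123_000_000, 100))
```

For large graphs this dwarfs the model's parameter storage, which is consistent with the measurement discrepancy above.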

Currently we don’t have such functionality.
I’m working on an API for this; here is the feature request:


As for the memory consumed by storing activations, it is linear in the number of layers.
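That linearity can be sketched with a simple estimate. Assuming full-graph training where each layer saves a (num_nodes × hidden_feats) float32 output for backpropagation (real usage is higher: aggregator intermediates, gradients, and so on):

```python
def activation_bytes(num_nodes, hidden_feats, num_layers, bytes_per=4):
    """Rough activation-memory estimate for full-graph GNN training:
    each layer keeps a (num_nodes x hidden_feats) float32 output for
    backprop, so the total grows linearly with the number of layers.
    """
    return num_layers * num_nodes * hidden_feats * bytes_per

# Hypothetical sizes: 2.4M nodes, 256 hidden units
print(activation_bytes(2_400_000, 256, 2))
print(activation_bytes(2_400_000, 256, 4))  # doubling the depth doubles the estimate
```

With neighbor sampling, num_nodes is replaced by the number of nodes in each minibatch's sampled blocks, but the linear dependence on depth remains.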
This project might help:
