Questions about the size and graph structure of DGLBlock

I have several questions regarding DGLBlock.

  1. How can I query the size of a DGLBlock object?
  • A DGLBlock holds node/edge features and the graph structure in one or more sparse formats (e.g., CSC, CSR, or COO).
  • To measure the size of a DGLBlock, I have been calling torch.cuda.memory_allocated() before and after graph.to('cuda:0'), as sketched after this list.
  • However, I wonder whether there are APIs to query the size of a DGLBlock directly. If not, are there other useful techniques for finding it?
  2. Which graph format (CSR, CSC, or COO) does update_all use?
  • It seems that DGL switches between graph formats at runtime.
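For reference, here is a minimal sketch of the measurement approach from question 1, assuming DGL 0.8+ where dgl.dataloading.DataLoader is available. The random graph and the one-layer full-neighbor sampler are only illustrative; any DGLBlock works the same way, and the number you get depends on what else is already resident on the GPU.

import dgl
import torch

# Illustrative setup: build a DGLBlock from a random graph with a
# one-layer full-neighbor sampler.
g = dgl.rand_graph(10000, 100000)
sampler = dgl.dataloading.MultiLayerFullNeighborSampler(1)
loader = dgl.dataloading.DataLoader(g, torch.arange(1000), sampler, batch_size=1000)
input_nodes, output_nodes, blocks = next(iter(loader))
block = blocks[0]

# Measure GPU memory before and after moving the block to the device.
torch.cuda.synchronize()
before = torch.cuda.memory_allocated('cuda:0')
block = block.to('cuda:0')
torch.cuda.synchronize()
after = torch.cuda.memory_allocated('cuda:0')
print('Approximate block size: %.2f MiB' % ((after - before) / 2**20))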

Unfortunately we don’t have an easy way to query the size (specifically, the GPU memory consumption) of a graph, although there is a more involved way to do so. First, you can check which formats your graph object uses by

g.formats()

Then for each created format, you can get the underlying sparse tensors via

g.adj_sparse('coo')   # coo for instance

This directly surfaces the underlying COO/CSR/CSC tensors and does not incur any additional memory usage.
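Putting the two calls together, here is a rough sketch that sums the bytes of the index tensors backing each created format. estimate_graph_bytes is my own helper name, not a DGL API, and it counts only the graph structure; node and edge features would have to be added separately.

def estimate_graph_bytes(g):
    # Sum the bytes of the index tensors backing every sparse format
    # that has already been materialized for this graph.
    total = 0
    for fmt in g.formats()['created']:
        for tensor in g.adj_sparse(fmt):
            total += tensor.numel() * tensor.element_size()
    return total

print('Structure size: %.2f MiB' % (estimate_graph_bytes(g) / 2**20))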

For your second question, update_all by default uses both CSC and CSR: one for the forward pass and the other for the backward pass. That said, you can limit the formats DGL is allowed to use with g.formats(), as shown below.
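For example (a minimal sketch; g.formats(['csr']) returns a new graph restricted to the given formats rather than modifying g in place, and the exact dictionary it reports may vary slightly between DGL versions):

g_csr = g.formats(['csr'])   # clone of g that may only use CSR
print(g_csr.formats())       # e.g. {'created': ['csr'], 'not created': []}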


Thank you for your answers.

To better understand what you mentioned, I have a few follow-up questions:

  1. Does DGL automatically decide which graph structure (CSR or CSC) to use through its own analysis at runtime?

  2. If I limit the graph to a single format (e.g., CSR), does DGL generate the other formats (e.g., CSC and COO) when they are needed during training (assuming sampling and subgraph construction are not performed on the GPU)?

  1. Yes.
  2. No. However, some operations may fail if they cannot find a suitable format. For instance, update_all requires either COO or (CSR and CSC) to work; see the sketch below.
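Here is a small sketch of point 2, assuming a recent DGL version; the exact error (or lack of one) may differ between releases:

import dgl
import dgl.function as fn
import torch

g = dgl.rand_graph(100, 500)
g.ndata['h'] = torch.randn(100, 8)

# With all formats allowed, update_all creates whatever format it needs.
g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'h_sum'))

# COO alone is a suitable format for update_all, so this still works.
g_coo = g.formats(['coo'])
g_coo.ndata['h'] = torch.randn(100, 8)   # attach the feature to the clone
g_coo.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'h_sum'))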

Thank you! That helps a lot.
