Graph pooling model support?

I found some pooling layer modules at this link: https://docs.dgl.ai/api/python/nn.pytorch.html#module-dgl.nn.pytorch.glob
The input feature is (N, *)-shaped and the return value of the forward function is (*)-shaped. I'm confused about the return value: what is the significance of this result?

There are also some models already implemented, such as SortPooling and GlobalAttentionPooling. Do you have any plans to add models such as SAGPool and Graph U-Net, or can I implement them myself relatively conveniently with the existing pooling layers?

Thanks!

dgl.nn.*.glob only contains global pooling modules, which means each one returns a fixed-length feature for each graph. The * in (N, *) means any shape; in other words, the node feature need not be a one-dimensional vector but can be a higher-dimensional tensor. If the input feature is (N, 3, 4, 5), the returned value would be (3, 4, 5) for an individual graph and (B, 3, 4, 5) for a batched graph of B graphs.
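
A minimal sketch of this shape behaviour using SumPooling (assuming a pre-0.5 DGL with the PyTorch backend; the other global pooling modules behave the same way shape-wise):

import dgl
import torch
from dgl.nn.pytorch.glob import SumPooling

# Two small graphs batched together: 3 + 5 = 8 nodes in total.
g1 = dgl.DGLGraph()
g1.add_nodes(3)
g2 = dgl.DGLGraph()
g2.add_nodes(5)
bg = dgl.batch([g1, g2])

feat = torch.randn(8, 3, 4, 5)   # (N, *) node features with * = (3, 4, 5)
pool = SumPooling()
out = pool(bg, feat)             # per-graph readout over the node dimension
print(out.shape)                 # torch.Size([2, 3, 4, 5]), i.e. (B, *)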

SAGPool/Graph U-Net are different; they work more like the pooling layers in CNN models. We call such layers “sequential pooling”, and they are on our v0.5 release roadmap.

Currently, we already have a DiffPool example written in PyTorch; it is similar to Graph U-Net and SAGPool, and you can see how it is implemented in DGL.

Hope this helps!

I ran the diffpool example and got the error:

File "train.py", line 205, in collate_fn
graph.ndata[key] = torch.FloatTensor(value)
TypeError: expected Float (got Double)

I changed the type of value with:

value = value.float()
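
(For reference, a minimal sketch of that cast in isolation, assuming value is a float64 tensor as the traceback's "got Double" suggests; the variable name just mirrors the example's:)

import torch

value = torch.randn(4, 8, dtype=torch.float64)  # stand-in float64 ("Double") tensor
value = value.float()                           # cast to float32 ("Float")
print(value.dtype)                              # torch.float32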

Then I got a CUDA out-of-memory error:

RuntimeError: CUDA out of memory. Tried to allocate 650.00 MiB (GPU 0; 15.75 GiB total capacity; 9.83 GiB already allocated; 122.12 MiB free; 4.64 GiB cached)

I want to know whether this is because my modification was wrong, or because the model needs more than 16 GB of GPU memory.

Thanks!

I think your modification is right. Thanks for your feedback! I'll check the implementation and see why there is an OOM error.

Hi, I wonder if this issue has been resolved. Thank you!

Hello, I wonder whether this out-of-memory issue has been resolved. Thank you!