DGL vs. PyTorch Geometric

Hello,

What are the merits of using dgl over pytorch_geometric and vice versa?
What are some situations in which using one is arguably better than using the other?

I’d appreciate any insight!


In my view, they are quite similar; DGL has a better design, but pytorch_geometric supports more things at this time.

Hi, what kinds of operations do you think DGL should support? We will support graph pooling layers soon: https://github.com/dmlc/dgl/pull/669

I agree that DGL has the better design, but PyTorch Geometric has reimplementations of most of the well-known graph convolution and pooling layers available off the shelf. I think that’s a big plus if I’m just trying to test out a few GNNs on a dataset to see if something works.
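For example, here is a minimal sketch of grabbing one of those off-the-shelf PyG layers (GCNConv) and running it on a toy graph; the graph, feature sizes, and shapes are made up purely for illustration:

```python
# Minimal sketch: using PyG's off-the-shelf GCNConv on a toy graph.
import torch
from torch_geometric.nn import GCNConv

# toy graph: 3 nodes, 4 directed edges in COO format (2 x num_edges)
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]])
x = torch.randn(3, 16)        # 3 nodes, 16 input features each

conv = GCNConv(16, 32)        # in_channels=16, out_channels=32
out = conv(x, edge_index)     # node embeddings, shape (3, 32)
```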

I suggest adding pooling and unpooling layers.

Here is a list of pooling layers we will support soon:

Global Pooling

  • Sum pooling
  • Avg pooling
  • Max pooling
  • Global Attention Pooling
  • Set2Set
  • SortPooling
  • Set Transformer

Sequential Pooling

  • Diffpool

As for unpooling, I currently don’t see it as much different from pooling (just with a larger k).
PyG supports one kind of unpooling layer, knn_interpolate; do you have other suggestions?
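To make the global pooling layers concrete, here is a rough usage sketch, assuming the modules end up exposed under dgl.nn (as SumPooling, SortPooling, etc.) as in the PR above; the toy graphs and feature sizes are made up, and the exact module path may differ by release:

```python
# Rough sketch: graph-level readout with DGL's global pooling modules.
# Assumes a DGL release where these live under dgl.nn; graphs and
# feature dimensions below are arbitrary.
import torch
import dgl
from dgl.nn import SumPooling, SortPooling

g1 = dgl.graph(([0, 1], [1, 2]), num_nodes=3)
g2 = dgl.graph(([0, 1, 2], [1, 2, 3]), num_nodes=4)
bg = dgl.batch([g1, g2])                  # batch of two graphs
feat = torch.randn(bg.num_nodes(), 8)     # 8-dim node features

sum_pool = SumPooling()
sort_pool = SortPooling(k=2)              # keep top-2 nodes per graph

graph_repr = sum_pool(bg, feat)           # shape (2, 8): one vector per graph
sorted_repr = sort_pool(bg, feat)         # shape (2, 2 * 8)
```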

Hey there, welcome to the community. Overall, I think both frameworks have their merits. PyG is very lightweight and has lots of off-the-shelf examples. In DGL, we put a lot of effort into covering a wider range of scenarios. Many of them are not necessarily GNNs but share the principles of structural/relational learning; examples are CapsuleNet, Transformer, and TreeLSTM. We also noticed that graph generative models are important and can be quite flexible (e.g., adding/removing one node or edge at a time), so we spent extra effort on the design for them (examples are DGMG and JTNN).

Real-world graphs can be gigantic, and training on large graphs requires special support. That’s why we developed the fused message passing technique; you can see how important it is in our blog. We have also added support for graph sampling and distributed training, with examples and tutorials ready. Please check them out.
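As a rough illustration of the sampling workflow, the mini-batch setup looks roughly like the sketch below. Note this uses the dgl.dataloading API from a recent DGL release (newer than what was available when this was posted), and the graph size, fan-outs, and feature dimensions are arbitrary:

```python
# Rough sketch: mini-batch training on a large graph via neighbor sampling.
# Uses dgl.dataloading from a recent DGL release; all sizes are illustrative.
import torch
import dgl
from dgl.dataloading import NeighborSampler, DataLoader

g = dgl.rand_graph(10_000, 200_000)                 # toy "large" graph
g.ndata['feat'] = torch.randn(g.num_nodes(), 16)
train_nids = torch.arange(1_000)                    # seed nodes to train on

sampler = NeighborSampler([10, 10])                 # 10 neighbors per hop, 2 hops
loader = DataLoader(g, train_nids, sampler,
                    batch_size=64, shuffle=True)

for input_nodes, output_nodes, blocks in loader:
    x = blocks[0].srcdata['feat']                   # features of the sampled neighborhood
    # a real model would run its GNN layers over `blocks` here and compute a loss
    break
```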

The graph/structural deep learning community is still at a stage of rapid growth. Many new ideas are being developed, and at the same time many new users are right at the doorstep. That’s why we want our curated models/examples to be as complete as possible, ideally with accompanying tutorials and blog posts. Writing them is definitely time-consuming, which leads to a long cycle for adding new ones. We are working hard on this, and any community help is extremely welcome.

As I said, there is plenty of room for DGL to improve. The next release will include many new features: heterogeneous graphs, pooling/unpooling modules, better data I/O, etc. If you have feature requests, please reply in the roadmap issue. We always prioritize community requests, as we have done in the past.


@minjie Is it possible to request a list of pros and cons for both libraries? I’d be curious, and I’m sure it would be very helpful for future users.


Related to my request, I’ve compiled a list of useful links for comparing the two libraries that I’ve found online:

  • Comparison of DGL vs PyG by the original developers
  • Question asking to compare DGL vs PyG
  • 3rd party comparison
  • DDP with DGL


For me, the issue is that DGL does not support torch.nn.DataParallel, which means I need more code modifications just to get simple multi-GPU parallel training on a single node.
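For what it’s worth, the usual workaround is to wrap the model in torch.nn.parallel.DistributedDataParallel with one process per GPU instead of DataParallel. A very rough sketch follows; the model, layer sizes, and setup code are placeholders for illustration, not DGL’s prescribed recipe:

```python
# Rough sketch: single-node multi-GPU training with DistributedDataParallel
# instead of torch.nn.DataParallel. Model and setup below are placeholders.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
import dgl.nn as dglnn

class TwoLayerSAGE(torch.nn.Module):
    def __init__(self, in_feats, hidden, out_feats):
        super().__init__()
        self.conv1 = dglnn.SAGEConv(in_feats, hidden, 'mean')
        self.conv2 = dglnn.SAGEConv(hidden, out_feats, 'mean')

    def forward(self, g, x):
        h = torch.relu(self.conv1(g, x))
        return self.conv2(g, h)

def run(rank, world_size):
    # one process per GPU; launched e.g. via torch.multiprocessing.spawn
    dist.init_process_group('nccl', rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    model = TwoLayerSAGE(16, 32, 2).to(rank)
    model = DDP(model, device_ids=[rank])
    # each process then trains on its own shard of the data as usual
```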

For multi-GPU training, I suggest you try dgl.graphbolt, released in DGL 2.1. It has great performance for multi-GPU parallel training.