GAT error: There are 0-in-degree nodes in the graph

I have used GATConv layers for link prediction, and my code worked properly until a few hours ago, but now I face this warning and error.

My graph doesn't have self-loops itself, so I added self-loops to it (but not using dgl.add_self_loop(g)), yet the error persists. Also, the error suggests "Setting allow_zero_in_degree to be True", but I don't use DotGatConv, which takes this parameter. What should I do now?

How did you add self-loops to your graph? If you have added self-loops, you should no longer receive this warning.

First I used `g.add_edges` to add the self-loops, but I don't think the way they are added matters, because I tried `dgl.add_self_loop(g)` too.
I should explain that I need to use subgraphs of g for the train, validation, and test graphs (e.g. train_g = g.edge_subgraph(train_edge_ids, preserve_nodes=True)). The problem is that when I add self-loops to all three subgraphs (although that's wrong, because one edge can't be in the train, validation, and test data at once), I face this error:

and when I add self-loops only to the train graph, I face this error:

You need to do g = dgl.add_self_loop(g)

I did that, but unfortunately the result didn't change.

the problem is that when I add self-loops to all three subgraphs (although that's wrong, because one edge can't be in the train, validation, and test data at once)

GATs require self-loops to work. The self-loops themselves are not part of the training/validation/test set. My suggestion would be to add self-loops to the training/validation/test graphs first, then compute scores only on the edges that are not the added self-loops.

I faced this error:

This is strange. How was your graph created and how did you add self-loops? Does the following code work for you?

g = dgl.graph(([0, 1, 2], [1, 2, 3]))
g = dgl.add_self_loop(g)

I create my graph this way:

Also, I tried to convert my graph to a heterograph with `dgl.as_heterograph(g)`, and I get this warning:

If DGLGraph and DGLHeteroGraph have been merged in the new version, why do I get this error?

The code above worked for me with DGL 0.5.

What is your DGL version? If it’s earlier than 0.5, could you update it?


My DGL version is 0.5.0, and I didn't have any problem before the new version was released.
I use this link prediction model with GATConv layers, and the error occurs when I train the model, not when I create the graph.

Could you tell us the following:

  • The value of DGL_LIBRARY_PATH environment variable, if exists.
  • The value of PYTHONPATH environment variable, if exists.
  • The Linux distribution and version.
  • The output of executing python -c 'import dgl; print(dgl.__path__)' from shell.
  • The output of executing openssl md5 `python -c 'import dgl; print(dgl.__path__[0])'`/` from shell.
  • Whether the following code (based on your example above) throws an error:
    import dgl
    import torch
    g = dgl.DGLGraph()
    g.add_nodes(5, {'x': torch.randn(5, 4)})
    g.add_edges([0, 1], [1, 2])
    g.add_nodes(7, {'x': torch.randn(7, 4)})
    g.add_edges([1, 3], [5, 7])
    train_g = g.edge_subgraph([0, 1], preserve_nodes=True)
    train_g = dgl.add_self_loop(train_g)
    val_g = g.edge_subgraph([2], preserve_nodes=True)
    val_g = dgl.add_self_loop(val_g)
    test_g = g.edge_subgraph([3], preserve_nodes=True)
    test_g = dgl.add_self_loop(test_g)

(Of course, please do remove all sensitive information from the outputs)

I run my code on Google Colab and do `!pip install dgl` to install the DGL library. Here is what you asked for:
Creating the graph doesn't throw an error; I only get that error when I train the model:


So add_self_loop no longer throws the "Expected type graph.Graph" error, but the training error you see is "There are 0-in-degree nodes"?

I also met this problem when running this tutorial.

I found the explanation on the official DGL website: set allow_zero_in_degree
to True for those cases to unblock the code and handle zero-in-degree nodes manually.

Then I added allow_zero_in_degree=True in conv.GraphConv(). Works for me.


To make it a bit clearer where to put allow_zero_in_degree, here is my model. However, I am not sure whether setting it this way might cause any problem in training or affect the results:

import torch.nn as nn
from dgl.nn.pytorch import GraphConv, AvgPooling

class Classifier(nn.Module):
    def __init__(self, in_dim, hidden_dim, n_classes):
        super(Classifier, self).__init__()
        # allow_zero_in_degree=True is what silences the 0-in-degree error
        self.conv1 = GraphConv(in_dim, hidden_dim, allow_zero_in_degree=True)
        self.conv2 = GraphConv(hidden_dim, hidden_dim, allow_zero_in_degree=True)
        self.avgpooling = AvgPooling()
        self.classify = nn.Linear(hidden_dim, n_classes)