Using a model trained with a previous version of DGL

Hi! I have a DGL model that I trained with version 0.4.3, and I saved the model.pt file.

Now I am trying to use it for validation and to check the predictions with the current version, and I get this error:

```
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/content/allmodels.py in forward(self, g, num_layers, pooling)
     37         h = g.ndata['h_n'].float()
     38         # Perform graph convolution and activation function.
---> 39         h = F.relu(self.conv1(g, h))
     40         if num_layers=='2':
     41             h = F.relu(self.conv2(g, h))

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/dgl/nn/pytorch/conv/graphconv.py in forward(self, graph, feat, weight, edge_weight)
    377         """
    378         with graph.local_scope():
--> 379             if not self._allow_zero_in_degree:
    380                 if (graph.in_degrees() == 0).any():
    381                     raise DGLError('There are 0-in-degree nodes in the graph, '

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
   1129             return modules[name]
   1130         raise AttributeError("'{}' object has no attribute '{}'".format(
-> 1131             type(self).__name__, name))
   1132
   1133     def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:

AttributeError: 'GraphConv' object has no attribute '_allow_zero_in_degree'
```

Is there a way to make this work? Since it was computationally expensive to train, I would like to avoid retraining it if possible :blush: Thanks in advance!


If it helps, this is the definition of the model used in the old version:


```python
class Classifier_gen(nn.Module):
    def __init__(self, in_dim, hidden_dim_graph, hidden_dim1, n_classes, dropout, num_layers, pooling):
        super(Classifier_gen, self).__init__()
        if num_layers == 1:
            self.conv1 = GraphConv(in_dim, hidden_dim1)
        if num_layers == 2:
            self.conv1 = GraphConv(in_dim, hidden_dim_graph)
            self.conv2 = GraphConv(hidden_dim_graph, hidden_dim1)
        if pooling == 'att':
            pooling_gate_nn = nn.Linear(hidden_dim1, 1)
            self.pooling = GlobalAttentionPoolingPMG(pooling_gate_nn)
        self.classify = nn.Sequential(nn.Linear(hidden_dim1, hidden_dim1), nn.Dropout(dropout))
        self.classify2 = nn.Sequential(nn.Linear(hidden_dim1, n_classes), nn.Dropout(dropout))
        self.out_act = nn.Sigmoid()

    def forward(self, g, num_layers, pooling):
        # Use the stored node features as the initial input.
        h = g.ndata['h_n'].float()
        # Perform graph convolution and activation function.
        h = F.relu(self.conv1(g, h))
        if num_layers == '2':
            h = F.relu(self.conv2(g, h))

        g.ndata['h'] = h
        # Calculate the graph representation by pooling the node representations.
        if pooling == 'max':
            hg = dgl.max_nodes(g, 'h')
        elif pooling == 'mean':
            hg = dgl.mean_nodes(g, 'h')
        elif pooling == 'sum':
            hg = dgl.sum_nodes(g, 'h')
        elif pooling == 'att':
            [hg, g2] = self.pooling(g, h)

        g2 = hg
        a2 = self.classify(hg)
        a3 = self.classify2(a2)
        return self.out_act(a3), g2, hg
```

How did you save and load the model?

Usually what I do is save the model with

```python
torch.save(model.state_dict(), 'model.pt')
```

and load it with

```python
model.load_state_dict(torch.load('model.pt'))
```

Because this approach only saves the parameter dictionary, it should work across versions with different implementations, unless the names of the parameters change.
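As a quick sanity check of this pattern, here is a minimal sketch with a stand-in model (a plain `nn.Sequential`; the same round trip applies to `Classifier_gen`), showing that loading the saved `state_dict` into a freshly constructed model reproduces the original outputs:

```python
import os
import tempfile

import torch
import torch.nn as nn

# Stand-in model; only the save/load pattern matters here.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

path = os.path.join(tempfile.mkdtemp(), 'model.pt')
torch.save(model.state_dict(), path)  # parameters only, no class pickling

# Rebuild the same architecture (possibly under a different library
# version) and load just the parameters into it.
restored = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
restored.load_state_dict(torch.load(path))

x = torch.randn(3, 4)
# Identical parameters give identical outputs.
print(torch.allclose(model(x), restored(x)))
```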

If you were just saving with `torch.save(model, 'model.pt')`, you could load it back with the older DGL version, save the parameter dictionary with the code above, and then load that dictionary in newer versions.


Many thanks again @BarclayII! I tested it and it was that exact issue; it works perfectly now! I had been trying to work around this since January, and I hadn’t thought it would be this easy :smiley:

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.