Stack several layers of NNConv

Hello everyone:

I am trying to create a model consisting of several NNConv layers. The problem is that the module takes three inputs (graph, node features, edge features) but returns only one output (the node features after the convolution), so I don't know which edge features I should feed into the next layer.

Below is the code I have tried so far. As you can see in the forward method, I use the same edge features for all layers. Is that correct? Do you have any advice to improve the code?

import torch as th
import dgl
from dgl.nn import NNConv


class MPNN(th.nn.Module):
    # hidden is a list of hidden-layer sizes, e.g. [64, 32]
    def __init__(self, num_feats, n_classes, hidden, num_edge_feats, aggregator_type='mean', bias=True, residual=False, norm=None, activation=None):
        super(MPNN, self).__init__()
        self._num_feats = num_feats
        self._n_classes = n_classes
        self._num_hidden_features = hidden
        self._num_edge_feats = num_edge_feats
        self._aggregator = aggregator_type
        self._activation = activation
        self._norm = norm

        # Input layer
        edge_function = self.edge_function(self._num_edge_feats, self._num_feats * self._num_hidden_features[0])
        self.NNconv_input = NNConv(self._num_feats, self._num_hidden_features[0], edge_function, self._aggregator, residual, bias)

        # Hidden layers
        self.layers = th.nn.ModuleList()
        for idx in range(1, len(self._num_hidden_features)):
            edge_function = self.edge_function(self._num_edge_feats, self._num_hidden_features[idx - 1] * self._num_hidden_features[idx])
            self.layers.append(NNConv(self._num_hidden_features[idx - 1], self._num_hidden_features[idx], edge_function, self._aggregator, residual, bias))

        # Output layer
        edge_function = self.edge_function(self._num_edge_feats, self._num_hidden_features[-1] * self._n_classes)
        self.NNConv_output = NNConv(self._num_hidden_features[-1], self._n_classes, edge_function, self._aggregator, residual, bias)

    @staticmethod
    def edge_function(f_in, f_out):
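        # NNConv expects this network to map edge features of shape (E, f_in)
        # to flattened weights of shape (E, f_out), with f_out = in_feats * out_feats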
        return th.nn.Sequential(
            th.nn.Linear(f_in, 10),
            th.nn.ReLU(),
            th.nn.Linear(10, f_out)
        )

    def forward(self, graph, feat, efeat):
        x = self.NNconv_input(graph, feat, efeat)

        # activation
        if self._activation is not None:
            x = self._activation(x)
        # normalization
        if self._norm is not None:
            x = self._norm(x)

        for layer in self.layers:
            x = layer(graph, x, efeat)
            # activation
            if self._activation is not None:
                x = self._activation(x)
            # normalization
            if self._norm is not None:
                x = self._norm(x)

        x = self.NNConv_output(graph, x, efeat)

        return x
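In case it is useful, this is how I call the model. It is a minimal sketch on a toy cycle graph; the sizes and variable names are made up for illustration:

import torch as th
import dgl

# toy graph: a 6-node directed cycle, so every node has an incoming edge
src = th.arange(6)
dst = (src + 1) % 6
g = dgl.graph((src, dst))

node_feats = th.randn(6, 5)   # 5 input features per node
edge_feats = th.randn(6, 3)   # 3 raw features per edge

model = MPNN(num_feats=5, n_classes=2, hidden=[16, 16],
             num_edge_feats=3, activation=th.relu)
logits = model(g, node_feats, edge_feats)
print(logits.shape)  # torch.Size([6, 2])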

One more thing: I want to use a more complex edge function than just a linear layer, so I created the method edge_function, which adds an additional layer. However, when I tried it, it didn't work very well. Am I doing something wrong there?

Thank you very much.

As you can see in the forward method, I use the same edge features for all layers. Is that correct? Do you have any advice to improve the code?

Yes, this is correct. You may find a full example here.
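To see why: NNConv never modifies the edge features themselves; each layer just reads them through its own edge network to produce that layer's per-edge weight matrices. Roughly, with the hidden sizes assumed from your model above, something like this happens inside:

import torch as th

E, F_edge = 6, 3            # number of edges, raw edge feature size
efeat = th.randn(E, F_edge)

# two independent edge networks, one owned by each NNConv layer
edge_net_1 = th.nn.Sequential(th.nn.Linear(F_edge, 10), th.nn.ReLU(), th.nn.Linear(10, 5 * 16))
edge_net_2 = th.nn.Sequential(th.nn.Linear(F_edge, 10), th.nn.ReLU(), th.nn.Linear(10, 16 * 16))

# the same raw edge features yield layer-specific weight tensors
w1 = edge_net_1(efeat).view(E, 5, 16)   # message weights for layer 1
w2 = edge_net_2(efeat).view(E, 16, 16)  # message weights for layer 2

So passing the same efeat to every layer is exactly what is intended.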

So I have created the method edge_function, which adds an additional layer. However, I have tried it and it doesn't work very well. Am I doing something wrong there?

How does it perform relative to a single linear layer?

Thank you very much for your answer.
About the multilayer edge function: I thought it was the problem, but it was actually a mistake in another part of the code. Now it works fine.
