Is a message-passing GCN a kind of spatial convolution?

I am new to both DGL and GNNs, so I don't yet fully understand how DGL works from the bottom up.

I followed this post to create my own GCN. It handles both node features and edge features.

My code looks like this:

import torch
import torch.nn as nn
import torch.nn.functional as F
import dgl
import dgl.function as fn

class GCNLayer(nn.Module):
    def __init__(self, ndim, edim, out_dim):
        super(GCNLayer, self).__init__()
        self.W_msg = nn.Linear(ndim + edim, out_dim)
        self.W_apply = nn.Linear(ndim + out_dim, out_dim)

    # message function: use both edge features and node features
    def msg_func_node_edge(self, edges):
        return {'m': F.relu(self.W_msg(torch.cat([edges.src['h'], edges.data['h']], dim=1)))}

    def forward(self, g_dgl, nfeats, efeats):
        with g_dgl.local_scope():
            g = g_dgl
            g.ndata['h'] = nfeats  # node features
            g.edata['h'] = efeats  # edge features

            # update_all: message passing; collect messages 'm' from neighbors
            # and aggregate them into 'h_neigh'
            g.update_all(self.msg_func_node_edge, fn.sum(msg='m', out='h_neigh'))

            # update the hidden state of the nodes
            g.ndata['h'] = F.relu(self.W_apply(torch.cat([g.ndata['h'], g.ndata['h_neigh']], dim=1)))
            return g.ndata['h']

class GCN(nn.Module):
    def __init__(self, ndim, edim, outdim):
        super(GCN, self).__init__()
        # create a 2-layer GCN
        self.layers = nn.ModuleList()
        self.layers.append(GCNLayer(ndim, edim, 8))   # 8: GCN hidden dim
        self.layers.append(GCNLayer(8, edim, outdim))

    def forward(self, g, nfeats, efeats):
        # apply the convolution twice
        for i, layer in enumerate(self.layers):
            nfeats = layer(g, nfeats, efeats)
        g.ndata['h'] = nfeats
        h = dgl.mean_nodes(g, 'h')  # graph-level readout
        return h
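
For reference, here is a minimal sketch of how I run the model on a toy graph (the graph, feature dimensions, and random features below are made up purely for illustration):

import dgl
import torch

# toy directed graph with edges 0->1, 1->2, 2->0
g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])))
nfeats = torch.randn(g.num_nodes(), 5)   # ndim = 5
efeats = torch.randn(g.num_edges(), 3)   # edim = 3

model = GCN(ndim=5, edim=3, outdim=4)
out = model(g, nfeats, efeats)           # mean_nodes gives a graph-level readout
print(out.shape)                         # torch.Size([1, 4])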

I learned that spectral GCNs require undirected graphs, while spatial GCNs can deal with both undirected and directed graphs. Is my GCN the spatial kind?

Also, when I printed the parameters of the net, I noticed that the adjacency matrix was not used. It looks more like an affine transformation plus a non-linear activation over adjacent nodes, is that right? Can I still call it a GCN?

Thank you for your help.

Strictly speaking, GCN was inspired by spectral graph theory, which assumes undirected graphs. In practice, however, it doesn't matter much whether you apply a GCN to a directed or an undirected graph, because you can also derive GCN from the standpoint of spatial GNNs, in particular if you replace the symmetrically normalized adjacency matrix with the unnormalized adjacency matrix.
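
To make the two views concrete (standard Kipf & Welling notation, not anything specific to your code): the spectral form is

$$H^{(l+1)} = \sigma\left(\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)}\right), \qquad \tilde{A} = A + I,$$

and reading it row by row gives the equivalent spatial form

$$h_i^{(l+1)} = \sigma\left(W^{(l)\top} \sum_{j \in \tilde{\mathcal{N}}(i)} \frac{h_j^{(l)}}{\sqrt{\tilde{d}_i \tilde{d}_j}}\right).$$

Dropping the normalization factors leaves $h_i^{(l+1)} = \sigma\big(W^{(l)\top} \sum_{j} h_j^{(l)}\big)$, a plain sum over neighbors, which is essentially what your message function plus fn.sum computes (with edge features concatenated in).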

In DGL, we replace adjacency matrices with DGLGraphs, which store the graph structure and can generate sparse adjacency matrices when necessary. In your particular case, the DGLGraph is an input argument to your model's forward computation and is not part of the model parameters. It is still used in the computation, though. You can still call it a GCN.
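
For illustration, a tiny sketch of that point; the graph below is made up, and the exact adjacency API differs across DGL versions (e.g. adjacency_matrix() in older releases, adj() in newer ones):

import dgl
import torch

# a small directed graph with edges 0->1, 1->2, 2->0
g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])))

# the graph is a forward() argument, not an nn.Parameter, so it never
# appears when you print the model's parameters
print(g.num_nodes(), g.num_edges())

# DGL can still materialize a sparse adjacency matrix on demand
# (API name varies by DGL version; adj() in newer releases)
print(g.adjacency_matrix())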

Thank you for your kind reply.

So when I write code like this, I'm actually not using the vanilla GCN by Thomas Kipf and Max Welling (2017).
Another question: DGL uses a message passing framework; is that inspired by Message Passing Neural Networks (MPNN)?

If I use DGL in my work, which publication by your team should I cite?

I'd guess DGL was motivated by MPNN to some extent in adopting the message passing framework. To cite DGL, you can find a BibTeX entry here under the "Cite" section.
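
For reference, MPNN (Gilmer et al., 2017) writes one round of message passing as

$$m_v^{(t+1)} = \sum_{w \in \mathcal{N}(v)} M_t\big(h_v^{(t)}, h_w^{(t)}, e_{vw}\big), \qquad h_v^{(t+1)} = U_t\big(h_v^{(t)}, m_v^{(t+1)}\big).$$

In DGL terms, $M_t$ corresponds to the message function, the sum to the reduce function, and $U_t$ to the node update, which are exactly the roles that msg_func_node_edge, fn.sum, and W_apply play in your code.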

I really appreciate your kind help. Thank you.
