I am new to both DGL and GNNs, so it may not be clear to me how DGL works from the bottom up.
I learned from this post how to create my own GCN. It handles both node features and edge features.
My code looks like this:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import dgl
import dgl.function as fn


class GCNLayer(nn.Module):
    def __init__(self, ndim, edim, out_dim):
        super(GCNLayer, self).__init__()
        self.W_msg = nn.Linear(ndim + edim, out_dim)
        self.W_apply = nn.Linear(ndim + out_dim, out_dim)

    # use both edge features and node features
    def msg_func_node_edge(self, edges):
        return {'m': F.relu(self.W_msg(torch.cat([edges.src['h'], edges.data['h']], dim=1)))}

    def forward(self, g_dgl, nfeats, efeats):
        with g_dgl.local_scope():
            g = g_dgl
            g.ndata['h'] = nfeats  # node features
            g.edata['h'] = efeats  # edge features
            # update_all: message passing; collect messages 'm' from neighbors
            # and sum-reduce them into 'h_neigh'
            g.update_all(self.msg_func_node_edge, fn.sum(msg='m', out='h_neigh'))
            # update the hidden state of the nodes
            g.ndata['h'] = F.relu(self.W_apply(torch.cat([g.ndata['h'], g.ndata['h_neigh']], dim=1)))
            return g.ndata['h']


class GCN(nn.Module):
    def __init__(self, ndim, edim, outdim):
        super(GCN, self).__init__()
        # create a 2-layer GCN
        self.layers = nn.ModuleList()
        self.layers.append(GCNLayer(ndim, edim, 8))  # 8: GCN hidden dim
        self.layers.append(GCNLayer(8, edim, outdim))

    def forward(self, g, nfeats, efeats):
        # apply the two convolution layers in sequence
        for layer in self.layers:
            nfeats = layer(g, nfeats, efeats)
        g.ndata['h'] = nfeats
        # graph-level readout: mean over node states
        h = dgl.mean_nodes(g, 'h')
        return h
```
I learned that spectral GCNs require undirected graphs, while spatial GCNs can handle both undirected and directed graphs. Is my GCN the spatial kind?
Also, when I printed the parameters of the net, I noticed that no adjacency matrix was used. It looks more like an affine transformation plus a non-linear activation over adjacent nodes. Is that right? Can I still call it a GCN?
Thank you for your help.