Dear all,
I've been trying to understand how to use a GCN for node classification that takes both node and edge features into account. Here is a glance at the features:
Node feature:
`g_dgl.ndata['n_feat']`: tensor([[[1, 1, 1], [1, 1, 1], [0, 1, 1]], …, [[1, 1, 0], [1, 1, 1], [1, 1, 1]]]) -> torch.Size([116054, 3, 3])
Edge feature 1:
`g_dgl.edata['e_feat1']`: tensor([[0, 0, 1], …, [0, 0, 1]]) -> torch.Size([346320, 3])
Edge feature 2:
`g_dgl.edata['e_feat2']`: tensor([[0, 1, 1], …, [1, 0, 1]]) -> torch.Size([346320, 3])
Edge feature 3:
`g_dgl.edata['e_feat3']`: tensor([[0, 1, 1], …, [1, 0, 1]]) -> torch.Size([346320, 3])
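Since the layers further down expect a single node-feature matrix of shape [N, D] and a single edge-feature matrix of shape [E, D], my plan (I am not sure whether this is the right approach) is to flatten each node's 3x3 matrix into a 9-dim vector and to concatenate the three edge features along the feature dimension:

```python
import torch as th

# Flatten each node's 3x3 feature matrix into a 9-dim vector
node_feats = g_dgl.ndata['n_feat'].float().reshape(g_dgl.num_nodes(), -1)  # [116054, 9]

# Concatenate the three 3-dim edge features into one 9-dim edge feature
edge_feats = th.cat([g_dgl.edata['e_feat1'],
                     g_dgl.edata['e_feat2'],
                     g_dgl.edata['e_feat3']], dim=1).float()               # [346320, 9]
```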
So far, I have gathered several pointers and come up with the following implementation of a GCN:
```python
import torch.nn as nn
import torch.nn.functional as F
from dgl.nn import GraphConv

class GCN(nn.Module):
    def __init__(self, dim_in, dim_hid, dim_out, activation, dropout):
        super(GCN, self).__init__()
        self.layers = nn.ModuleList()
        self.layers.append(GraphConv(dim_in, dim_hid, activation=activation))
        self.layers.append(GraphConv(dim_hid, dim_out))
        self.dropout = nn.Dropout(p=dropout)

    def forward(self, g_dgl, feats):
        h = feats
        for i, layer in enumerate(self.layers):
            if i != 0:
                h = self.dropout(h)
            h = layer(g_dgl, h)
        return h
```
The model is created with `model = GCN(dim_in, dim_hid, dim_out, F.relu, dropout)`, put into training mode with `model.train()`, and inference is performed with `model(g_dgl, g_dgl.ndata["n_feat"])`.
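For context, here is a minimal sketch of how I train it (the epoch count, learning rate, and the `label` field are placeholders for my actual setup, and I feed the flattened node features from the snippet above):

```python
import torch as th
import torch.nn.functional as F

dim_in, dim_hid, dim_out = 9, 16, 2  # placeholder sizes; dim_in matches the flattened node features
model = GCN(dim_in, dim_hid, dim_out, F.relu, dropout=0.5)
optimizer = th.optim.Adam(model.parameters(), lr=1e-3)

model.train()
for epoch in range(100):
    logits = model(g_dgl, node_feats)                     # node_feats: [116054, 9]
    loss = F.cross_entropy(logits, g_dgl.ndata['label'])  # 'label' is a hypothetical per-node label field
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```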
As seen, this class does not take the edge features into account yet. The following class was suggested to me:
```python
import torch as th
import torch.nn as nn
import torch.nn.functional as F
import dgl.function as fn

class GNNLayer(nn.Module):
    def __init__(self, node_dims, edge_dims, output_dims, activation):
        super(GNNLayer, self).__init__()
        self.W_msg = nn.Linear(node_dims + edge_dims, output_dims)
        self.W_apply = nn.Linear(node_dims + output_dims, output_dims)
        self.activation = activation

    def message_func(self, edges):
        # each message combines the source node's features with the edge's features
        return {'m': F.relu(self.W_msg(th.cat([edges.src['h'], edges.data['h']], dim=1)))}

    def forward(self, g, node_features, edge_features):
        with g.local_scope():
            g.ndata['h'] = node_features
            g.edata['h'] = edge_features
            # aggregate incoming messages by summation into 'h_neigh'
            g.update_all(self.message_func, fn.sum('m', 'h_neigh'))
            # combine each node's own features with its aggregated neighborhood
            g.ndata['h'] = self.activation(self.W_apply(th.cat([g.ndata['h'], g.ndata['h_neigh']], dim=1)))
            return g.ndata['h']
```
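With the tensors prepared earlier, I imagine a single layer would be called like this (the dims are just my guess):

```python
layer = GNNLayer(node_dims=9, edge_dims=9, output_dims=16, activation=F.relu)
h = layer(g_dgl, node_feats, edge_feats)  # h: [116054, 16]
```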
With that class, I replace `self.layers.append(GraphConv(dim_in, dim_hid, activation=activation))` with `self.layers.append(GNNLayer(3, 3, 1, activation=activation))`.
- However, I'm not sure whether just replacing that line already takes all three of the edge features I mentioned into account, given that `GNNLayer` expects a single `edge_features` tensor.
- In addition, with `self.layers.append(GraphConv(dim_in, dim_out, activation=activation))`, it is clear that the `dim_out` of the previous layer becomes the `dim_in` of the next layer. But when the `GNNLayer` class takes both node and edge features, how is the same principle applied (`dim_out` becomes `dim_in`)? My current guess is sketched after this list.
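Here is my guess at the stacking (this is exactly the part I am unsure about): the `output_dims` of one `GNNLayer` becomes the `node_dims` of the next, while `edge_dims` stays the same because the raw edge features are fed into every layer:

```python
class GNN(nn.Module):
    def __init__(self, node_dims, edge_dims, hid_dims, out_dims, activation, dropout):
        super(GNN, self).__init__()
        self.layers = nn.ModuleList()
        # output_dims of the first layer (hid_dims) becomes node_dims of the second;
        # edge_dims is unchanged because the same raw edge features enter each layer
        self.layers.append(GNNLayer(node_dims, edge_dims, hid_dims, activation))
        self.layers.append(GNNLayer(hid_dims, edge_dims, out_dims, activation))
        self.dropout = nn.Dropout(p=dropout)

    def forward(self, g, node_features, edge_features):
        h = node_features
        for i, layer in enumerate(self.layers):
            if i != 0:
                h = self.dropout(h)
            h = layer(g, h, edge_features)
        return h
```

If that chaining is correct, `GNN(9, 9, 16, num_classes, F.relu, 0.5)` (with `num_classes` a placeholder for my number of node classes) would replace the earlier `GCN`.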
Thanks all for any suggestions and directions.