Proper way to propagate edge representations


I am working on a project that classifies edges on a graph.
In my homogeneous multi-graph (simplified graph attached), nodes have attributes of size 3 and edges have attributes of size 2. Each edge has a binary label, and my goal is to classify the edge labels in the graph.

With the above, I now have two questions.
1. What is the appropriate way to organize the edge and node features together?
What I am doing now: I pad two zeros at the end of each node feature and three zeros at the beginning of each edge feature, so that every edge and node has a feature vector of the same size, 5.
Is this a proper way to organize features?
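For reference, the padding scheme described above can be sketched in plain PyTorch (the tensor contents below are made-up placeholders; only the sizes 3, 2, and 5 come from the question):

```python
import torch
import torch.nn.functional as F

# Placeholder features: 3 nodes with 3-dim features, 3 edges with 2-dim features.
node_feat = torch.randn(3, 3)
edge_feat = torch.randn(3, 2)

# Pad two zeros at the end of each node feature  -> size 5.
node_padded = F.pad(node_feat, (0, 2))
# Pad three zeros at the start of each edge feature -> size 5.
edge_padded = F.pad(edge_feat, (3, 0))

print(node_padded.shape)  # torch.Size([3, 5])
print(edge_padded.shape)  # torch.Size([3, 5])
```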

2. How should I propagate edge features?
What I am doing now: after update_all() on the nodes, in apply_edges() I add the source and destination node features to each edge: edges.data['ef'] = edges.src['nf'] + edges.data['ef'] + edges.dst['nf']
Same question here: is this a proper way to propagate edge features?
Or do you have any suggestions on propagating edge features?

Any suggestions would be appreciated.



Usually people use a fully connected layer (nn.Linear / keras.layers.Dense) to project vectors to the same dimension.

For example,

g.edata['ef_new'] = dense(g.edata['ef'])  # now ef_new and the node features have the same dimension

apply_edges() is one way to do this, and you can also do the same thing inside a message function.
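A minimal sketch of that projection in plain PyTorch, without DGL (the sizes 2 and 3 come from the question; the feature values are placeholders):

```python
import torch
import torch.nn as nn

# Edge features are 2-dim, node features are 3-dim in the question.
edge_feat = torch.randn(4, 2)    # 4 hypothetical edges, 2 features each

dense = nn.Linear(2, 3)          # project edge features to the node-feature size
edge_feat_new = dense(edge_feat) # stands in for g.edata['ef_new'] = dense(g.edata['ef'])

print(edge_feat_new.shape)       # torch.Size([4, 3])
```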

Hi VoVAllen,

Thanks for your tips. Will try.

Still looking for more suggestions… Any other tips?

For your two questions:

  1. what should be the appropriate way to organize the edge and node features together?

You don’t actually have to make the node and edge features the same size. In DGL, we only require that node/edge features with the same name have the same size. So you can write something like this and you will be fine:

g.ndata['x'] = torch.tensor([[0.5,1,1],[1,0,1],[0.1,0,0]])
g.edata['x'] = torch.tensor([[3,1],[0.5,1],[2,1]])
  2. how should I propagate edge features?

I assume that you want to compute new edge features from old edge features as well as those on the endpoints. There are many ways to do this and it is usually your design choice.

If your node and edge features have the same size, adding them up is technically OK. In practice I would probably use an MLP:

import torch
import torch.nn as nn
import torch.nn.functional as F

module = nn.Linear(node_feature_size * 2 + edge_feature_size, edge_feature_size)

def apply_edge(edges):
    x_src, x_edge, x_dst = edges.src['x'], edges.data['x'], edges.dst['x']
    return {'x': F.relu(module(torch.cat([x_src, x_edge, x_dst], 1)))}
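To make the shapes concrete, here is the same concatenate-then-project computation done by hand, without DGL, plus a final linear head for the binary edge labels from the original question. The toy graph (4 nodes, 3 edges) and all feature values are assumptions for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

node_feature_size, edge_feature_size = 3, 2

# Hypothetical toy graph: 3 edges given as (src, dst) index lists over 4 nodes.
src = torch.tensor([0, 1, 2])
dst = torch.tensor([1, 2, 3])
node_x = torch.randn(4, node_feature_size)
edge_x = torch.randn(3, edge_feature_size)

module = nn.Linear(node_feature_size * 2 + edge_feature_size, edge_feature_size)

# The computation apply_edges would run, done manually:
# gather endpoint features, concatenate with the edge feature, project, ReLU.
x_src, x_dst = node_x[src], node_x[dst]
edge_x_new = F.relu(module(torch.cat([x_src, edge_x, x_dst], dim=1)))

# For binary edge classification, a head maps each edge to one logit.
classifier = nn.Linear(edge_feature_size, 1)
logits = classifier(edge_x_new)

print(edge_x_new.shape)  # torch.Size([3, 2])
print(logits.shape)      # torch.Size([3, 1])
```

With DGL you would wrap the first computation in apply_edge and call g.apply_edges(apply_edge); the manual version above only serves to show the tensor shapes involved.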