Out-degree using edge weights

Hi

How can I go about summing the outgoing edge weights for a node? For example, in the GATConv layer, given a node i with neighborhood \mathcal{N}_i, the attention weights over the neighbors j \in \mathcal{N}_i satisfy:

\sum_{j \in \mathcal{N}_i} \alpha_{i,j} = 1

where \alpha_{i,j} is the attention that node i pays to node j. The GATConv layer computes the weighted sum of adjacent node features as:

graph.update_all(fn.u_mul_e('ft', 'a', 'm'),
                 fn.sum('m', 'ft'))
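
For reference, a minimal self-contained version of this (PyTorch backend; the toy graph, the 'ft' features, and the edge_softmax normalization are just for illustration):

import torch
import dgl
import dgl.function as fn
from dgl.nn.functional import edge_softmax

# Toy bidirected graph on 3 nodes; messages flow source -> destination.
src = torch.tensor([0, 1, 1, 2, 2, 0])
dst = torch.tensor([1, 0, 2, 1, 0, 2])
graph = dgl.graph((src, dst))

graph.ndata['ft'] = torch.randn(3, 4)               # node features
logits = torch.randn(graph.num_edges(), 1)          # unnormalized attention scores
graph.edata['a'] = edge_softmax(graph, logits)      # normalize over each node's in-edges

# Weighted sum of neighbor features, as in GATConv.
graph.update_all(fn.u_mul_e('ft', 'a', 'm'),
                 fn.sum('m', 'ft'))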

We can compute the weighted in-degree of each node (which sums to 1 per node, by definition) as:

graph.update_all(fn.copy_e('a', 'm'),
                 fn.sum('m', 'in_degree'))
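
Continuing the sketch above, a quick sanity check (every node in the toy graph has in-edges, so each entry should be exactly 1):

graph.update_all(fn.copy_e('a', 'm'),
                 fn.sum('m', 'in_degree'))
# Nodes without in-edges would get 0 from fn.sum rather than 1.
assert torch.allclose(graph.ndata['in_degree'],
                      torch.ones(graph.num_nodes(), 1))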

How can I go about computing the out_degree?

\sum_{j \in \mathcal{N}_i} \alpha_{j,i}

I came up with this solution:

rev = dgl.reverse(graph)
rev.edata['a'] = graph.edata['a']  # dgl.reverse does not copy edge features by default
rev.update_all(fn.copy_e('a', 'm'),
               fn.sum('m', 'out_degree'))
graph.ndata['out_degree'] = rev.ndata['out_degree']

but is there a more conventional / efficient way of doing this?


I think your solution is right. We also expose the dgl.ops API to users, which could simplify your code:

import dgl.ops as ops
rev = dgl.reverse(graph)
out_degree = ops.copy_e_sum(rev, a)  # suppose a is the edge weight tensor.
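
Alternatively (a plain PyTorch sketch, not a DGL API), you could scatter-add the weights onto each edge's source node, which avoids building the reversed graph; this assumes a has shape (num_edges, 1):

import torch

src, _ = graph.edges()
out_degree = torch.zeros(graph.num_nodes(), 1, dtype=a.dtype)
out_degree.index_add_(0, src, a)  # accumulate each edge's weight at its source node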

I think introducing an extra direction/reverse argument in dgl.ops would make the code more concise:

out_degree = ops.copy_e_sum(graph, a, direction='vu')

@kristianeschenburg WDYT?

Hi @zihao

That seems reasonable – should I create a pull request to update the ops.copy_e_sum function?


Hi @kristianeschenburg, I’ll create a PR and invite you to review.
