Graph conv without shared parameters

Hi,
I’m new to DGL. Is there a way to build a locally connected graph layer (for instance, a graph conv without shared parameters), where we learn distinct parameters for the edges of each layer rather than sharing them across edges?

I’ve built this in the past using raw TensorFlow with group-by tensor operations, where each node had a hierarchy of IDs representing which nodes it convolved with. That was very complex to implement and the codebase became a mess, so if I ever have to implement something similar again I hope I can rely on DGL.

Let me know if this is possible in DGL. Any advice on how to implement would be appreciated!

Thanks!

Assume the edge features form a tensor efeat of shape (E, M, 1) and the edge weights form a tensor eweight of shape (E, N, M), where E is the number of edges, M is the input edge feature size, and N is the output edge feature size. You can first perform a batched matrix multiplication using torch.bmm:

import torch

efeat = torch.bmm(eweight, efeat)   # (E, N, M) @ (E, M, 1) -> (E, N, 1)
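For concreteness, here is that step with placeholder sizes (E = 4 edges, M = 3, N = 5; the tensor values are random and purely illustrative):

import torch

E, M, N = 4, 3, 5
efeat = torch.randn(E, M, 1)     # one feature column per edge
eweight = torch.randn(E, N, M)   # one distinct weight matrix per edge

efeat = torch.bmm(eweight, efeat)
print(efeat.shape)               # torch.Size([4, 5, 1])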

Then perform message passing on the graph, using the transformed edge features as messages:

import dgl
import dgl.function as fn

g.edata['feat'] = efeat.squeeze(-1)   # (E, N, 1) -> (E, N)
g.update_all(fn.copy_e('feat', 'm'), fn.sum('m', 'feat'))
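Putting both steps together on a toy graph (the graph structure below is just a placeholder; note that E must match g.num_edges()):

import dgl
import dgl.function as fn
import torch

g = dgl.graph(([0, 1, 2, 0], [1, 2, 0, 2]))   # 3 nodes, 4 directed edges
E, M, N = g.num_edges(), 3, 5
efeat = torch.randn(E, M, 1)
eweight = torch.randn(E, N, M)

# Per-edge linear transform, then sum the resulting messages at the
# destination nodes.
g.edata['feat'] = torch.bmm(eweight, efeat).squeeze(-1)   # (E, N)
g.update_all(fn.copy_e('feat', 'm'), fn.sum('m', 'feat'))
print(g.ndata['feat'].shape)   # torch.Size([3, 5])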

Update: Previously the message function was fn.copy_u('feat', 'm'). As @BarclayII pointed out, fn.copy_u should be fn.copy_e, since the features live on the edges rather than the source nodes.

To add: copy_u should be copy_e instead.
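To actually learn the per-edge parameters, as the original question asks, one option is to store eweight as an nn.Parameter. Here is a minimal sketch, assuming a fixed graph whose edge order never changes; the class and its names are hypothetical, not a DGL built-in:

import dgl.function as fn
import torch
import torch.nn as nn

class PerEdgeGraphConv(nn.Module):   # hypothetical module, not part of DGL
    def __init__(self, num_edges, in_feats, out_feats):
        super().__init__()
        # A distinct (out_feats, in_feats) weight matrix per edge -- no sharing.
        self.weight = nn.Parameter(torch.randn(num_edges, out_feats, in_feats))

    def forward(self, g, efeat):
        # efeat: (E, in_feats, 1); messages: (E, out_feats)
        with g.local_scope():
            g.edata['feat'] = torch.bmm(self.weight, efeat).squeeze(-1)
            g.update_all(fn.copy_e('feat', 'm'), fn.sum('m', 'feat'))
            return g.ndata['feat']

Because the weights are indexed by edge ID, this only makes sense when the layer is always applied to the same graph with the same edge ordering.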
