I’m trying to convert an existing GNN that I implemented manually over to DGL, and I can’t seem to achieve the same performance when using built-in functions. I’m passing messages in a heterogeneous bipartite graph with two node types, one for each partition, and I’ve managed to partially reduce the issue to the following short piece of code, which is supposed to just sum neighbor embeddings:
G['l2c'].update_all(fn.copy_src('emb', 'm'), fn.sum('m', 'h'))
result = G.nodes['clause'].data['h']
Since I’m using built-in functions, I would have expected this to be translated into something like:
result = torch.mm(G.adjacency_matrix(etype='l2c'), G.nodes['literal'].data['emb'])
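For concreteness, here is a tiny pure-Python sketch of the reduction I believe both versions compute, on a made-up graph with 3 literal nodes and 2 clause nodes (the graph and embeddings are invented for illustration, not from my actual code):

```python
# Hypothetical toy graph: edges go literal -> clause.
edges = [(0, 0), (1, 0), (2, 1)]            # (src literal, dst clause)
emb = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # one embedding per literal

# Message passing: each clause sums the embeddings of its literal
# neighbors (what fn.copy_src + fn.sum does in the update_all call).
h = [[0.0, 0.0] for _ in range(2)]
for src, dst in edges:
    h[dst] = [a + b for a, b in zip(h[dst], emb[src])]

# Same reduction via a (dense, for simplicity) adjacency matrix
# A[clause][literal], mirroring the torch.mm version.
A = [[0.0] * 3 for _ in range(2)]
for src, dst in edges:
    A[dst][src] = 1.0
h_mm = [[sum(A[c][l] * emb[l][k] for l in range(3)) for k in range(2)]
        for c in range(2)]

assert h == h_mm  # both give [[4.0, 6.0], [5.0, 6.0]]
```

So the two formulations really are the same reduction, which is why I expected comparable performance.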
But the update_all version takes about 5-10 times longer than doing the sparse-dense multiplication manually, even though they return the same result. Am I missing something about how to do this correctly?