Hi @mufeili,
I want to ask a question about the `dgl.add_self_loop` function.
Here is a toy example:
```python
import torch
import dgl

g = dgl.graph((torch.tensor([0, 1, 2, 0]), torch.tensor([1, 2, 0, 2])))
g.ndata['feat'] = torch.tensor([1, 2, 3])
g.edata['w'] = torch.tensor([1, 2, 3, 4])
```
When I call `g = dgl.add_self_loop(g)`, the edge data becomes `{'w': tensor([1, 2, 3, 4, 0, 0, 0])}`, i.e. a `0` is appended for each new self-loop edge.
Now suppose I apply the edge weights during message passing in a GCN layer. Since the weight of every self-loop is 0, calling `add_self_loop` doesn't actually change the message passing result, right?
For reference, the edge-weight code from `GraphConv`:

```python
graph.edata['_edge_weight'] = edge_weight
aggregate_fn = fn.u_mul_e('h', '_edge_weight', 'm')
```
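To check my understanding, here is a small pure-Python sketch (no DGL, just mimicking `fn.u_mul_e` followed by a sum reduction on the toy graph above) showing that zero-weight self-loop edges contribute nothing to the aggregation:

```python
# Edges after dgl.add_self_loop on the toy graph above:
# the four original (src, dst, w) edges, plus one self-loop
# per node whose weight was padded with 0.
edges = [(0, 1, 1), (1, 2, 2), (2, 0, 3), (0, 2, 4),
         (0, 0, 0), (1, 1, 0), (2, 2, 0)]
feat = [1, 2, 3]  # node features

# Mimic fn.u_mul_e('h', '_edge_weight', 'm') + fn.sum('m', 'h'):
# each destination node sums feat[src] * w over its incoming edges.
out = [0, 0, 0]
for src, dst, w in edges:
    out[dst] += feat[src] * w

# Every self-loop term is feat[i] * 0, so dropping the three
# self-loop edges gives exactly the same result.
no_loop = [0, 0, 0]
for src, dst, w in edges[:4]:
    no_loop[dst] += feat[src] * w

print(out, no_loop)  # identical aggregations
```

So with the padded zeros, the self-loops are effectively no-ops in weighted aggregation; a common workaround (I believe) is to overwrite the trailing self-loop weights with 1 after calling `add_self_loop`.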