I was reading the GCN code in dgl-master/examples/pytorch/gcn/train.py, and I can't find where the code performs the degree normalization described in the paper.
I found this variable declared:
degs = g.in_degrees().float()
norm = torch.pow(degs, -0.5)
norm[torch.isinf(norm)] = 0
if cuda:
norm = norm.cuda()
g.ndata['norm'] = norm.unsqueeze(1)
but as far as I can tell, g.ndata['norm'] is never used in the rest of the code.
So my question is: where does this variable take effect? Or is the normalization already packaged inside dgl.nn.pytorch.GraphConv()?
In other words, is the GCNLayer class in gcn_spmv.py the same as the GraphConv module in dgl.nn.pytorch?
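For context, the snippet above computes the D^{-1/2} factors of the symmetric normalization from the GCN paper. A minimal sketch of what applying that normalization to an adjacency matrix looks like (NumPy is used here instead of torch purely for illustration, and the 3-node graph is a made-up example, not from the repo):

```python
import numpy as np

# Toy 3-node graph with self-loops (hypothetical example).
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)

# In-degrees, then the D^{-1/2} factors, mirroring
# norm = torch.pow(degs, -0.5) in the snippet above.
degs = A.sum(axis=0)
norm = degs ** -0.5
norm[np.isinf(norm)] = 0  # guard against isolated (degree-0) nodes

# Symmetric normalization from the GCN paper: D^{-1/2} A D^{-1/2}.
A_hat = norm[None, :] * A * norm[:, None]
```

Each edge weight A[i, j] is scaled by 1/sqrt(deg(i) * deg(j)), which is exactly what storing the per-node factors in g.ndata['norm'] enables during message passing.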
Hi, the GraphConv module computes the normalizer on the fly, because one of its arguments is a graph object. This is useful when the graph structure is dynamic. So the code you showed is not used by the GraphConv module; it should be moved to gcn_spmv.py. Thanks for pointing this out!
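To illustrate "computes the normalizer on the fly": the idea is that the layer derives the degree factors from the graph it receives at call time, rather than from precomputed node data. A toy NumPy sketch of that pattern (this illustrates the idea only; it does not reproduce GraphConv's actual internals):

```python
import numpy as np

def gcn_layer_forward(A, X, W):
    """Toy GCN forward pass that recomputes the degree normalizer
    from the adjacency matrix A on every call, so it keeps working
    even if the graph structure changes between calls."""
    degs = A.sum(axis=0)
    norm = np.where(degs > 0, degs ** -0.5, 0.0)  # D^{-1/2}, per call
    A_hat = norm[None, :] * A * norm[:, None]     # symmetric normalization
    return A_hat @ X @ W                          # activation omitted for brevity
```

Because the normalizer is derived inside the call, the caller never needs to precompute or store anything like g.ndata['norm'].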
OK, sorry for my late reply.
What I mean is: if I just use GraphConv(), do I still need the code above to declare and compute the normalization values?
I suspect GraphConv() computes them automatically.
Is my understanding correct?