Forgive me for my ignorance, but I can’t seem to figure out what feat_dict
is or how to access it in the example on https://doc.dgl.ai/tutorials/basics/5_hetero.html:
def forward(self, G, feat_dict):
    # The input is a dictionary of node features for each type
    funcs = {}
    for srctype, etype, dsttype in G.canonical_etypes:
        # Compute W_r * h
        Wh = self.weight[etype](feat_dict[srctype])
        # Save it in graph for message passing
        G.nodes[srctype].data['Wh_%s' % etype] = Wh
        # Specify per-relation message passing functions: (message_func, reduce_func).
        # Note that the results are saved to the same destination feature 'h', which
        # hints the type-wise reducer for aggregation.
        funcs[etype] = (fn.copy_u('Wh_%s' % etype, 'm'), fn.mean('m', 'h'))
    ...
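For what it's worth, my current understanding is that feat_dict is just a plain dictionary keyed by node-type name, with one feature matrix per type. A numpy stand-in (the type names and sizes here are made up, not from the tutorial):

```python
import numpy as np

# Hypothetical sizes -- not from the tutorial
num_authors, num_papers, in_size = 5, 7, 16

# feat_dict: one feature matrix per node type, keyed by node-type name
feat_dict = {
    'author': np.random.randn(num_authors, in_size),
    'paper': np.random.randn(num_papers, in_size),
}

# The layer then indexes it by source node type, as in
#   Wh = self.weight[etype](feat_dict[srctype])
print(feat_dict['author'].shape)  # (5, 16)
```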
I don’t see it explicitly used in the forward call:

def forward(self, G):
    h_dict = self.layer1(G, self.embed)
    ...
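If I'm tracing the call chain right, there is no separate feat_dict variable at the top level: self.embed is passed straight through as layer1's feat_dict argument (in the tutorial it's a dict of per-node-type embedding parameters built in __init__). A pure-Python stand-in of that plumbing (the class bodies below are simplified, not the tutorial's code):

```python
# Simplified stand-in for the tutorial's HeteroRGCN plumbing:
# forward(G) calls layer1(G, self.embed), so self.embed *is* the
# feat_dict that the layer receives.

class Layer:
    def forward(self, G, feat_dict):
        # Just report which node types it was given
        return sorted(feat_dict)

class Model:
    def __init__(self):
        # In the tutorial this is a dict of learnable embedding tensors,
        # one per node type; strings stand in for tensors here.
        self.embed = {'author': 'author-embedding', 'paper': 'paper-embedding'}
        self.layer1 = Layer()

    def forward(self, G):
        # self.embed is passed through as feat_dict
        return self.layer1.forward(G, self.embed)

print(Model().forward(G=None))  # ['author', 'paper']
```

So to inspect feat_dict at the top layer, I assume one could just look at model.embed.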
I tried modifying the code from that example for my own data, but I keep getting KeyError: 'h' when I run logits = model(g). I suspect it has something to do with the example's graph having some node or edge data attached that my data didn't add yet (e.g., with something like G.nodes['author'].data['data_type'] = data_tensor).
When I run through the example code (which works fine for me), I can't figure out how to access feat_dict directly. I did notice, however, that the “author” nodes in the example graph have 2 tensors of data:
>>> G.nodes['author'].data
{'Wh_writing': tensor([[-0.1177, 0.0685, 0.1610, ..., 0.1161, -0.2778, -0.0606],
[-0.1201, 0.0551, 0.1668, ..., 0.1063, -0.2688, -0.0790],
[-0.1154, 0.0608, 0.1625, ..., 0.1147, -0.2693, -0.0691],
...,
[-0.1147, 0.0511, 0.1690, ..., 0.0934, -0.2735, -0.0687],
[-0.1150, 0.0465, 0.1714, ..., 0.0888, -0.2744, -0.0668],
[-0.1069, 0.0594, 0.1608, ..., 0.1047, -0.2769, -0.0605]],
grad_fn=<AddmmBackward>), 'h': tensor([[ 0.1134, 0.0760, -0.1769, ..., -0.0628, 0.2023, -0.2436],
[ 0.1136, 0.0733, -0.1780, ..., -0.0668, 0.2072, -0.2475],
[ 0.1119, 0.0793, -0.1668, ..., -0.0486, 0.2022, -0.2488],
...,
[ 0.1181, 0.0576, -0.1734, ..., -0.0683, 0.2064, -0.2453],
[ 0.1169, 0.0723, -0.1631, ..., -0.0627, 0.2070, -0.2407],
[ 0.1130, 0.0713, -0.1882, ..., -0.0726, 0.1963, -0.2378]],
grad_fn=<CopyReduceBackward>)}
While my data only has one:
>>> my_g.nodes['author'].data
{'Wh_reviews': tensor([[ 0.2737, 0.2431, -0.0131, ..., 0.0392, 0.2737, 0.1629],
[ 0.2742, 0.2428, -0.0123, ..., 0.0359, 0.2758, 0.1602],
[ 0.2729, 0.2412, -0.0140, ..., 0.0388, 0.2742, 0.1618],
...,
[ 0.2744, 0.2422, -0.0111, ..., 0.0374, 0.2754, 0.1589],
[ 0.2752, 0.2439, -0.0137, ..., 0.0381, 0.2743, 0.1613],
[ 0.2738, 0.2447, -0.0124, ..., 0.0357, 0.2757, 0.1607]],
device='cuda:0', grad_fn=<AddmmBackward>)}
But I suspect that's also because “author” appears in two canonical edge types in the example graph, while in my graph “author” appears in only one.
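If that reading is right, it would explain the KeyError: the reduce step only writes 'h' onto node types that are the *destination* of at least one canonical edge type, and in my schema “author” is only ever a source. A toy check of that logic (plain Python, no DGL; the etype names are invented to mirror my setup):

```python
# 'h' is only produced for node types that appear as a destination
# (dsttype) of some canonical etype, since fn.mean('m', 'h') writes
# its result onto destination nodes.

# Example-style schema: 'author' is both a source and a destination
example_etypes = [('author', 'writing', 'paper'),
                  ('paper', 'written-by', 'author')]

# Hypothetical schema like mine: 'author' is only ever a source
my_etypes = [('author', 'reviews', 'paper')]

def ntypes_that_get_h(canonical_etypes):
    # Collect every node type that receives a reduced 'h' feature
    return {dst for _, _, dst in canonical_etypes}

print('author' in ntypes_that_get_h(example_etypes))  # True
print('author' in ntypes_that_get_h(my_etypes))       # False -> KeyError: 'h'
```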
Thanks in advance for your tips and helpful feedback!