Thank you for your reply,
The graph has 4 node types, including the target node; let's call them target_node, node_type_1, node_type_2, and node_type_3.
There are 3 relation types (undirected): target_node<>node_type_1, node_type_1<>node_type_2, and node_type_1<>node_type_3.
Each node type has a different feature dimension (a minimal sketch of the schema follows the list):
- target_node feature size: 8
- node_type_1 feature size: 22
- node_type_2 feature size: 3
- node_type_3 feature size: 3
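To make the setup concrete, here is a minimal sketch of how such a graph could be built. The edge lists and node counts are placeholders (not my real data), and each undirected relation is stored as two directed edge types:

```python
import torch
import dgl

# Placeholder node counts; the real graph is much larger.
num_nodes = {'target_node': 5, 'node_type_1': 10, 'node_type_2': 4, 'node_type_3': 4}

# Each undirected relation becomes two directed edge types (placeholder edges).
src, dst = torch.tensor([0, 1]), torch.tensor([0, 1])
graph_data = {
    ('target_node', 'target_node<>node_type_1', 'node_type_1'): (src, dst),
    ('node_type_1', 'node_type_1<>target_node', 'target_node'): (src, dst),
    ('node_type_1', 'node_type_1<>node_type_2', 'node_type_2'): (src, dst),
    ('node_type_2', 'node_type_2<>node_type_1', 'node_type_1'): (src, dst),
    ('node_type_1', 'node_type_1<>node_type_3', 'node_type_3'): (src, dst),
    ('node_type_3', 'node_type_3<>node_type_1', 'node_type_1'): (src, dst),
}
g = dgl.heterograph(graph_data, num_nodes_dict=num_nodes)

# Input features keep their native dimension per node type.
feats = {
    'target_node': torch.randn(num_nodes['target_node'], 8),
    'node_type_1': torch.randn(num_nodes['node_type_1'], 22),
    'node_type_2': torch.randn(num_nodes['node_type_2'], 3),
    'node_type_3': torch.randn(num_nodes['node_type_3'], 3),
}
```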
Right now, the workaround I'm using is to pass each node type's features through an MLP so that they all end up with the same dimension, but ideally I would like to use each node type's features as they are. So, in the convolution operation, I tried passing each relation's source feature size:
```python
rel_dict = {
    'node_type_3<>node_type_1': 3,
    'node_type_2<>node_type_1': 3,
    'node_type_1<>node_type_3': 22,
    'node_type_1<>node_type_2': 22,
    'node_type_1<>target_node': 22,
    'target_node<>node_type_1': 8,
}
self.conv = dglnn.HeteroGraphConv({
    rel: dglnn.GraphConv(rel_in_feat, out_feat, norm='right', weight=False, bias=False)
    for rel, rel_in_feat in rel_dict.items()
})
```
but this results in the following error when the feature tensors are stacked:
```
logits = model()[category]
  File "/home/jupyter/.cache/pypoetry/virtualenvs/dia-fraud-detection-DUVXqc5e-py3.7/lib64/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jupyter/notebook/datalake-fraud-detection/dia_fraud_detection/gnn_fraud_detection_dgl/pytorch_model_test.py", line 248, in forward
    h = layer(self.g, h)
  File "/home/jupyter/.cache/pypoetry/virtualenvs/dia-fraud-detection-DUVXqc5e-py3.7/lib64/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jupyter/notebook/datalake-fraud-detection/dia_fraud_detection/gnn_fraud_detection_dgl/pytorch_model_test.py", line 110, in forward
    hs = self.conv(g, inputs, mod_kwargs=wdict)
  File "/home/jupyter/.cache/pypoetry/virtualenvs/dia-fraud-detection-DUVXqc5e-py3.7/lib64/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jupyter/.cache/pypoetry/virtualenvs/dia-fraud-detection-DUVXqc5e-py3.7/lib64/python3.7/site-packages/dgl/nn/pytorch/hetero.py", line 179, in forward
    rsts[nty] = self.agg_fn(alist, nty)
  File "/home/jupyter/.cache/pypoetry/virtualenvs/dia-fraud-detection-DUVXqc5e-py3.7/lib64/python3.7/site-packages/dgl/nn/pytorch/hetero.py", line 221, in aggfn
    stacked = th.stack(inputs, dim=0)
RuntimeError: stack expects each tensor to be equal size, but got [14899, 3] at entry 0 and [14899, 8] at entry 2
```
How would one handle this, so that each node type can keep its own feature size?
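For reference, my current MLP workaround looks roughly like the sketch below (module and variable names are placeholders, not my real code): each node type gets its own linear projection to a common hidden size, so every relation's GraphConv can be built with the same input dimension.

```python
import torch.nn as nn
import dgl.nn as dglnn

# Native feature sizes per node type, as listed above.
in_feats = {'target_node': 8, 'node_type_1': 22, 'node_type_2': 3, 'node_type_3': 3}
hidden_feat, out_feat = 16, 16

class ProjectThenConv(nn.Module):
    def __init__(self, rel_names):
        super().__init__()
        # One linear projection per node type: native feature size -> hidden_feat.
        self.proj = nn.ModuleDict({
            ntype: nn.Linear(dim, hidden_feat) for ntype, dim in in_feats.items()
        })
        # Every relation now sees the same input dimension.
        self.conv = dglnn.HeteroGraphConv({
            rel: dglnn.GraphConv(hidden_feat, out_feat, norm='right')
            for rel in rel_names
        }, aggregate='sum')

    def forward(self, g, feats):
        # Project each node type's features to the shared hidden size, then convolve.
        h = {ntype: self.proj[ntype](x) for ntype, x in feats.items()}
        return self.conv(g, h)
```

(On the sketch graph above, this would be called as `ProjectThenConv(g.etypes)(g, feats)`.) What I would like is to drop this projection step and let each relation consume its source node type's features at their original size.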