RGCN for heterographs

Firstly, I wanted to thank all of you for creating this wonderful library for graph computations. I am working on a link prediction task with RGCNs involving a heterogeneous network with 3 kinds of nodes and several kinds of relationships between them. I am closely following two examples to achieve my goal:

  1. RGCN
  2. RGCN-hetero

My first question: in the rgcn-hetero example, is it possible for the node attributes of each node type to have different dimensions? (Are there any tutorials for this?)

Secondly, I am trying to write my own LinkPredict model class for heterographs, similar to example 1, by modifying example 2, which performs an EntityClassify task. If I understand correctly, I would need to modify the input-to-hidden (i2h) layer to account for the different input attribute sizes of my 3 node types. Currently it seems to me that in_feat is shared across ALL node types?

Sorry if my questions are a little naive. I’m just starting out with DGL. Many thanks!

That’s very much possible and is one of the reasons why we put a lot of effort into supporting heterogeneous graphs. Just set your node attributes using the nodes[ntype].data[name] syntax:

G = ...  # some heterograph
G.nodes['user'].data['h'] = torch.randn((G.number_of_nodes('user'), 5))  # dim=5
G.nodes['item'].data['h'] = torch.randn((G.number_of_nodes('item'), 3))  # dim=3

That’s exactly right.
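One common way to handle the shared in_feat is a per-node-type input projection before the graph convolutions. The sketch below is not from the rgcn-hetero example itself; the node types and sizes are made up for illustration. It maps each node type's raw features onto a shared hidden size with its own linear layer, so everything downstream can keep a single in_feat:

```python
import torch
import torch.nn as nn

# Hypothetical raw feature sizes per node type (assumed values).
in_dims = {'user': 5, 'item': 3}
hidden = 16  # shared hidden size all types are projected to

# One input-to-hidden projection per node type.
i2h = nn.ModuleDict({ntype: nn.Linear(dim, hidden)
                     for ntype, dim in in_dims.items()})

# Toy input features: 4 nodes per type, each with its own raw dimension.
feats = {ntype: torch.randn(4, dim) for ntype, dim in in_dims.items()}

# After projection, every node type has features of size `hidden`.
h = {ntype: torch.relu(i2h[ntype](x)) for ntype, x in feats.items()}
```

The ModuleDict keeps one learnable projection per node type, which is essentially what an i2h layer generalized to heterographs needs to do.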

Thanks a lot for the quick and informative reply. It’s really very reassuring to get your input on this.
One of the reasons I really like the DGL library is because of this very active community here. Keep up the great work!

Could you give a demo of how to modify the i2h layer? I am facing the same problem: the different feature sizes per node type cause a runtime error: stack expects each tensor to be equal size, but got [2743, 12] at entry 0 and [2743, 8] at entry 1

I assume 2743 is the number of nodes and 12, 8 are the feature sizes of different types? Then something like torch.cat([a, b], dim=1) should work. If you can provide an example reproducing the issue, we may be able to learn more about what’s going on.
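To make the distinction concrete, here is a small illustration (the sizes match the error message above): torch.stack requires identical shapes, while torch.cat along the feature dimension only requires matching row counts.

```python
import torch

a = torch.randn(2743, 12)  # features of node type A
b = torch.randn(2743, 8)   # features of node type B

# torch.stack needs every tensor to have the exact same shape,
# so it raises a RuntimeError for (2743, 12) vs (2743, 8).
stack_failed = False
try:
    torch.stack([a, b], dim=0)
except RuntimeError:
    stack_failed = True

# torch.cat along dim=1 only needs the row counts to agree.
c = torch.cat([a, b], dim=1)
print(c.shape)  # torch.Size([2743, 20])
```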

I’m having the same problem as @wdyreborn when it comes to different feature sizes for each node type. @mufeili, your answer seems right, but how can I change the stack call to torch.cat when it happens inside a PyTorch module?

I did not get that. Could you provide a code snippet and relate your problem to it?

Thank you for your reply,
The graph has 4 node types, including the target node; let’s call them target_node, node_type_1, node_type_2, and node_type_3.
There are 3 (undirected) relation types: target_node<>node_type_1, node_type_1<>node_type_2, node_type_1<>node_type_3.
Each node type has a different feature dimension:

  • target_node feature size: 8.
  • node_type_1 feature size: 22.
  • node_type_2 feature size: 3.
  • node_type_3 feature size: 3.

Right now, the workaround I’m using is passing these features through an MLP to project them all to the same dimension, but the ideal solution would be to use each node type’s features as they are. So in the convolution operation I tried passing the per-relation source node feature size:

        rel_dict = {'node_type_3<>node_type_1': 3, 'node_type_2<>node_type_1': 3,
                    'node_type_1<>node_type_3': 22, 'node_type_1<>node_type_2': 22,
                    'node_type_1<>target_node': 22, 'target_node<>node_type_1': 8}
        self.conv = dglnn.HeteroGraphConv({
                rel: dglnn.GraphConv(rel_in_feat, out_feat, norm='right', weight=False, bias=False)
                for rel, rel_in_feat in rel_dict.items()
        })

but this results in the following error when the feature tensors are stacked:

logits = model()[category]
  File "/home/jupyter/.cache/pypoetry/virtualenvs/dia-fraud-detection-DUVXqc5e-py3.7/lib64/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jupyter/notebook/datalake-fraud-detection/dia_fraud_detection/gnn_fraud_detection_dgl/pytorch_model_test.py", line 248, in forward
    h = layer(self.g, h)
  File "/home/jupyter/.cache/pypoetry/virtualenvs/dia-fraud-detection-DUVXqc5e-py3.7/lib64/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jupyter/notebook/datalake-fraud-detection/dia_fraud_detection/gnn_fraud_detection_dgl/pytorch_model_test.py", line 110, in forward
    hs = self.conv(g, inputs, mod_kwargs=wdict)
  File "/home/jupyter/.cache/pypoetry/virtualenvs/dia-fraud-detection-DUVXqc5e-py3.7/lib64/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jupyter/.cache/pypoetry/virtualenvs/dia-fraud-detection-DUVXqc5e-py3.7/lib64/python3.7/site-packages/dgl/nn/pytorch/hetero.py", line 179, in forward
    rsts[nty] = self.agg_fn(alist, nty)
  File "/home/jupyter/.cache/pypoetry/virtualenvs/dia-fraud-detection-DUVXqc5e-py3.7/lib64/python3.7/site-packages/dgl/nn/pytorch/hetero.py", line 221, in aggfn
    stacked = th.stack(inputs, dim=0)
RuntimeError: stack expects each tensor to be equal size, but got [14899, 3] at entry 0 and [14899, 8] at entry 2

How would one handle this problem with per-node-type feature sizes?

Could you provide a runnable code snippet to reproduce the issue? You may replace the real graph with a synthetic graph.