Heterogeneous Graph Regression

Hi there, I am trying to debug an issue with predicting a 25-value array for a given graph. For example, a graph could have 175 nodes and 300 edges, and that graph maps to a tensor of 25 values. There is only one type of node, with a single feature, but there are two types of edges (beats and loses_to). I have been trying to follow along with this tutorial.

import dgl
import dgl.nn as dglnn
import torch
import torch.nn as nn
import torch.nn.functional as F

class RGCN(nn.Module):
    def __init__(self, in_feats, hid_feats, out_feats, rel_names):
        super().__init__()

        # One GraphConv per relation; messages from different edge types
        # are aggregated by summation.
        self.conv1 = dglnn.HeteroGraphConv({
            rel: dglnn.GraphConv(in_feats, hid_feats)
            for rel in rel_names}, aggregate='sum')
        self.conv2 = dglnn.HeteroGraphConv({
            rel: dglnn.GraphConv(hid_feats, out_feats)
            for rel in rel_names}, aggregate='sum')

    def forward(self, graph, inputs):
        # inputs: a dict mapping each node type to its node feature tensor
        h = self.conv1(graph, inputs)
        h = {k: F.relu(v) for k, v in h.items()}
        h = self.conv2(graph, h)
        return h

class HeteroClassifier(nn.Module):
    def __init__(self, in_dim, hidden_dim, n_outputs, rel_names):
        super().__init__()

        self.rgcn = RGCN(in_dim, hidden_dim, hidden_dim, rel_names)
        self.classify = nn.Linear(hidden_dim, n_outputs)

    def forward(self, g):
        h = g.ndata['feat']
        h = self.rgcn(g, h)
        with g.local_scope():
            g.ndata['h'] = h
            # Calculate graph representation by average readout.
            hg = 0
            for ntype in g.ntypes:
                hg = hg + dgl.mean_nodes(g, 'h', ntype=ntype)
            return self.classify(hg)

Here is the training step:

# The relation names are the edge types as strings.
model = HeteroClassifier(1, 20, 25, ['beats', 'loses_to'])
opt = torch.optim.Adam(model.parameters())
for epoch in range(20):
    for batched_graph, labels in cf_dataloader:
        #print(batched_graph, labels)
        predictions = model(batched_graph)
        # Note: this should be F.mse_loss (or nn.MSELoss()); F.MSELoss does not exist.
        loss = F.mse_loss(predictions, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()

However, the error I am receiving is this:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-185-457c3968e0c7> in <module>()
      5     for batched_graph, labels in cf_dataloader:
      6         #print(batched_graph, labels)
----> 7         predictions = model(batched_graph)
      8         loss = F.MSELoss(predictions, labels)
      9         opt.zero_grad()

6 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1128         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130             return forward_call(*input, **kwargs)
   1131         # Do not call functions when jit is used
   1132         full_backward_hooks, non_full_backward_hooks = [], []

<ipython-input-170-34c06fd537f6> in forward(self, g)
     26     def forward(self, g):
     27         h = g.ndata['feat']
---> 28         h = self.rgcn(g, h)
     29         with g.local_scope():
     30             g.ndata['h'] = h['school']

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1128         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130             return forward_call(*input, **kwargs)
   1131         # Do not call functions when jit is used
   1132         full_backward_hooks, non_full_backward_hooks = [], []

<ipython-input-170-34c06fd537f6> in forward(self, graph, inputs)
     12     def forward(self, graph, inputs):
     13         # inputs is features of nodes
---> 14         h = self.conv1(graph, inputs)
     15         h = {k: F.relu(v) for k, v in h.items()}
     16         h = self.conv2(graph, h)

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1128         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130             return forward_call(*input, **kwargs)
   1131         # Do not call functions when jit is used
   1132         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/dgl/nn/pytorch/hetero.py in forward(self, g, inputs, mod_args, mod_kwargs)
    185                 if rel_graph.number_of_edges() == 0:
    186                     continue
--> 187                 if stype not in inputs:
    188                     continue
    189                 dstdata = self.mods[etype](

/usr/local/lib/python3.7/dist-packages/torch/_tensor.py in __contains__(self, element)
    784         raise RuntimeError(
    785             "Tensor.__contains__ only supports Tensor or scalar, but you passed in a %s." %
--> 786             type(element)
    787         )
    788 

RuntimeError: Tensor.__contains__ only supports Tensor or scalar, but you passed in a <class 'str'>.

Can anyone help me figure out what I am doing wrong? This community has been so helpful, but I haven't found anything related to what I am trying to accomplish.

What do you get from g.ndata['feat']? It would be great if you could provide a runnable script with synthetic graph data to reproduce the issue.

I'd be happy to provide that! What is typically the best way to share it? I have all of the data in a dataset, as well as a data loader, so you can see all of the graphs. I'm still new-ish to PyTorch, so I'm not sure of the standard way to share that. A pickle file?

To answer your other question, g.ndata['feat'] holds the single feature of each node, which is a value between 1 and 30.

Here is one of the graphs from the dataset:

(Graph(num_nodes={'school': 200},
       num_edges={('school', 'beats', 'school'): 375, ('school', 'loses_to', 'school'): 375},
       metagraph=[('school', 'school', 'beats'), ('school', 'school', 'loses_to')]),
 tensor([ 1.,  2.,  3.,  4.,  5.,  7.,  8.,  6.,  9., 10., 12., 11., 14., 13.,
         16., 19., 15., 17., 18., 20., 22., 30., 21., 30., 30.]))
g.ndata = {'feat': tensor([30., 30.,  1., 30., 30., 30., 22., 30., 30., 30., 30., 30., 30., 30.,
        30., 17., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30.,  3.,
        30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 18., 30.,
        30., 30.,  8., 30., 23., 30.,  6., 30., 30., 30., 30., 30., 30., 30.,
        30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30.,
        10., 30., 30., 30., 30., 30.,  7., 30., 30., 25., 30., 30., 14., 30.,
        19., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30.,
        30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 16.,
        30., 30., 30.,  4., 11., 21., 30., 24.,  2., 30., 30., 30., 30., 30.,
        30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 13.,
        30., 30., 30., 30., 30., 30., 30., 30., 30., 30.,  5., 30., 30., 30.,
        30., 30., 30., 30., 30.,  9., 30., 20., 30., 30., 30., 30., 30., 30.,
        30., 30., 30., 12., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30.,
        30., 30., 30., 30., 30., 15., 30., 30., 30., 30., 30., 30., 30., 30.,
        30., 30., 30., 30.])}
g.edata = defaultdict(<class 'dict'>, {'feat': {('school', 'beats', 'school'): tensor([17.,  7., 24., 23., 28., 38., 31., 27.,  1., 21., 15., 27.,  4.,  4.,
        24.,  1., 17., 10., 16., 28., 18., 13., 49.,  3., 35., 20., 35., 45.,
        10.,  8.,  7., 70., 18., 33., 26., 21., 25.,  3.,  6.,  3., 22., 18.,
        50., 63., 20., 45., 25.,  3., 45., 10., 32., 34., 34., 44., 14., 51.,
         3., 38.,  7.,  7.,  9., 66., 21.,  8.,  3., 49.,  8.,  3., 10., 32.,
        38., 14., 42., 14., 28., 18., 55., 14., 38.,  5., 19., 28.,  9., 37.,
        15., 21., 34., 38.,  6., 38., 48., 21., 14., 32., 20., 39., 26., 42.,
        10., 63., 28., 49.,  3., 44.,  2., 32., 49., 15., 57., 35., 37., 32.,
        11., 14.,  7.,  6., 43.,  2.,  2., 21., 21., 55.,  3., 38., 13., 48.,
        63., 18., 17., 25., 10.,  4.,  6., 19., 37.,  9., 29., 35., 39., 11.,
        19., 14., 23.,  1.,  7., 29., 16.,  3., 45., 21., 10.,  9.,  7., 22.,
         5., 14., 34., 32., 20.,  4., 14., 17., 31., 21.,  2., 22., 18.,  1.,
        38., 28., 17., 45., 44.,  3.,  7., 24., 55., 28.,  7., 54., 10.,  3.,
        21.,  6., 57., 10., 38., 18., 12., 57., 10., 18., 30., 32.,  4.,  2.,
        63., 11., 56.,  9.,  7., 21.,  3., 64., 21.,  7., 21.,  3., 10.,  2.,
        25., 12.,  1., 19.,  3., 72., 17., 14.,  8., 56., 21.,  7., 76.,  8.,
        24., 35.,  3., 31., 38.,  6., 24.,  5., 56.,  4., 14.,  3., 34., 41.,
         4., 31., 39., 49., 37., 14., 63., 14., 24., 48.,  1.,  5., 14., 70.,
        29., 41.,  4., 26., 25., 55.,  4., 14.,  3., 17., 10., 19., 22., 31.,
         7., 33., 46., 42., 17.,  7., 27., 28.,  9.,  3., 36., 31., 28., 31.,
         7., 11., 24., 12., 17., 27.,  7.,  7., 16., 29., 34., 17., 49., 14.,
         3., 14., 31., 19.,  2., 56., 25., 12., 18.,  4., 17., 17., 18., 21.,
        28., 22., 10., 14.,  3.,  7., 38., 53., 21., 39.,  5., 21.,  1.,  7.,
        17.,  7., 18., 20., 38., 23.,  3., 63., 21., 28., 12., 20., 42., 10.,
        12.,  1., 21., 24., 40., 14., 27.,  3.,  7., 29., 15.,  4., 21.,  3.,
        15.,  3., 35., 14., 22.,  1.,  7., 47., 41.,  4., 24., 33., 49.,  8.,
         6.,  3., 20., 23.,  3.,  3.,  7., 10., 31.,  3., 10.],
       dtype=torch.float64), ('school', 'loses_to', 'school'): tensor([17.,  7., 24., 23., 28., 38., 31., 27.,  1., 21., 15., 27.,  4.,  4.,
        24.,  1., 17., 10., 16., 28., 18., 13., 49.,  3., 35., 20., 35., 45.,
        10.,  8.,  7., 70., 18., 33., 26., 21., 25.,  3.,  6.,  3., 22., 18.,
        50., 63., 20., 45., 25.,  3., 45., 10., 32., 34., 34., 44., 14., 51.,
         3., 38.,  7.,  7.,  9., 66., 21.,  8.,  3., 49.,  8.,  3., 10., 32.,
        38., 14., 42., 14., 28., 18., 55., 14., 38.,  5., 19., 28.,  9., 37.,
        15., 21., 34., 38.,  6., 38., 48., 21., 14., 32., 20., 39., 26., 42.,
        10., 63., 28., 49.,  3., 44.,  2., 32., 49., 15., 57., 35., 37., 32.,
        11., 14.,  7.,  6., 43.,  2.,  2., 21., 21., 55.,  3., 38., 13., 48.,
        63., 18., 17., 25., 10.,  4.,  6., 19., 37.,  9., 29., 35., 39., 11.,
        19., 14., 23.,  1.,  7., 29., 16.,  3., 45., 21., 10.,  9.,  7., 22.,
         5., 14., 34., 32., 20.,  4., 14., 17., 31., 21.,  2., 22., 18.,  1.,
        38., 28., 17., 45., 44.,  3.,  7., 24., 55., 28.,  7., 54., 10.,  3.,
        21.,  6., 57., 10., 38., 18., 12., 57., 10., 18., 30., 32.,  4.,  2.,
        63., 11., 56.,  9.,  7., 21.,  3., 64., 21.,  7., 21.,  3., 10.,  2.,
        25., 12.,  1., 19.,  3., 72., 17., 14.,  8., 56., 21.,  7., 76.,  8.,
        24., 35.,  3., 31., 38.,  6., 24.,  5., 56.,  4., 14.,  3., 34., 41.,
         4., 31., 39., 49., 37., 14., 63., 14., 24., 48.,  1.,  5., 14., 70.,
        29., 41.,  4., 26., 25., 55.,  4., 14.,  3., 17., 10., 19., 22., 31.,
         7., 33., 46., 42., 17.,  7., 27., 28.,  9.,  3., 36., 31., 28., 31.,
         7., 11., 24., 12., 17., 27.,  7.,  7., 16., 29., 34., 17., 49., 14.,
         3., 14., 31., 19.,  2., 56., 25., 12., 18.,  4., 17., 17., 18., 21.,
        28., 22., 10., 14.,  3.,  7., 38., 53., 21., 39.,  5., 21.,  1.,  7.,
        17.,  7., 18., 20., 38., 23.,  3., 63., 21., 28., 12., 20., 42., 10.,
        12.,  1., 21., 24., 40., 14., 27.,  3.,  7., 29., 15.,  4., 21.,  3.,
        15.,  3., 35., 14., 22.,  1.,  7., 47., 41.,  4., 24., 33., 49.,  8.,
         6.,  3., 20., 23.,  3.,  3.,  7., 10., 31.,  3., 10.],
       dtype=torch.float64)}})

Let me know if there’s anything else that I could provide to make things more helpful.

I think the error is due to a wrong input format for the RGCN module. It expects a dictionary with the single key 'school' whose value is the node feature tensor of the school nodes. A heterogeneous graph typically has multiple node types, hence a HeteroGraphConv module expects such a dictionary.
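A minimal sketch of the expected format (assuming the model and batched_graph variables from your training loop above):

# HeteroGraphConv expects node features as a dict keyed by node type,
# even when the graph has only a single node type.
inputs = {'school': batched_graph.nodes['school'].data['feat']}
h = model.rgcn(batched_graph, inputs)
# h is again a dict: {'school': tensor of shape (num_school_nodes, hidden_dim)}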

So do you think that in the forward function, h = g.ndata['feat'] is what is causing the issues because it is expecting a dictionary of multiple node types? Would I just need to regenerate that portion of the code to have it return something like this?

{'school' : tensor([30., 30.,  1., 30., 30., 30., 22., 30., 30., 30., 30., 30., 30., 30.,
        30., 17., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30.,  3.,
        30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 18., 30.,
        30., 30.,  8., 30., 23., 30.,  6., 30., 30., 30., 30., 30., 30., 30.,
        30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30.,
        10., 30., 30., 30., 30., 30.,  7., 30., 30., 25., 30., 30., 14., 30.,
        19., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30.,
        30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 16.,
        30., 30., 30.,  4., 11., 21., 30., 24.,  2., 30., 30., 30., 30., 30.,
        30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 13.,
        30., 30., 30., 30., 30., 30., 30., 30., 30., 30.,  5., 30., 30., 30.,
        30., 30., 30., 30., 30.,  9., 30., 20., 30., 30., 30., 30., 30., 30.,
        30., 30., 30., 12., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30.,
        30., 30., 30., 30., 30., 15., 30., 30., 30., 30., 30., 30., 30., 30.,
        30., 30., 30., 30.])}

Also, is this tensor correct? Should it be reshaped to be an N x 1 tensor, and not 1 x N?

Would I just need to regenerate that portion of the code to have it return something like this?

Yes

Also, is this tensor correct? Should it be reshaped to be an N x 1 tensor, and not 1 x N?

It should be N x 1. Also, you probably want to normalize it.
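For what it's worth, a minimal sketch of that preprocessing (assuming the features live in g.ndata['feat'] and range from 1 to 30, as described above):

feat = g.ndata['feat'].float().view(-1, 1)  # reshape (N,) -> (N, 1)
g.ndata['feat'] = feat / 30.0               # scale into [0, 1]; max value is 30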

Got it to work! Thank you so much @mufeili, it really means a lot. Hoping I won't have too many more questions along the way, but I'm sure I'll be able to find answers in some of the other threads.

For anyone in the future who runs into this problem, this is what I had to change my forward function to.

    def forward(self, g):
        # Before: h = g.ndata['feat']
        # HeteroGraphConv expects a dict keyed by node type.
        h = {'school': g.ndata['feat']}
        h = self.rgcn(g, h)
        with g.local_scope():
            g.ndata['h'] = h['school']
            # Calculate graph representation by average readout.
            hg = 0
            for ntype in g.ntypes:
                hg = hg + dgl.mean_nodes(g, 'h', ntype=ntype)
            return self.classify(hg)
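
In the same spirit as the earlier request for a runnable script, here is a sketch that builds one synthetic graph and runs the fixed model on it (the sizes and random data are made up; it assumes the HeteroClassifier above with the corrected forward):

import dgl
import torch

# One node type ('school'), two edge types ('beats', 'loses_to').
num_nodes, num_edges = 175, 300
src1, dst1 = torch.randint(0, num_nodes, (2, num_edges))
src2, dst2 = torch.randint(0, num_nodes, (2, num_edges))
g = dgl.heterograph({
    ('school', 'beats', 'school'): (src1, dst1),
    ('school', 'loses_to', 'school'): (src2, dst2),
})
# One feature per node, reshaped to (N, 1) and scaled into [0, 1].
g.nodes['school'].data['feat'] = torch.randint(1, 31, (num_nodes, 1)).float() / 30.0

model = HeteroClassifier(1, 20, 25, ['beats', 'loses_to'])
pred = model(g)
print(pred.shape)  # expected: torch.Size([1, 25]) for a single, unbatched graph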

I meant to revisit your point about normalizing the features. I'm seeing different functions in DGL for normalization that seem to contradict traditional normalization in machine learning. For example, dgl.transforms.RowFeatNormalizer normalizes each value in a row so that the row adds up to 1. Why would someone need to do that? If I bring in 15 different features, each representing values on a different scale, shouldn't each feature be normalized on its own scale across all nodes? That's the "traditional" way of normalizing in machine learning, so I'm trying to figure out whether I'm missing something here.

I actually created a separate discussion here: Graph / Node / Edge Normalization Techniques

I agree with you that this highly depends on the particular scenario.

A common scenario is citation networks, where nodes represent papers. Often the node features are generated based on paper abstracts with methods like bag-of-words. RowFeatNormalizer then yields a distribution over these words for each paper.
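A made-up illustration of the difference between the two schemes:

import torch

# Bag-of-words counts: rows are papers, columns are vocabulary words.
counts = torch.tensor([[2., 0., 1., 1.],
                       [0., 3., 0., 2.]])

# Row normalization (what RowFeatNormalizer does): each paper becomes a
# distribution over words, so every row sums to 1.
row_norm = counts / counts.sum(dim=1, keepdim=True)

# "Traditional" per-feature normalization: each column (feature) is scaled
# using statistics computed over all nodes.
col_norm = (counts - counts.mean(dim=0)) / (counts.std(dim=0) + 1e-8)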
