I am trying to apply AttentiveFP to a set of graphs of mine, and I keep getting the error below when executing the `apply_edges1` function in the `GetContext` class.
```python
def apply_edges1(edges):
    """Edge feature update."""
    return {'he1': torch.cat([edges.src['h'][0], edges.data['e']], dim=1)}
```
Every time, the error says that 2 extra rows exist in the edge update. For example, I tried this function on one sample graph in my dataset with 220 edges and 55 nodes, and the error says: `DGLError: Expected data to have 220 rows, got 222`. If I try it on another sample with 100 edges, it says `DGLError: Expected data to have 100 rows, got 102`. In case you are wondering why I added `[0]` to `edges.src['h']`: it's because of the difference between the edge feature size (1) and the node feature size (2), as otherwise I would get a concatenation error due to the dimension mismatch.
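For reference, here is a minimal plain-PyTorch sketch of the shapes I would expect in this update (the sizes here are hypothetical, mirroring my 220-edge sample; no DGL graph involved). As far as I understand, `edges.src['h']` is already gathered per edge, so its first dimension should equal the number of edges, and differing feature widths are fine to concatenate along `dim=1`:

```python
import torch

num_edges = 220
# Hypothetical per-edge tensors mimicking what DGL passes to apply_edges:
src_h = torch.randn(num_edges, 2)  # like edges.src['h']: node feature size 2, gathered per edge
e = torch.randn(num_edges, 1)      # like edges.data['e']: edge feature size 1

# Concatenating along dim=1 keeps the row count at num_edges,
# which is the row count DGL expects from an edge update.
he1 = torch.cat([src_h, e], dim=1)
print(he1.shape)  # torch.Size([220, 3])
```

In this standalone check the concatenation works without any `[0]` indexing, so I suspect the real tensors in my graph have different shapes than I think they do.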
Any help is appreciated. Where are the 2 extra rows coming from?
Best regards
Ali