Problem with understanding GATConv output shape

Hi! I’m using a model with 2 GATConv layers followed by an FC layer for binary graph classification.

For a batch size of 1:

  1. Shape of input features is [86, 11] (86 nodes, 11 features)
  2. Shape of features after the 1st GATConv: [86, 5, 64], which is flattened to [86, 320] (86 nodes, 5 heads, hidden dim 64)
  3. Similarly, the shape of features from the last GATConv is [86, 64]
  4. Output of the FC layer comes out as [86, 1]

So as expected, the shape of my label is [1]

As I am using CELoss, I don’t face an error while computing the loss, but this loss calculation is prone to error, right?

Shouldn’t the output of the last layer be of shape [1] too?

Sounds like you need to compute graph representations out of node representations before loss computation.
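For illustration, here is a minimal NumPy sketch of a mean readout (shapes match the numbers above; the weights are made up) that collapses the [86, 64] node embeddings into a single graph embedding before the FC layer, so the final output has shape [1] like the label:

```python
import numpy as np

rng = np.random.default_rng(0)

# Node embeddings after the last GATConv: 86 nodes, hidden dim 64
h = rng.standard_normal((86, 64))

# Mean readout: average over the node dimension -> one graph embedding
g = h.mean(axis=0)           # shape (64,)

# Hypothetical FC layer mapping the graph embedding to a single logit
W = rng.standard_normal((64, 1))
b = np.zeros(1)
logit = g @ W + b            # shape (1,) -- matches the label shape [1]

print(logit.shape)
```

Without the readout, the FC layer is applied per node, which is why the output came out as [86, 1].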


Oh, I just realized this was an error in my code. While analysing the graph outputs, I had redefined the forward() of the custom model and forgot to add the readout function before the FC layer.

Please go ahead and delete this topic.
I am so sorry for the trouble.