Self-Loop Edge Attention Weight vs Node Importance

I extracted the attention weight of each edge from a GATConv layer. As far as I understand from the documentation, when feeding a graph to DGL we are expected to add self-loops by convention, so I also got attention weights for the self-loop edges. Can these self-loop attention weights be treated as the importance of the corresponding nodes? Is this related to self-attention?
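For context, here is a minimal sketch of how the per-edge attention weights, including those on the self-loop edges, can be read out of dgl.nn.GATConv by passing get_attention=True to its forward call. The toy graph, feature sizes, and variable names are my own illustrative assumptions, not taken from the original post:

```python
import torch
import dgl
from dgl.nn import GATConv

# Toy directed graph 0->1, 1->2, 2->0, plus the self-loops GATConv expects
g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])))
g = dgl.add_self_loop(g)

feat = torch.randn(g.num_nodes(), 8)
conv = GATConv(in_feats=8, out_feats=4, num_heads=2)

# get_attention=True makes forward() also return the per-edge attention
# tensor of shape (num_edges, num_heads, 1)
out, attn = conv(g, feat, get_attention=True)

# Edge IDs of the self-loop edges (node i -> node i)
nodes = torch.arange(g.num_nodes())
self_loop_eids = g.edge_ids(nodes, nodes)

# Attention each node assigns to itself, one value per head
self_loop_attn = attn[self_loop_eids].squeeze(-1)   # shape (num_nodes, num_heads)
print(self_loop_attn)
```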

Can these self-loop attention weights be treated as the importance of the corresponding nodes?

If so, how do you plan to use the attention scores of the self-loops?

Is this related to self-attention?

I don’t think the concept of self-attention matters here.
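One point worth keeping in mind (my own addition, not from the reply): GAT normalizes the attention coefficients with a softmax over each destination node's incoming edges, self-loop included, so a self-loop weight only says how much a node attends to itself relative to its neighbours, not how important the node is in absolute terms. A quick check, assuming the g and attn variables from the sketch above:

```python
import torch

# attn has shape (num_edges, num_heads, 1); GAT's softmax runs over the
# incoming edges of each destination node, so the sums below should be ~1
dst = g.edges()[1]                                  # destination node of every edge
per_node = torch.zeros(g.num_nodes(), attn.shape[1])
per_node.index_add_(0, dst, attn.squeeze(-1))       # sum attention per destination, per head
print(per_node)                                     # roughly 1.0 for every node and head
```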
