I am confused about the implementation of GNNExplainer. In the code, removing an edge is replaced by setting its edge-weight to zero. However, when reducing messages with, e.g., ‘mean’ aggregation, those zero-weighted edges are still counted in the denominator of the mean, right? Is this okay?
Yes, you are right. With a zero edge-weight, any message passing through that edge is multiplied by zero and becomes all zeros in every dimension. Under ‘sum’ aggregation this is exactly equivalent to removing the edge; under ‘mean’ aggregation the zeroed messages contribute nothing to the numerator, but they do still count in the denominator, so the result is not strictly identical to true edge removal.
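Here is a tiny numeric sketch of that denominator effect (plain NumPy with made-up numbers, not the actual implementation):

```python
import numpy as np

# Hypothetical node with three incoming messages, one row per edge.
msgs = np.array([[2.0, 4.0],
                 [6.0, 2.0],
                 [4.0, 6.0]])
w = np.array([1.0, 1.0, 0.0])   # zero weight soft-"removes" the third edge

weighted = msgs * w[:, None]    # zeroed message adds nothing to the sum

mean_with_zero = weighted.mean(axis=0)      # denominator is still 3
mean_truly_removed = msgs[:2].mean(axis=0)  # denominator is 2 after real removal

print(mean_with_zero)      # [2.667 2.0]  -> shrunk toward zero
print(mean_truly_removed)  # [4.0   3.0]
```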
Keep in mind, though, that the edge-weights are not discrete (either 1 or 0) but continuous values that are learned by the GNNExplainer algorithm. So during training these edge-weights change, and they scale the messages passing through their edges.
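For concreteness, the learnable mask is typically parameterized along these lines (a sketch only; the variable names are made up):

```python
import torch

num_edges = 10  # hypothetical edge count

# One learnable logit per edge; sigmoid keeps the resulting
# edge weights continuous in (0, 1) and differentiable.
edge_mask_logits = torch.nn.Parameter(torch.randn(num_edges) * 0.1)
edge_weight = torch.sigmoid(edge_mask_logits)
```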
Does this answer your question?
But continuous edge weights still distort the normal magnitude of the aggregated node features. In other words, since every weight is below 1, the aggregated node features tend to be smaller than before.
That is exactly what GNNExplainer exploits. If the smaller node features after aggregation make no difference to the final prediction, the corresponding edges must be unimportant. We can then cut them off, leaving only the most important edges and nodes to form a subgraph, which becomes the explanation for that specific node.
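Putting the pieces together, a schematic sketch of that procedure could look as follows. This is not the actual GNNExplainer implementation: the paper also regularizes the mask entropy and learns a node-feature mask, both omitted here; `explain_node` and all hyperparameters are invented, and the `edge_weight` argument to `model` is an assumption about the model's signature.

```python
import torch
import torch.nn.functional as F

def explain_node(model, node_idx, x, edge_index,
                 epochs=100, lr=0.01, sparsity=0.01, threshold=0.5):
    """Learn a soft edge mask for `node_idx`, then keep high-weight edges.

    Assumes `model(x, edge_index, edge_weight)` scales each message by
    its edge weight before aggregation -- not every GNN supports this.
    """
    num_edges = edge_index.size(1)
    logits = torch.nn.Parameter(torch.randn(num_edges) * 0.1)
    opt = torch.optim.Adam([logits], lr=lr)

    # The prediction we want the masked graph to preserve.
    with torch.no_grad():
        ones = torch.ones(num_edges)
        target = model(x, edge_index, ones).argmax(dim=-1)[node_idx]

    for _ in range(epochs):
        opt.zero_grad()
        edge_weight = torch.sigmoid(logits)   # continuous weights in (0, 1)
        out = model(x, edge_index, edge_weight)
        # Stay faithful to the original prediction while pushing the
        # mask toward sparsity, so unimportant edges drift to zero.
        loss = (F.cross_entropy(out[node_idx].unsqueeze(0),
                                target.unsqueeze(0))
                + sparsity * edge_weight.sum())
        loss.backward()
        opt.step()

    # Edges whose learned weight stays high are the important ones;
    # the rest are cut off, leaving the explanatory subgraph.
    keep = torch.sigmoid(logits) > threshold
    return edge_index[:, keep]
```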