Importance of features by GNNExplainer

Hello, I am using GNNExplainer to extract the importance of the node features in my graph (which has 10 node features).

Referring to the GNNExplainer documentation page, I can extract the importance of the features of a subgraph as shown here:

feat_mask
tensor([0.2362, 0.2497, 0.2622, 0.2675, … , 0.2649, 0.2962, 0.2533])
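
For context, this is roughly how I obtained the mask (a minimal sketch, assuming DGL's GNNExplainer; model, graph, and feat are placeholders for my trained model, DGLGraph, and node feature tensor):

```python
from dgl.nn import GNNExplainer

# Sketch: explain the prediction for a single node.
# `model` is assumed to follow the forward signature GNNExplainer expects,
# i.e. forward(graph, feat, eweight=None); num_hops should match its depth.
explainer = GNNExplainer(model, num_hops=2)

# explain_node extracts a node-centered subgraph and learns a mask
# over the 10 node features of that subgraph
new_center, sg, feat_mask, edge_mask = explainer.explain_node(0, graph, feat)
print(feat_mask)
```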

If I want to extract the importance of a feature for the entire graph, should I calculate the average of the feat_mask over all subgraphs? Or is there another way?

For node classification, yes, you can compute the feat_mask for all nodes and then take the average. For graph classification, you only need to do it once.
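
The node classification case looks roughly like this (a minimal sketch, again with model, graph, and feat as placeholders):

```python
import torch
from dgl.nn import GNNExplainer

# Sketch: explain every node, then average the per-node feature masks
# to get one importance score per feature.
explainer = GNNExplainer(model, num_hops=2)

masks = []
for node_id in range(graph.num_nodes()):
    # explain_node returns the re-mapped center node id, the extracted
    # subgraph, the feature mask, and the edge mask
    _, _, feat_mask, _ = explainer.explain_node(node_id, graph, feat)
    masks.append(feat_mask)

# One importance value per node feature, averaged over all nodes
avg_feat_mask = torch.stack(masks).mean(dim=0)
```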

Thanks so much, @mufeli.

I was doing graph classification.

For graph classification, you only need to do it once.

Is it just the feat_mask of any given subgraph? Or is it something else entirely?
Sorry for my lack of knowledge.

It is based on the entire graph and the features of all nodes in the graph.

Thank you for your reply again, @mufeli.

The value of feat_mask is different for each subgraph.

It is based on the entire graph and the features of all nodes in the graph.

For graph classification, does this mean that the feat_mask of any subgraph and the feat_mask of the entire graph are the same?

For graph classification, the GNNExplainer module outputs a feat_mask for a graph. Subgraphs do not apply here. For node classification, subgraphs apply because we want to explain predictions for individual nodes and hence extract node-centered subgraphs.
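
In code, the graph classification case is a single call over the whole graph (a sketch with placeholder model, graph, and feat, assuming DGL's GNNExplainer):

```python
from dgl.nn import GNNExplainer

# Sketch: one explanation for the entire graph, no subgraph extraction.
explainer = GNNExplainer(model, num_hops=2)

# feat_mask has one value per node feature (10 in your case) and is shared
# by all nodes in the graph; edge_mask has one value per edge.
feat_mask, edge_mask = explainer.explain_graph(graph, feat)
```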
