Get sparse feature importance from GNNExplainer

Hi everyone!

I have tried increasing beta1 and beta2 to larger values in order to get a sparse feature importance vector, but no matter how large I make the two parameters, the feature importances are not sparse.

Does ‘sparse’ mean the feature importance vector should contain zeros? If so, why does GNNExplainer fail to output sparse results?

Thanks a lot!

GNNExplainer controls sparsity by adding penalties on the mean value and the entropy of the masks. Have you observed a drop in these two metrics after enlarging beta1 and beta2?
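To monitor those two quantities, you can compute them directly from the returned feature mask. A minimal pure-Python sketch (the hard-coded values here are just a stand-in for your actual `feat_mask`, which holds values in (0, 1)):

```python
import math

# Stand-in for a feature mask after sigmoid; replace with the
# actual feat_mask values returned by GNNExplainer.
feat_mask = [0.2167, 0.2505, 0.2401, 0.2642, 0.2597, 0.2713]

def mask_mean(m):
    """Mean mask value; should drop as beta1 grows."""
    return sum(m) / len(m)

def mask_entropy(m):
    """Average binary entropy; near 0 when values sit close to 0 or 1,
    so it should drop as beta2 grows."""
    return sum(-v * math.log(v) - (1 - v) * math.log(1 - v) for v in m) / len(m)

print(mask_mean(feat_mask))
print(mask_entropy(feat_mask))
```

If neither number moves when you raise beta1 and beta2, the regularization terms are likely being dominated by some other part of the loss.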

On the other hand, please check the ratio between the loss for the edge mask and that for the feature mask. One possible cause is that the edge-mask loss is too large, so the feature-mask loss is effectively ignored. You can set alpha1 and alpha2 to zero as a sanity check.
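To see that the beta penalties alone really do sparsify a mask, here is a pure-Python sketch that runs gradient descent on just the feature-mask regularizer, `loss = beta1 * m + beta2 * H(m)` with `m = sigmoid(theta)` (this mirrors the mean + entropy penalty described above; the optimizer settings are illustrative, not the ones GNNExplainer uses):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

beta1, beta2, lr = 1.0, 0.1, 0.5
theta = [0.0] * 5  # mask logits; masks start at sigmoid(0) = 0.5

for _ in range(500):
    for i, t in enumerate(theta):
        m = sigmoid(t)
        # d(mean term)/dm = 1; d(entropy term)/dm = log((1 - m) / m);
        # chain rule with dm/dtheta = m * (1 - m)
        grad = (beta1 + beta2 * math.log((1 - m) / m)) * m * (1 - m)
        theta[i] = t - lr * grad

final = [sigmoid(t) for t in theta]
print(final)  # all mask values driven close to 0
```

Since the penalties by themselves push every mask value toward 0, mask values stuck around 0.25 in the full run suggest the task loss (or the edge-mask terms) is overwhelming them.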

Dear dyru,

Thanks a lot for your reply. I am using the Cora dataset. Following your valuable suggestions, I set alpha1 = alpha2 = 0 and increased beta1 and beta2 from 1 to 10000. The resulting feature importance vectors are pretty similar, with element values ranging from 0.2 to 0.3 (please see below, where I print out the feature mask tensors).

This is for beta1 = beta2 = 1:

feat_mask tensor([0.2167, 0.2505, 0.2401, …, 0.2642, 0.2597, 0.2713])

This is for beta1 = beta2 = 10000:

feat_mask tensor([0.2476, 0.2583, 0.2576, …, 0.2336, 0.2659, 0.2645])

The code is:

Could you have another look at this? Thanks!

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.