GNNExplainer - Help

Hi everyone!
I'm trying to use GNNExplainer. This is my setting:

  1. I have a graph with around 7,000 nodes, each associated with a feature vector of 20 elements.
  2. Each node is connected to another iff they are adjacent.
  3. Each node is associated with one label (1, 2, 3, or 4).

I want to use GNNExplainer to understand how the predictions were made based on the labels. So, if I understood correctly, I need a multi-instance explanation.

I thought I could extract the subgraph of nodes having a label ‘c’ and use the ‘explain_graph’ function, which returns a feature mask and an edge mask.

What I need is:

  1. How to interpret these two masks?
  2. How to understand the influence of neighbor nodes on the nodes having these labels?
  3. Why are they ‘explaining’? It seems that they give nothing beyond statistical information. So why is the term ‘explain’ used?

Thank you in advance!

If you are tackling node classification, you should consider using explain_node instead.

  1. How to interpret these two masks?

The two masks indicate how important each feature and each edge was for the prediction being explained.
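To make that concrete, here is a minimal pure-Python sketch with made-up toy numbers (not DGL's actual API or tensors): each mask assigns one score per feature or per edge, and one common way to read it is to threshold the scores to pick out the "important" items.

```python
# Hypothetical toy values, one score per feature / per edge.
# Scores closer to 1 mean the feature/edge mattered more for the prediction.
feat_mask = [0.05, 0.91, 0.12, 0.88]   # pretend 4 features instead of 20
edge_mask = [0.2, 0.2, 0.8, 0.2]       # one score per edge

threshold = 0.5  # arbitrary cutoff chosen for illustration
important_feats = [i for i, s in enumerate(feat_mask) if s > threshold]
important_edges = [i for i, s in enumerate(edge_mask) if s > threshold]

print(important_feats)  # indices of features the model relied on most
print(important_edges)  # indices of edges the model relied on most
```

Here features 1 and 3 and edge 2 would be kept as the explanation.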

  2. How to understand the influence of neighbor nodes on the nodes having these labels?

You can get something like that from the edge importance scores, e.g. by looking at the scores of the edges connecting a node to each of its neighbors.
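One way to do that, sketched below in pure Python with a toy edge list (the edge list, target node, and scores are all made up for illustration): sum the importance of every edge that connects a given neighbor to the target node, giving a per-neighbor influence score.

```python
# Hypothetical toy graph as parallel source/destination lists,
# plus one importance score per edge (as an edge mask would give).
src = [0, 1, 2, 1]
dst = [1, 0, 1, 2]
edge_mask = [0.2, 0.2, 0.8, 0.2]

target = 1  # the node whose prediction we are explaining
influence = {}
for s, d, w in zip(src, dst, edge_mask):
    if d == target:  # edges pointing into the target node
        influence[s] = influence.get(s, 0.0) + w

# influence maps neighbor id -> summed edge importance
print(influence)
```

With these toy numbers, neighbor 2 (score 0.8) influences the target far more than neighbor 0 (score 0.2).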

  3. Why are they ‘explaining’? It seems that they give nothing beyond statistical information. So why is the term ‘explain’ used?

GNNExplainer simply tells you which features and edges are most important for the GNN model to make the prediction you are interested in.

Clear, but:

  1. Given a tensor of edges with importance values, e.g. tensor([0.2, 0.2, 0.8, 0.2]): once I understand that the third edge is the most important, how can I use this information to understand the explanation? In my case the relation is adjacency.

  2. In the official GNNExplainer paper “4.3 Multi-instance explanations through graph prototypes
    The output of a single-instance explanation (Sections 4.1 and 4.2) is a small subgraph of the input
    graph and a small subset of associated node features that are most influential for a single prediction.
    To answer questions like “How did a GNN predict that a given set of nodes all have label c?”, we
    need to obtain a global explanation of class c. Our goal here is to provide insight into how the
    identified subgraph for a particular node relates to a graph structure that explains an entire class.
    GNNEXPLAINER can provide multi-instance explanations based on graph alignments and prototypes.
    Our approach has two stages:
    First, for a given class c (or, any set of predictions that we want to explain), we first choose a
    reference node vc, for example, by computing the mean embedding of all nodes assigned to c. We
    then take explanation GS(vc) for reference vc and align it to explanations of other nodes assigned to
    class c. Finding optimal matching of large graphs is challenging in practice. However, the single-instance GNNEXPLAINER generates small graphs (Section 4.2) and thus near-optimal pairwise graph matchings can be efficiently computed.

So now I’m confused about whether I need multi-instance or single-instance. If I should apply multi-instance, how do I do that?

Sorry, I didn’t tag you.

  1. Given a tensor of edges with importance values, e.g. tensor([0.2, 0.2, 0.8, 0.2]): once I understand that the third edge is the most important, how can I use this information to understand the explanation? In my case the relation is adjacency.

This basically says that if you could keep only one edge from the original graph, then keeping the third edge would likely yield a prediction closest to your original prediction.
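A common way to act on this is a "fidelity" check: keep only the top-k most important edges and re-run the model to see whether the prediction is preserved. Here is a minimal pure-Python sketch of the edge-selection step, using the toy scores from above (the re-run itself would use your actual model and graph):

```python
# Hypothetical edge importance scores, one per edge.
edge_mask = [0.2, 0.2, 0.8, 0.2]

k = 1  # how many edges to keep
top_edges = sorted(range(len(edge_mask)),
                   key=lambda i: edge_mask[i],
                   reverse=True)[:k]

# top_edges holds the indices of the k most important edges;
# build a subgraph from them and re-run the model to compare predictions.
print(top_edges)
```

With these scores and k = 1, only the third edge (index 2) would be kept.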

So now I’m confused about whether I need multi-instance or single-instance. If I should apply multi-instance, how do I do that?

It’s possible that you need multi-instance explanations. Currently, DGL’s built-in module only supports the single-instance case. You may need to check the original codebase to see if there is an implementation you can borrow.
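If you do end up implementing the paper's multi-instance procedure yourself, its first step (choosing a reference node v_c for class c) can be sketched in pure Python. The embeddings and node ids below are made-up toy values; in practice you would use the embeddings your trained GNN produces for the nodes predicted as class c:

```python
# Hypothetical embeddings for the nodes assigned to class c
# (node id -> 2-d embedding; real embeddings would be higher-dimensional).
embeddings = {0: [1.0, 0.0], 3: [0.8, 0.2], 7: [0.0, 1.0]}

# Mean embedding of all class-c nodes.
dim = 2
mean = [sum(v[d] for v in embeddings.values()) / len(embeddings)
        for d in range(dim)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Reference node v_c: the node whose embedding is closest to the mean.
v_c = min(embeddings, key=lambda n: sq_dist(embeddings[n], mean))
print(v_c)
```

You would then run the single-instance explainer on v_c and align the other nodes' explanations to it, as the quoted paper passage describes.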

