Hello everyone,

I’m currently working on explainability in Graph Neural Networks (GNNs) and am facing some challenges.
My model architecture consists of (a minimal sketch follows this list):

- (SAGEConv) × 2 + SAGPooling, followed by
- global_max_pool concatenated with global_mean_pool, leading to
- a classification head for a whole-graph binary classification task.
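For reference, a minimal PyTorch Geometric sketch of this kind of architecture (the hidden size, pooling ratio, and the cached `last_perm` / `last_score` attributes are illustrative placeholders, not my exact values):

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv, SAGPooling, global_max_pool, global_mean_pool

class GraphClassifier(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels=64):
        super().__init__()
        self.conv1 = SAGEConv(in_channels, hidden_channels)
        self.conv2 = SAGEConv(hidden_channels, hidden_channels)
        self.pool = SAGPooling(hidden_channels, ratio=0.5)
        # Readout is global_max_pool || global_mean_pool, hence 2 * hidden_channels
        self.lin = torch.nn.Linear(2 * hidden_channels, 1)

    def forward(self, x, edge_index, batch=None):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        # SAGPooling returns (x, edge_index, edge_attr, batch, perm, score)
        x, edge_index, _, batch, perm, score = self.pool(x, edge_index, batch=batch)
        # Cache the kept node indices and their attention scores for later explanation
        self.last_perm, self.last_score = perm, score
        out = torch.cat([global_max_pool(x, batch), global_mean_pool(x, batch)], dim=-1)
        return self.lin(out)  # raw logit for binary graph classification
```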
I am able to extract relevant subgraphs using pooled nodes and their corresponding attention scores. However, I am looking for a way to validate these explanations, potentially using GNNExplainer. Unfortunately, I’ve read that GNNExplainer doesn’t work well with dynamic pooling layers like SAGPooling (which involves nested TopKPooling).
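The pooled-node subgraph extraction I mean looks roughly like this (a sketch that assumes the model caches `last_perm` / `last_score` as in the snippet above):

```python
import torch
from torch_geometric.utils import subgraph

@torch.no_grad()
def pooled_node_explanation(model, data):
    model.eval()
    model(data.x, data.edge_index, data.batch)  # forward pass fills last_perm / last_score
    kept = model.last_perm                      # indices of nodes kept by SAGPooling
    scores = model.last_score                   # their corresponding attention scores
    # Induced subgraph of the original graph restricted to the kept nodes
    sub_edge_index, _ = subgraph(kept, data.edge_index,
                                 relabel_nodes=True, num_nodes=data.num_nodes)
    return kept, scores, sub_edge_index
```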
I also tried PGExplainer (setup roughly as sketched below), but the results were not promising. I observed:

- Fidelity values of (0, 0)
- A training loss that does not decrease (stuck at 1.344, even after changing the learning rate and the number of epochs)
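For concreteness, a minimal sketch of the kind of PGExplainer setup I mean, using PyG's `torch_geometric.explain` API (the epoch count, learning rate, `train_loader`, and `test_loader` are placeholders, not my exact values):

```python
from torch_geometric.explain import Explainer, PGExplainer
from torch_geometric.explain.metric import fidelity

explainer = Explainer(
    model=model,
    algorithm=PGExplainer(epochs=30, lr=0.003),
    explanation_type='phenomenon',  # PGExplainer only supports 'phenomenon'
    edge_mask_type='object',
    model_config=dict(mode='binary_classification',
                      task_level='graph',
                      return_type='raw'),
)

# PGExplainer needs its own training phase before producing explanations.
for epoch in range(30):
    for data in train_loader:
        loss = explainer.algorithm.train(epoch, model, data.x, data.edge_index,
                                         target=data.y, batch=data.batch)

# Explain one graph (e.g., a single-graph batch) and measure fidelity.
batch = next(iter(test_loader))
explanation = explainer(batch.x, batch.edge_index, target=batch.y, batch=batch.batch)
pos_fid, neg_fid = fidelity(explainer, explanation)
```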
Does anyone have suggestions on how to explain this model effectively?
Any help would be greatly appreciated!

Thanks in advance!