Integrated Gradients for CaptumExplainer #8612
Closed
arthurserres
started this conversation in
General
Replies: 1 comment 1 reply
-
So to answer your question 'What is the path of edges that we use in the IG computation for edges as objects?' this is
-
PyG provides a module called `explain` that offers some explainability for GNNs.

I am using it with `CaptumExplainer('IntegratedGradients')` on an edge classification task, with the parameters node_mask_type='attributes' and edge_mask_type='object'.

I have read the Integrated Gradients paper as well as the source code (of both `explain` and Captum), but I still cannot work out how attribution is computed for edges as objects. The idea of IG is to integrate the derivatives of the objective along a continuous path from a baseline to the input. While such a path is easy to imagine for node attributes, I cannot see what it could be for edges as objects: an edge is either in the graph or it is not. What would a continuous path look like for edges as objects? (One could build such a path over edge weights or attributes, but that is not what we consider here, since we treat edges only as objects.) More precisely, the question is: 'What is the path of edges that we use in the IG computation for edges as objects?'
I would be very thankful if someone could clarify this, as I'm kind of lost :)
Many thanks in advance!
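For anyone puzzling over the same thing, here is a hedged, self-contained sketch of how a continuous IG path can exist for discrete edges at all: the binary "edge present / edge absent" choice is relaxed to a soft mask m_e ∈ [0, 1] that scales each edge's contribution, and IG then integrates gradients along the straight line from the all-zeros mask (baseline, empty graph) to the all-ones mask (input, full graph). The toy "readout" function, the edge weights, and every name below are invented for illustration; this is not PyG's or Captum's actual code.

```python
# Sketch (NOT PyG/Captum source): Integrated Gradients over a soft edge mask.
# The path is mask(alpha) = alpha * [1, ..., 1] for alpha in [0, 1], i.e. all
# edges are faded in together from "absent" (0) to "present" (1).

def model_output(mask, weights):
    """Toy differentiable stand-in for a GNN readout: (sum_e m_e * w_e) ** 2."""
    s = sum(m * w for m, w in zip(mask, weights))
    return s * s

def grad_wrt_mask(mask, weights):
    """Analytic gradient of model_output w.r.t. each mask entry m_e."""
    s = sum(m * w for m, w in zip(mask, weights))
    return [2.0 * s * w for w in weights]

def integrated_gradients(weights, steps=100):
    """IG per edge along mask(alpha) = alpha * 1, baseline = all-zeros mask."""
    n = len(weights)
    total = [0.0] * n
    for k in range(steps):
        alpha = (k + 0.5) / steps            # midpoint Riemann sum over alpha
        g = grad_wrt_mask([alpha] * n, weights)
        for e in range(n):
            total[e] += g[e] / steps
    # (input - baseline) = 1 - 0 = 1 per edge, so IG_e is just the avg gradient
    return total

weights = [1.0, 2.0, 3.0]          # three hypothetical edges
ig = integrated_gradients(weights)
print(ig)
# Completeness axiom: attributions sum to f(full graph) - f(empty graph)
print(sum(ig), model_output([1.0] * 3, weights) - model_output([0.0] * 3, weights))
```

The point is that once the mask is continuous, "edge as object" attribution is just ordinary IG on the mask variables, and the completeness check above confirms the attributions account for the full change in the model's output between the empty and the full graph.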