In the example `captum_explainability.py`, I don't quite understand the following code (node-level explanation):
```python
captum_model = to_captum(model, mask_type='node', output_idx=output_idx)
ig = IntegratedGradients(captum_model)
ig_attr_node = ig.attribute(data.x.unsqueeze(0), target=target,
                            additional_forward_args=(data.edge_index,),
                            internal_batch_size=1)
```
If the target node is a test node, how do we get the gradient with respect to the test node's features? I guess it should be the gradient of some loss with respect to the test node's features, but then I'm wondering which loss that is: the training loss, or a loss over all nodes (training, validation, and test)?
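(For context, my current understanding is that Integrated Gradients does not use a loss at all: it differentiates the single model output selected by `target`, i.e. one logit of the explained node. Here is a minimal plain-PyTorch sketch of that on a toy linear stand-in model, not the actual PyG/Captum code; all names here are hypothetical.)

```python
import torch

# Toy stand-in for the GNN: a linear map from 3 features to 2 classes.
torch.manual_seed(0)
W = torch.randn(3, 2)

def model(x):
    # Stand-in forward pass; no labels or masks are involved anywhere.
    return x @ W

x = torch.randn(1, 3)            # features of the node being explained
baseline = torch.zeros_like(x)   # IG baseline (all-zero features)
target = 1                       # class index, as passed via `target`

# Manual Integrated Gradients: average the gradient of the selected
# output logit along the straight path from baseline to x.
steps = 50
total_grad = torch.zeros_like(x)
for k in range(1, steps + 1):
    point = (baseline + k / steps * (x - baseline)).requires_grad_(True)
    out = model(point)[0, target]   # gradient of this scalar only
    out.backward()
    total_grad += point.grad
ig_attr = (x - baseline) * total_grad / steps

# For a linear model with a zero baseline, IG reduces to x * W[:, target].
print(torch.allclose(ig_attr, x * W[:, target], atol=1e-5))
```

If that reading is right, it would not matter whether the node is a training or test node, since no loss (and hence no label) enters the computation.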
Thanks!