Repository for benchmarking different post-hoc XAI explanation methods on image datasets.
To install and use the project, follow the steps explained in the documentation.
For a quick trial, the drive at the following link provides the extracted masks, the saliency maps computed for each method, the checkpoints of the trained models, and the occurrence counts needed for the weight-of-evidence computation.
Prediction | Image | GradCAM | LIME | RISE | SIDU
---|---|---|---|---|---
Golf ball | ![]() | ![]() | ![]() | ![]() | ![]()
Glacier | ![]() | ![]() | ![]() | ![]() | ![]()
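As a rough illustration of how one of the benchmarked methods works, here is a minimal numpy sketch of RISE: the model is probed with random binary masks, and the saliency map is the average of the masks weighted by the model's score on each masked input. The function and variable names are illustrative, not taken from this repository, and the toy scoring function stands in for a trained classifier.

```python
import numpy as np

def rise_saliency(model, image, n_masks=500, cells=7, p=0.5, seed=0):
    """RISE: average random binary masks, weighted by the score the
    model assigns to the correspondingly masked image."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    up_h, up_w = -(-h // cells), -(-w // cells)  # ceil division
    sal = np.zeros((h, w))
    for _ in range(n_masks):
        # coarse random grid: each cell is kept with probability p
        grid = (rng.random((cells, cells)) < p).astype(float)
        # nearest-neighbour upsampling of the grid (the original method
        # uses smooth bilinear upsampling with a random shift; simplified here)
        mask = np.kron(grid, np.ones((up_h, up_w)))[:h, :w]
        sal += model(image * mask[..., None]) * mask
    return sal / (n_masks * p)

# toy "model": score = mean brightness of the top-left patch,
# so saliency should concentrate there
image = np.zeros((28, 28, 3))
image[:8, :8] = 1.0
score_fn = lambda x: x[:8, :8].mean()
sal = rise_saliency(score_fn, image)
```

With the toy model, pixels in the top-left patch end up with higher saliency than the rest, since masks that reveal that patch receive higher scores.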
Example of a classification by the VGG11 model, explained with GradCAM as the saliency method. The concepts extracted with the Florence2 model of GroundedSAM2 are "Head" and "Paws".
Original Image | Saliency map | Concepts extracted | Saliency + Concepts
---|---|---|---
![]() | ![]() | ![]() | ![]()
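The "Saliency + Concepts" column combines the saliency map with the extracted concept masks. A simple way to quantify that combination, sketched below with hypothetical names (this is not the repository's API), is to measure what share of the total saliency mass falls inside each concept mask:

```python
import numpy as np

def saliency_per_concept(saliency, concept_masks):
    """Fraction of total saliency mass inside each concept mask.

    saliency: (H, W) non-negative map; concept_masks: {name: (H, W) bool}.
    """
    total = saliency.sum()
    return {name: float(saliency[mask].sum() / total)
            for name, mask in concept_masks.items()}

# toy example: all saliency mass sits on the "Head" region
sal = np.zeros((10, 10))
sal[:5, :5] = 1.0
masks = {"Head": np.zeros((10, 10), bool), "Paws": np.zeros((10, 10), bool)}
masks["Head"][:5, :5] = True
masks["Paws"][5:, 5:] = True
shares = saliency_per_concept(sal, masks)
```

In the toy example the "Head" mask captures all of the saliency mass and "Paws" none of it, mirroring how the visual overlay attributes the explanation to concepts.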