Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP)
Updated Dec 8, 2022 · Jupyter Notebook
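The attack named in the title exploits the fact that LIME and SHAP probe a model with synthetic, often off-manifold perturbations. A scaffolded model can detect those probes and route them to an innocuous surrogate, hiding its real (biased) behavior from the explainer. A minimal, hypothetical numpy-only sketch of that idea (the model names, OOD heuristic, and threshold are illustrative assumptions, not the repository's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 2 features; feature 0 plays the role of a "sensitive" attribute.
X_train = rng.normal(0.0, 1.0, size=(500, 2))

def biased_model(X):
    """Discriminatory model: decision depends only on sensitive feature 0."""
    return (X[:, 0] > 0).astype(int)

def innocuous_model(X):
    """Innocuous model: decision depends only on feature 1."""
    return (X[:, 1] > 0).astype(int)

def is_on_manifold(X, X_ref, k=10, threshold=1.5):
    """Crude OOD detector: mean distance to the k nearest training points."""
    dists = np.linalg.norm(X[:, None, :] - X_ref[None, :, :], axis=-1)
    knn_mean = np.sort(dists, axis=1)[:, :k].mean(axis=1)
    return knn_mean < threshold

def scaffolded_model(X):
    """Serve the biased model on realistic inputs, but show explainers'
    off-manifold perturbation samples the innocuous model instead."""
    on_manifold = is_on_manifold(X, X_train)
    return np.where(on_manifold, biased_model(X), innocuous_model(X))
```

Because LIME/SHAP perturbation samples mostly fall off the data manifold, a surrogate explainer fit to this model attributes its decisions to feature 1 and the dependence on the sensitive feature goes unreported.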
Reinforced Causal Explainer for Graph Neural Networks, TPAMI2022
Benchmark to Evaluate EXplainable AI
An eXplainable AI system to elucidate short-term speed forecasts in traffic networks obtained by Spatio-Temporal Graph Neural Networks.
[ACM MM 2024] Holistic-CAM: Ultra-lucid and Sanity Preserving Visual Interpretation in Holistic Stage of CNNs
[SIGIR 2025] Class Activation Values: Lucid and Faithful Visual Interpretations for CLIP-based Text-Image Retrievals.
Human-centered XAI via a Concept-Informed Prompt-based Validation framework for saliency maps [CIProVa]
[AAAI'23 Paper] A machine learning defense for auditors of black box automated decision-making systems.
Implementations of the CAM, Grad-CAM, Grad-CAM++ and Guided Backpropagation post-hoc explanation methods
Applying post-hoc attention with CAM to the MNIST dataset
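At its core, the CAM family of methods mentioned above reduces to a weighted sum over the channels of the last convolutional feature maps. A minimal numpy sketch of vanilla CAM, assuming a global-average-pooling network whose classifier-head weights are available (the shapes and function name are illustrative assumptions):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Vanilla CAM: weight each spatial feature map by the classifier
    head's weight for the target class, then sum over channels.

    feature_maps: (C, H, W) activations from the last conv layer
    fc_weights:   (num_classes, C) weights of the final linear layer
    class_idx:    index of the class to explain
    """
    w = fc_weights[class_idx]                    # (C,)
    cam = np.tensordot(w, feature_maps, axes=1)  # (H, W): sum_c w[c] * map[c]
    cam = np.maximum(cam, 0)                     # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                    # normalize to [0, 1]
    return cam
```

Grad-CAM follows the same recipe but replaces the classifier weights with gradients of the class score, spatially pooled over each feature map, which removes the global-average-pooling requirement.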
Astrapia - A Friendly XAI Explainer Evaluation Framework
This repository presents a comprehensive research paper exploring the role of Explainable Artificial Intelligence (XAI) in modern machine learning. It aims to shed light on the interpretability of 'black-box' models such as neural networks and highlights the need for transparent, human-understandable ML systems.
Repository containing code for Black-box Association-Rule Based Explanations (BARBE). Main code is in the barbe directory, see the Readme below for more details.