Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems
Updated Mar 26, 2022
[NeurIPS 2024] CoSy is an automatic evaluation framework for textual explanations of neurons.
Replication package for the KNOSYS paper titled "An Objective Metric for Explainable AI: How and Why to Estimate the Degree of Explainability".
Open and extensible benchmark for XAI methods
Semantic Meaningfulness: Evaluating counterfactual approaches for real world plausibility
The CNN architectures ResNet-50 and InceptionV3 are used to detect whether CT scan images are COVID-affected, and the predictions are validated using the explainable AI frameworks LIME and Grad-CAM.
ConsisXAI is an implementation of a technique to evaluate global machine learning explainability (XAI) methods based on feature subset consistency
Repository for the ReVel framework to Measure Local-Linear Explanations for Black-Box Models
Code for evaluating saliency maps with classification metrics.
This repository is the code basis for the paper titled "Balancing Privacy and Explainability in Federated Learning"
Research on AutoML and Explainability.
This project proposes a new methodology for assessing and improving sequential concept bottleneck models (CBMs). The research undertaken in this project builds upon the model proposed by Grange et al., of which I was one of the co-authors.
Saliency Metrics is a Python package that implements various metrics for comparing saliency maps generated by explanation methods.
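As an illustrative sketch only (not the actual metrics implemented by the Saliency Metrics package), one simple way to compare saliency maps from two explanation methods is rank correlation over pixel importances:

```python
# Hypothetical example: compare two saliency maps by how similarly they
# rank pixels. This is a generic metric, not code from the package above.
import numpy as np
from scipy.stats import spearmanr

def saliency_rank_correlation(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Spearman rank correlation between two flattened saliency maps.

    1.0 means both methods rank pixel importance identically;
    values near 0 mean the rankings are unrelated.
    """
    rho, _ = spearmanr(map_a.ravel(), map_b.ravel())
    return float(rho)

rng = np.random.default_rng(0)
a = rng.random((7, 7))                  # saliency map from method A (synthetic)
b = a + 0.05 * rng.random((7, 7))       # slightly perturbed copy, as from method B
print(saliency_rank_correlation(a, a))  # identical maps -> 1.0
print(saliency_rank_correlation(a, b))  # high positive correlation
```

Packages in this space typically add perturbation-based metrics (deletion/insertion curves) on top of such similarity scores.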
A course project on explainable AI
Video capsule endoscopy (VCE) is an important innovation in gastroenterology that enables minimally invasive GI investigations, but it generates enormous amounts of data. The proposed model uses a CNN architecture and ensemble learning to address this issue, and applies XAI methods such as SHAP, LIME, and Grad-CAM to explain the model.
Scripts and trained models from our paper: M. Ntrougkas, V. Mezaris, I. Patras, "P-TAME: Explain Any Image Classifier with Trained Perturbations", IEEE Open Journal of Signal Processing, 2025. DOI:10.1109/OJSP.2025.3568756.
Classify applications using flow features with Random Forest and K-Nearest Neighbor classifiers. Explore augmentation techniques like oversampling, SMOTE, BorderlineSMOTE, and ADASYN for better handling of underrepresented classes. Measure classifier effectiveness for different sampling techniques using accuracy, precision, recall, and F1-score.
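The pipeline described above can be sketched roughly as follows. This is a minimal, hypothetical example on synthetic data: it uses plain random oversampling as a stand-in for SMOTE, BorderlineSMOTE, and ADASYN (which live in the separate imbalanced-learn package), and generic features rather than real flow features:

```python
# Illustrative sketch, not the repository's code: imbalanced classification
# with Random Forest and KNN, oversampling, and the four reported metrics.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for flow features, with a 9:1 class imbalance.
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Random oversampling: duplicate minority samples until classes are balanced
# (SMOTE-family methods would synthesize new samples instead).
rng = np.random.default_rng(0)
minority = np.where(y_tr == 1)[0]
majority = np.where(y_tr == 0)[0]
resampled = rng.choice(minority, size=len(majority), replace=True)
idx = np.concatenate([majority, resampled])
X_bal, y_bal = X_tr[idx], y_tr[idx]

for name, clf in [("RandomForest", RandomForestClassifier(random_state=0)),
                  ("KNN", KNeighborsClassifier())]:
    y_pred = clf.fit(X_bal, y_bal).predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, y_pred):.3f} "
          f"prec={precision_score(y_te, y_pred, zero_division=0):.3f} "
          f"rec={recall_score(y_te, y_pred):.3f} "
          f"f1={f1_score(y_te, y_pred):.3f}")
```

Comparing the same four metrics across the different sampling techniques makes their effect on minority-class recall directly visible.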
A CBR-based XAI method for generating example- and counterexample-based explanations through Visual Question Answering techniques
A dual-headed deep learning model built using TensorFlow and Keras to classify fruit type (Apple, Banana, Guava, Orange) and quality condition (Good or Bad) from images. The system includes Grad-CAM-based visual explanations and a responsive Streamlit web interface for real-time predictions using uploaded images or webcam input.