This repository contains three explainability methods developed for the AutoFair project:
- FACTS: Fairness-Aware Counterfactuals for Subgroups — a model-agnostic, highly parameterizable framework for auditing subgroup fairness through counterfactual explanations.
- GLANCE: Global Actions in a Nutshell for Counterfactual Explainability — a versatile and adaptive framework for generating global counterfactual explanations.
- FCX: Feasible Counterfactual Explanations – a novel framework that generates realistic and low-cost counterfactuals by enforcing both hard feasibility constraints provided by domain experts and soft causal constraints inferred from data.
Full API documentation is available at: https://humancompatible-explain.readthedocs.io/en/latest/index.html
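All three methods build on the same core idea: a counterfactual explanation is a small, actionable change to an input that flips a model's decision. The toy sketch below, written against plain scikit-learn, illustrates that idea with a brute-force single-feature search. It is purely illustrative and does not use this repository's API; see the documentation linked above for the actual interfaces.

```python
# Generic illustration of the counterfactual idea: given a model and an
# instance with an unfavorable prediction, find a small change to the
# instance that flips the prediction. NOT the FACTS/GLANCE/FCX API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a toy model on synthetic data.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

# Pick an instance the model predicts as the unfavorable class (0).
x = X[model.predict(X) == 0][0]

# Brute-force search: perturb one feature at a time and keep the
# smallest perturbation that flips the prediction to class 1.
best, best_cost = None, np.inf
for j in range(X.shape[1]):
    for delta in np.linspace(-3, 3, 121):
        cf = x.copy()
        cf[j] += delta
        if model.predict(cf.reshape(1, -1))[0] == 1 and abs(delta) < best_cost:
            best, best_cost = cf, abs(delta)

print("original prediction:", model.predict(x.reshape(1, -1))[0])
print("counterfactual found:", best is not None, "| cost:", best_cost)
```

The methods in this repository go well beyond this sketch: FACTS aggregates such explanations to audit fairness across subgroups, GLANCE produces global actions covering many instances at once, and FCX constrains the search so that the resulting changes are feasible and causally plausible.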
The humancompatible/ folder contains the source code for the implemented methods.
We recommend using Anaconda or Python virtual environments to avoid package conflicts.
```bash
git clone https://github.com/humancompatible/explain.git
cd explain
```
Using Conda:

```bash
conda create --name explain python=3.10.4
conda activate explain
```

Or using Python venv:

```bash
python3 -m venv env
source env/bin/activate
```
Install the package in editable mode:

```bash
pip install -e .
```

To run the example notebooks, register the environment as a Jupyter kernel and launch Jupyter:

```bash
python -m ipykernel install --user --name=autofair --display-name "AutoFair Env"
jupyter notebook
```
Explore the functionality through example notebooks in the examples/ directory:
- demo_FACTS.ipynb – Demonstrates FACTS usage and subgroup fairness evaluation with the UCI Adult dataset.
- demo_GLANCE.ipynb – Demonstrates GLANCE with the UCI Adult dataset.
These notebooks offer adjustable parameters and serve as entry points for integrating your own models or datasets.
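If you want to bring your own model, a natural starting point is the same UCI Adult data the demos use. The snippet below is an illustrative sketch only: it fetches Adult from OpenML and trains a plain scikit-learn classifier. The notebooks show the exact preprocessing and calls the frameworks expect, which may differ from this minimal setup.

```python
# Illustrative starting point for plugging your own model into the demo
# notebooks: load the UCI Adult dataset and train a simple classifier.
# The demo notebooks may load and preprocess the data differently.
import pandas as pd
from sklearn.datasets import fetch_openml
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Fetch the Adult dataset from OpenML (the dataset used by the demos).
adult = fetch_openml("adult", version=2, as_frame=True)
X = pd.get_dummies(adult.data)  # one-hot encode categorical features
y = adult.target                # ">50K" / "<=50K" income labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("test accuracy:", round(model.score(X_test, y_test), 3))
```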