Code to accompany the paper "Learning Actionable Counterfactual Explanations in Large State Spaces".
The required packages are listed in requirements.txt.
Fully synthetic data generation: the scripts and .ipynb notebooks for creating instances of agents and their hl-discrete CFEs are in the hl-discrete-CFEGen folder.
- Edit the parameters as needed, e.g., the number of actionable features, $p_a$, $p_f$, etc.
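As an illustration of the kind of parameters such a generation script exposes, here is a minimal sketch. The variable names, and the assumed meanings of $p_a$ and $p_f$, are hypothetical, not the repository's actual definitions; consult the scripts in hl-discrete-CFEGen for the real ones.

```python
import numpy as np

# Hypothetical parameter block (names and semantics are illustrative only).
N_FEATURES = 20     # total number of binary features per agent
N_ACTIONABLE = 8    # number of actionable features (editable, per the README)
P_A = 0.5           # assumed: probability that a feature value is 1
N_AGENTS = 100      # number of synthetic agents to generate

rng = np.random.default_rng(0)

# Sample binary feature vectors for the synthetic agents.
agents = rng.binomial(1, P_A, size=(N_AGENTS, N_FEATURES))

# Mark the first N_ACTIONABLE features as actionable (an illustrative choice;
# which features are actionable is a parameter you would edit).
actionable_mask = np.zeros(N_FEATURES, dtype=bool)
actionable_mask[:N_ACTIONABLE] = True
```

Adjusting `N_ACTIONABLE` or the sampling probabilities changes the instance population, which is the kind of edit the bullet above refers to.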
The real-world datasets are in the other datasets folder.
This repository includes four data-driven CFE generators: hamming distance (baseline), hl-continuous (high-level continuous), hl-discrete (high-level discrete), and hl-id (high-level identifier).
- Each generator comes with scripts and sample notebooks.
- Code for CFE generation under varied action access and varied feature satisfiability is also included.
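To sketch the idea behind the hamming-distance baseline (this is an illustrative toy, not the repository's data-driven implementation): given an agent's feature vector, return the positively classified candidate that differs from it in the fewest features.

```python
import numpy as np

def hamming_cfe(x, candidates, classifier):
    """Return the positively classified candidate closest to x in Hamming distance.

    Illustrative sketch of the baseline notion only; `candidates` and
    `classifier` are hypothetical stand-ins for a dataset and a model.
    """
    positives = [c for c in candidates if classifier(c) == 1]
    if not positives:
        return None  # no counterfactual available in the candidate pool
    return min(positives, key=lambda c: int(np.sum(np.asarray(c) != np.asarray(x))))

# Toy usage: a classifier that accepts vectors with at least two 1s.
clf = lambda v: int(sum(v) >= 2)
pool = [np.array([1, 1, 0]), np.array([1, 1, 1]), np.array([0, 1, 1])]
cfe = hamming_cfe(np.array([1, 0, 0]), pool, clf)  # one flip away: [1, 1, 0]
```

The high-level (hl-*) generators in the repository go beyond this baseline by learning from data rather than searching a fixed pool.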
If you need access to the semi-synthetic datasets or any other files, please email us and we will send them to you.
If you find this work useful in your research, please cite our paper:
@article{naggita2025learningactionablecounterfactualexplanations,
  title   = {{Learning Actionable Counterfactual Explanations in Large State Spaces}},
  author  = {Keziah Naggita and Matthew R. Walter and Avrim Blum},
  year    = {2025},
  journal = {Transactions on Machine Learning Research (TMLR), to appear}
}
This project is licensed under the MIT License. See the LICENSE file for details.
For questions or collaborations, please contact us here.