humancompatible/train

Benchmarking Stochastic Approximation Algorithms for Fairness-Constrained Training of Deep Neural Networks


This repository provides a tool to compare stochastic-constrained stochastic optimization algorithms on a fair learning task.

Table of Contents

  1. Basic installation instructions
  2. Reproducing the Benchmark
  3. Extending the benchmark
  4. License and terms of use
  5. References

Humancompatible/train is still under active development! If you find bugs or have feature requests, please file a GitHub issue.

Basic installation instructions

The code requires Python version 3.11.

  1. Create a virtual environment

bash (Linux)

python3.11 -m venv fairbenchenv
source fairbenchenv/bin/activate

cmd (Windows)

python -m venv fairbenchenv
fairbenchenv\Scripts\activate.bat

  2. Install from source (as an editable package):

git clone https://github.com/humancompatible/train.git
cd train
pip install -r requirements.txt
pip install -e .

Warning: it is recommended to run Stochastic Ghost with the MKL-accelerated version of the scipy package; to install it, run

pip install --force-reinstall -i https://software.repos.intel.com/python/pypi scipy

after installing requirements.txt; otherwise, the algorithm will run slower. However, the MKL build is not supported on macOS and may fail on some Windows devices.
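To check which BLAS/LAPACK backend your scipy build actually links against (a quick sanity check, not required for the benchmark), you can print its build configuration:

python -c "import scipy; scipy.show_config()"

An MKL-accelerated build should list MKL libraries in the BLAS/LAPACK sections of the output.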

Reproducing the Benchmark

Running the algorithms

The benchmark comprises the following algorithms:

  • Stochastic Ghost [2],
  • SSL-ALM [3],
  • Stochastic Switching Subgradient [4].

To reproduce the experiments of the paper, run the following:

cd experiments
python run_folktables.py data=folktables alg=sslalm
python run_folktables.py data=folktables alg=alm
python run_folktables.py data=folktables alg=ghost
python run_folktables.py data=folktables alg=ssg
python run_folktables.py data=folktables alg=sgd     # baseline, no fairness
python run_folktables.py data=folktables alg=fairret # baseline, fairness with regularizer

Each command will start 10 runs of the chosen algorithm, 30 seconds each. The results will be saved to experiments/utils/saved_models and experiments/utils/exp_results.

This repository uses Hydra to manage parameters; see experiments/conf for configuration files.

  • To change experiment-wide parameters, such as the number of runs per algorithm, the run time, or the dataset used (note: only Folktables is supported for now), use experiment.yaml; individual values can also be overridden on the command line, as shown below.
  • To change dataset settings, such as the file location, or to make dataset-specific adjustments, use data/{dataset_name}.yaml.
  • To change algorithm hyperparameters, use alg/{algorithm_name}.yaml.
  • To change constraint hyperparameters, use constraint/{constraint_name}.yaml.
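For example, assuming the run count is exposed as experiment.n_runs (an illustrative name; check experiment.yaml for the actual keys), a command-line override could look like:

python run_folktables.py data=folktables alg=sslalm experiment.n_runs=5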

Producing plots

Plots and tables like the ones in the paper can be produced using two notebooks: experiments/algo_plots.ipynb contains the convergence plots, and experiments/model_plots.ipynb produces all the others.

Extending the benchmark

To add a new algorithm, you can subclass the Algorithm class. Before you can run it, you will need to follow these steps:

  1. In the experiments/conf/alg folder, add a .yaml file with import_name: {ClassName} (so the code knows which algorithm to import) and the desired keyword parameter values under params:

import_name: ClassName

params:
  param_name_1: value
  param_name_2: value

  2. In src/algorithms/__init__.py, add from .{filename} import {ClassName} (so the code is able to import it).

Now you can run the algorithm by executing python run_folktables.py data=folktables alg={yaml_file_name}, or by changing the experiment config files.
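For orientation, here is a minimal sketch of what such a subclass might look like. The constructor arguments and the name of the per-iteration update method are assumptions; check the Algorithm base class in src/algorithms for the actual interface before copying this:

# my_algorithm.py - hypothetical sketch; names below are placeholders
from .algorithm import Algorithm  # assumed module path of the base class


class MyAlgorithm(Algorithm):
    def __init__(self, model, constraints, lr=1e-3, **kwargs):
        # Assumed base-class signature: the model to train plus the fairness constraints.
        super().__init__(model, constraints, **kwargs)
        self.lr = lr

    def step(self, batch):
        # One stochastic update: estimate the loss and the constraints on the
        # mini-batch, then update the model parameters accordingly.
        ...

With a matching experiments/conf/alg/my_algorithm.yaml (import_name: MyAlgorithm) and the import added to src/algorithms/__init__.py, the experiment script can then pick it up via alg=my_algorithm.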

To add a different constraint formulation, you can use the FairnessConstraint class by passing your callable function to the constructor as fn. If you use run_folktables.py, you can add a new constraint function by following these steps:

  1. Add a .yaml file with import_name: {FunctionName}, along with the desired batch size and bound (to be reworked for more generality), to the experiments/conf/constraint folder.
  2. Import it in src/constraints/__init__.py as in step 2 above.

Now, to run the code with your constraint, use the constraint field in the main config.
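As a rough illustration, a constraint function could look like the sketch below. The signature (batch model outputs plus group labels) is an assumption; check how FairnessConstraint invokes fn in src/constraints before relying on it:

import torch

def positive_rate_gap(outputs, groups):
    # Hypothetical constraint value: absolute difference in positive-prediction
    # rates between two subgroups, to be kept below the configured bound.
    preds = (outputs > 0).float()
    return torch.abs(preds[groups == 1].mean() - preds[groups == 0].mean())

The corresponding .yaml file in experiments/conf/constraint would then set import_name: positive_rate_gap together with the batch size and bound, as described in step 1 above.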

License and terms of use

Humancompatible/train is provided under the Apache 2.0 License.

The package relies on the Folktables package [1], provided under the MIT License. It provides code to download data from the American Community Survey (ACS) Public Use Microdata Sample (PUMS) files managed by the US Census Bureau. The data itself is governed by the terms of use provided by the Census Bureau. For more information, see https://www.census.gov/data/developers/about/terms-of-service.html

Future work

  • Add support for fairness constraints with >=2 subgroups (a limitation of the code, not of the algorithms)
  • Add support for datasets other than Folktables
  • Move towards a more PyTorch-like API for optimizers

References

If you use this work, we encourage you to cite our paper:

@misc{kliachkin2025benchmarkingstochasticapproximationalgorithms,
      title={Benchmarking Stochastic Approximation Algorithms for Fairness-Constrained Training of Deep Neural Networks}, 
      author={Andrii Kliachkin and Jana Lepšová and Gilles Bareilles and Jakub Mareček},
      year={2025},
      eprint={2507.04033},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2507.04033}, 
}

[1] Ding, Hardt, Miller & Schmidt (2021) Retiring Adult: New Datasets for Fair Machine Learning, Curran Associates, Inc.

[2] Facchinei & Kungurtsev (2023) Stochastic Approximation for Expectation Objective and Expectation Inequality-Constrained Nonconvex Optimization, arXiv.

[3] Huang, Zhang & Alacaoglu (2025) Stochastic Smoothed Primal-Dual Algorithms for Nonconvex Optimization with Linear Inequality Constraints, arXiv.

[4] Huang & Lin (2023) Oracle Complexity of Single-Loop Switching Subgradient Methods for Non-Smooth Weakly Convex Functional Constrained Optimization, Curran Associates, Inc.
