A Python package to assess and improve fairness of machine learning models.
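This tagline matches Fairlearn's; assuming that is the library in question, here is a minimal sketch of a disaggregated fairness assessment with `fairlearn.metrics` (the toy labels and groups are invented for illustration):

```python
# Minimal sketch, assuming this entry refers to Fairlearn (whose GitHub
# tagline matches). Toy labels, predictions, and groups are invented.
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]  # sensitive feature

# Disaggregate a standard metric by sensitive group.
mf = MetricFrame(metrics=accuracy_score,
                 y_true=y_true, y_pred=y_pred,
                 sensitive_features=group)
print(mf.by_group)      # per-group accuracy
print(mf.difference())  # largest accuracy gap between groups

# Dedicated disparity metric: selection-rate gap between groups.
print(demographic_parity_difference(y_true, y_pred,
                                    sensitive_features=group))
```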
Responsible AI Toolbox is a suite of user interfaces and libraries for model and data exploration and assessment, enabling a better understanding of AI systems. These tools empower developers and stakeholders of AI systems to develop and monitor AI more responsibly and to take better data-driven actions.
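As a rough sketch of how the toolbox's pieces fit together, following its documented `RAIInsights` → `ResponsibleAIDashboard` flow (the toy model and data below are invented for illustration):

```python
# Rough sketch of the Responsible AI Toolbox's documented flow;
# the toy DataFrame and model here are invented for illustration.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

train_df = pd.DataFrame({"age":    [25, 32, 47, 51, 38, 29],
                         "income": [30, 60, 80, 75, 50, 40],
                         "label":  [0, 1, 1, 1, 0, 0]})
test_df = train_df.copy()

model = DecisionTreeClassifier().fit(
    train_df[["age", "income"]], train_df["label"])

rai = RAIInsights(model, train_df, test_df,
                  target_column="label", task_type="classification")
rai.explainer.add()       # model explanations
rai.error_analysis.add()  # error analysis
rai.compute()             # run the selected components

ResponsibleAIDashboard(rai)  # launch the assessment UI
```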
😎 Everything about class-imbalanced/long-tail learning: papers, code, frameworks, and libraries
A library for generating and evaluating synthetic tabular data for privacy, fairness and data augmentation.
Tensorflow's Fairness Evaluation and Visualization Toolkit
Code for reproducing our analysis in the paper "Image Cropping on Twitter: Fairness Metrics, their Limitations, and the Importance of Representation, Design, and Agency"
Fair Resource Allocation in Federated Learning (ICLR '20)
LangFair is a Python library for conducting use-case level LLM bias and fairness assessments
WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE standardizes bias measurement and mitigation in word embedding models. Feel free to open an issue if you have any questions, or a pull request if you want to contribute to the project!
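A minimal sketch of WEFE's `Query`/metric pattern, running WEAT over pretrained GloVe vectors; the word sets and the embedding choice are illustrative assumptions, not part of this listing:

```python
# Minimal sketch of WEFE's Query/metric pattern; the word sets and
# the gensim embedding choice are illustrative assumptions.
import gensim.downloader as api
from wefe.word_embedding_model import WordEmbeddingModel
from wefe.query import Query
from wefe.metrics import WEAT

# Wrap pretrained GloVe vectors (downloaded via gensim).
model = WordEmbeddingModel(api.load("glove-wiki-gigaword-100"), "glove-100")

# A WEAT query: two target sets compared against two attribute sets.
query = Query(
    target_sets=[["she", "woman", "girl"], ["he", "man", "boy"]],
    attribute_sets=[["science", "math", "physics"],
                    ["poetry", "art", "dance"]],
    target_sets_names=["Female terms", "Male terms"],
    attribute_sets_names=["Science", "Arts"],
)

result = WEAT().run_query(query, model)  # dict with the WEAT score
print(result)
```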
The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large-scale machine learning workflows.
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Code accompanying our papers on the "Generative Distributional Control" framework
Open-source AI governance platform with support for ISO 42001, ISO 27001, and the EU AI Act. Join our Discord channel: https://discord.com/invite/d3k3E4uEpR
Train gradient boosting models that are both high-performance *and* fair!
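This tagline matches FairGBM's; assuming that project, a short sketch in the style of its documented constrained-training API (the toy data is invented for illustration):

```python
# Short sketch assuming this entry is FairGBM (feedzai/fairgbm);
# the toy features, labels, and sensitive attribute are invented.
import numpy as np
from fairgbm import FairGBMClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # toy features
S = rng.integers(0, 2, size=200)       # toy sensitive attribute
Y = (X[:, 0] + 0.5 * S + rng.normal(size=200) > 0).astype(int)

clf = FairGBMClassifier(
    constraint_type="FNR",   # equalize false-negative rates across groups
    n_estimators=50,
    random_state=42,
)
clf.fit(X, Y, constraint_group=S)      # constraint applied per group in S
scores = clf.predict_proba(X)[:, -1]   # predicted positive-class scores
```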
Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰
Flexible tool for bias detection, visualization, and mitigation
A Python toolkit for analyzing machine learning models and datasets.
Dataset associated with the paper "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation"
Papers and online resources related to machine learning fairness
Fairness Aware Machine Learning. Bias detection and mitigation for datasets and models.