AI Fairness and Explainability Toolkit

🌟 Overview

The AI Fairness and Explainability Toolkit is an open-source platform designed to evaluate, visualize, and improve AI models with a focus on fairness, explainability, and ethical considerations. Unlike traditional benchmarking tools that focus primarily on performance metrics, this toolkit helps developers understand and mitigate bias, explain model decisions, and ensure ethical AI deployment.

🎯 Mission

To democratize ethical AI development by providing tools that make fairness and explainability accessible to all developers, regardless of their expertise in ethics or advanced ML techniques.

✨ Key Features

  • Comprehensive Fairness Assessment: Evaluate models across different demographic groups using multiple fairness metrics (see the sketch after this list)
  • Intersectional Fairness Analysis: Analyze how multiple protected attributes interact to affect model outcomes
  • Bias Mitigation: Implement pre-processing, in-processing, and post-processing techniques
  • Interactive Visualization: Explore model behavior with interactive dashboards, radar plots, heatmaps, and other visualizations
  • Synthetic Data Generation: Create datasets with controlled bias for testing fairness metrics and mitigation techniques
  • Model Comparison: Compare multiple models across fairness and performance metrics
  • Explainability Tools: Understand model decisions with various XAI techniques
  • Production-Ready: Easy integration with existing ML workflows
  • Extensible Architecture: Add custom metrics and visualizations

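To make the fairness assessment and synthetic data features above concrete, here is a minimal, toolkit-independent sketch that builds a small dataset with controlled bias and computes two common fairness metrics on it. The group names, bias rates, and variable names are illustrative assumptions, not AFET's API.

import numpy as np
import pandas as pd

# Illustrative sketch only -- not AFET's API
rng = np.random.default_rng(42)

# Synthetic data with controlled bias: group "B" receives positive
# predictions less often than group "A", despite equal base rates
n = 10_000
group = rng.choice(["A", "B"], size=n)
y_true = rng.binomial(1, 0.5, size=n)
y_pred = np.where(group == "A",
                  rng.binomial(1, 0.55, size=n),
                  rng.binomial(1, 0.35, size=n))
df = pd.DataFrame({"group": group, "y_true": y_true, "y_pred": y_pred})

# Demographic parity difference: gap in positive prediction rates
rates = df.groupby("group")["y_pred"].mean()
print("Demographic parity difference:", rates.max() - rates.min())

# Equal opportunity difference: gap in true positive rates
tpr = df[df["y_true"] == 1].groupby("group")["y_pred"].mean()
print("Equal opportunity difference:", tpr.max() - tpr.min())
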
🚀 Quick Start

Installation

# Install from PyPI
pip install ai-fairness-toolkit

# Or install from source
pip install git+https://github.com/TaimoorKhan10/AI-Fairness-Explainability-Toolkit.git

Basic Usage

from ai_fairness_toolkit import FairnessAnalyzer, BiasMitigator, ModelExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import fetch_openml
import pandas as pd

# Load the Adult census income dataset (OpenML ID 1590)
data = fetch_openml(data_id=1590, as_frame=True)
X, y = data.data, data.target

# Keep the sensitive attribute, then one-hot encode the categorical
# features so RandomForestClassifier can fit them
sensitive = X['sex']
X_encoded = pd.get_dummies(X)

# Initialize analyzer
analyzer = FairnessAnalyzer(sensitive_features=sensitive)

# Train a model
model = RandomForestClassifier()
model.fit(X_encoded, y)

# Evaluate fairness
results = analyzer.evaluate(model, X_encoded, y)
print(results.fairness_metrics)

# Generate interactive report
analyzer.visualize().show()

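The BiasMitigator imported above covers the pre-, in-, and post-processing techniques listed under Key Features. Purely as an illustration of the post-processing idea, the following toolkit-independent sketch assigns each group its own decision threshold so that positive prediction rates come out roughly equal; the function names and target rate are illustrative assumptions, not AFET's API.

import numpy as np

def per_group_thresholds(scores, groups, target_rate=0.3):
    """Illustrative post-processing (not AFET's API): pick a score
    threshold per group so each group's positive rate is ~target_rate."""
    return {g: np.quantile(scores[groups == g], 1 - target_rate)
            for g in np.unique(groups)}

def apply_thresholds(scores, groups, thresholds):
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])

# Group "B" scores are systematically lower, yet both groups
# end up with roughly 30% positive predictions
rng = np.random.default_rng(0)
groups = rng.choice(["A", "B"], size=1000)
scores = np.where(groups == "A", rng.beta(3, 2, 1000), rng.beta(2, 3, 1000))
thresholds = per_group_thresholds(scores, groups)
y_pred = apply_thresholds(scores, groups, thresholds)
for g in ("A", "B"):
    print(g, y_pred[groups == g].mean())

Group-specific thresholds trade some overall accuracy for parity; pre- and in-processing techniques instead intervene earlier, on the training data or the training objective.
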
πŸ—οΈ Project Structure

ai-fairness-toolkit/
├── ai_fairness_toolkit/      # Main package
│   ├── core/                 # Core functionality
│   │   ├── metrics/          # Fairness and performance metrics
│   │   ├── bias_mitigation/  # Bias mitigation techniques
│   │   ├── explainers/       # Model explainability tools
│   │   └── visualization/    # Visualization components
│   ├── examples/             # Example notebooks
│   └── utils/                # Utility functions
├── tests/                    # Test suite
├── docs/                     # Documentation
├── examples/                 # Example scripts
└── scripts/                  # Utility scripts

🛠️ Technology Stack

  • Core: Python 3.8+
  • ML Frameworks: scikit-learn, TensorFlow, PyTorch
  • Visualization: Plotly, Matplotlib, Seaborn
  • Testing: pytest, pytest-cov
  • Documentation: Sphinx, ReadTheDocs
  • CI/CD: GitHub Actions

📚 Documentation

For detailed documentation, please visit ai-fairness-toolkit.readthedocs.io.

🤝 How to Contribute

We welcome contributions from the community! Here's how you can help:

  1. Report bugs: Submit issues on GitHub
  2. Fix issues: Check out the good first issues
  3. Add features: Implement new metrics or visualizations (a sketch of a custom metric follows this list)
  4. Improve docs: Help enhance our documentation
  5. Share feedback: Let us know how you're using the toolkit
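
AFET's conventions for contributed metrics are covered in the documentation; purely as an illustration of how compact a custom fairness metric can be, here is a hypothetical predictive parity metric written as a plain function of labels, predictions, and a sensitive attribute (the name and signature are assumptions, not AFET's API).

import pandas as pd

def predictive_parity_difference(y_true, y_pred, sensitive):
    """Hypothetical custom metric (not AFET's API): gap in precision
    (PPV) between groups; 0.0 means equal precision across groups."""
    df = pd.DataFrame({"y": y_true, "pred": y_pred, "group": sensitive})
    ppv = df[df["pred"] == 1].groupby("group")["y"].mean()
    return float(ppv.max() - ppv.min())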

Development Setup

# Clone the repository
git clone https://github.com/TaimoorKhan10/AI-Fairness-Explainability-Toolkit.git
cd AI-Fairness-Explainability-Toolkit

# Create and activate virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install development dependencies
pip install -e ".[dev]"

# Run tests
pytest

Code Style

We use Black for code formatting and flake8 for linting. Please ensure your code passes both before submitting a PR.

# Auto-format code
black .

# Run linter
flake8

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

📬 Contact

For questions or feedback, please open an issue on our GitHub repository or contact taimoorkhaniajaznabi2@gmail.com.

🤝 Contributors

This project follows the all-contributors specification. Contributions of any kind welcome!

🗺️ Roadmap

  • Phase 1: Core fairness metrics and basic explainability tools
  • Phase 2: Interactive dashboards and visualization components
  • Phase 3: Advanced mitigation strategies and customizable metrics
  • Phase 4: Integration with CI/CD pipelines and MLOps workflows
  • Phase 5: Domain-specific extensions for healthcare, finance, etc.

AFET is currently in development. We're looking for contributors and early adopters to help shape the future of ethical AI evaluation!
