The AI Fairness and Explainability Toolkit is an open-source platform designed to evaluate, visualize, and improve AI models with a focus on fairness, explainability, and ethical considerations. Unlike traditional benchmarking tools that focus primarily on performance metrics, this toolkit helps developers understand and mitigate bias, explain model decisions, and ensure ethical AI deployment.
Our mission is to democratize ethical AI development by providing tools that make fairness and explainability accessible to all developers, regardless of their expertise in ethics or advanced ML techniques.
- Comprehensive Fairness Assessment: Evaluate models across different demographic groups using multiple fairness metrics (see the sketch after this list)
- Intersectional Fairness Analysis: Analyze how multiple protected attributes interact to affect model outcomes
- Bias Mitigation: Implement pre-processing, in-processing, and post-processing techniques
- Interactive Visualization: Explore model behavior with interactive dashboards, radar plots, heatmaps, and other visualizations
- Synthetic Data Generation: Create datasets with controlled bias for testing fairness metrics and mitigation techniques
- Model Comparison: Compare multiple models across fairness and performance metrics
- Explainability Tools: Understand model decisions with various XAI techniques
- Production-Ready: Easy integration with existing ML workflows
- Extensible Architecture: Add custom metrics and visualizations
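To make the fairness-assessment idea concrete, the core of a group-fairness check can be sketched with plain pandas and NumPy on a small synthetic dataset with controlled bias. This is an illustrative example that does not use the toolkit's own API; the group names and positive rates below are assumptions made up for the demo.
import numpy as np
import pandas as pd
# Synthetic data with a controlled bias: group "B" receives positive
# predictions less often than group "A" (rates chosen for illustration).
rng = np.random.default_rng(0)
groups = rng.choice(["A", "B"], size=1000)
predictions = np.where(groups == "A", rng.random(1000) < 0.6, rng.random(1000) < 0.4)
df = pd.DataFrame({"group": groups, "prediction": predictions})
# Demographic parity difference: the gap in positive-prediction rates
# between the best- and worst-treated groups (0 means parity).
rates = df.groupby("group")["prediction"].mean()
print(rates)
print("Demographic parity difference:", round(rates.max() - rates.min(), 3))
The toolkit's FairnessAnalyzer is intended to handle this kind of group-level comparison for you, as shown in the quick start below.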
# Install from PyPI
pip install ai-fairness-toolkit
# Or install from source
pip install git+https://github.com/TaimoorKhan10/AI-Fairness-Explainability-Toolkit.git
from ai_fairness_toolkit import FairnessAnalyzer, BiasMitigator, ModelExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import fetch_openml
import pandas as pd
# Load sample data (UCI Adult census dataset)
data = fetch_openml(data_id=1590, as_frame=True)
X, y = data.data, data.target
# Initialize analyzer with the protected attribute
analyzer = FairnessAnalyzer(sensitive_features=X['sex'])
# Train a model (one-hot encode categorical features so scikit-learn can fit them)
X_encoded = pd.get_dummies(X)
model = RandomForestClassifier()
model.fit(X_encoded, y)
# Evaluate fairness
results = analyzer.evaluate(model, X_encoded, y)
print(results.fairness_metrics)
# Generate interactive report
analyzer.visualize().show()
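To complement the fairness report, model decisions can also be probed with standard model-agnostic tooling. Below is a minimal, self-contained sketch using scikit-learn's permutation_importance on a synthetic dataset; it does not use the toolkit's ModelExplainer API, and the dataset and feature names are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
# Small synthetic classification task standing in for a real dataset.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
# Permutation importance: how much the test score drops when each feature
# is shuffled, i.e. which inputs the model actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")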
ai-fairness-toolkit/
├── ai_fairness_toolkit/       # Main package
│   ├── core/                  # Core functionality
│   │   ├── metrics/           # Fairness and performance metrics
│   │   ├── bias_mitigation/   # Bias mitigation techniques
│   │   ├── explainers/        # Model explainability tools
│   │   └── visualization/     # Visualization components
│   ├── examples/              # Example notebooks
│   └── utils/                 # Utility functions
├── tests/                     # Test suite
├── docs/                      # Documentation
├── examples/                  # Example scripts
└── scripts/                   # Utility scripts
- Core: Python 3.8+
- ML Frameworks: scikit-learn, TensorFlow, PyTorch
- Visualization: Plotly, Matplotlib, Seaborn
- Testing: pytest, pytest-cov
- Documentation: Sphinx, ReadTheDocs
- CI/CD: GitHub Actions
For detailed documentation, please visit ai-fairness-toolkit.readthedocs.io.
We welcome contributions from the community! Here's how you can help:
- Report bugs: Submit issues on GitHub
- Fix issues: Check out the good first issues
- Add features: Implement new metrics or visualizations
- Improve docs: Help enhance our documentation
- Share feedback: Let us know how you're using the toolkit
# Clone the repository
git clone https://github.com/TaimoorKhan10/AI-Fairness-Explainability-Toolkit.git
cd AI-Fairness-Explainability-Toolkit
# Create and activate virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install development dependencies
pip install -e .[dev]
# Run tests
pytest
We use Black for code formatting and flake8 for linting. Please ensure your code passes both before submitting a PR.
# Auto-format code
black .
# Run linter
flake8
This project is licensed under the MIT License - see the LICENSE file for details.
For questions or feedback, please open an issue on our GitHub repository or contact taimoorkhaniajaznabi2@gmail.com.
This project follows the all-contributors specification. Contributions of any kind welcome!
- Phase 1: Core fairness metrics and basic explainability tools
- Phase 2: Interactive dashboards and visualization components
- Phase 3: Advanced mitigation strategies and customizable metrics
- Phase 4: Integration with CI/CD pipelines and MLOps workflows
- Phase 5: Domain-specific extensions for healthcare, finance, etc.
The AI Fairness and Explainability Toolkit (AFET) is currently in development. We're looking for contributors and early adopters to help shape the future of ethical AI evaluation!