This repository collects works on fairness in generative AI, covering both fairness measurement and bias mitigation methods.
It is dedicated to highlighting the efforts of researchers from all over the world to enhance fairness in generative AI. Recognizing the critical importance of fostering a fairer world, I hope this initiative encourages greater awareness of, and appreciation for, fairness and inclusivity in our daily lives.
As the sole maintainer, I acknowledge that the listings here may be incomplete or reflect certain biases. To enrich and diversify this compilation, your contributions are welcome. You can:
- Raise an issue to suggest improvements or highlight omissions
- Open a pull request with additional resources or corrections
- Contact me at itsmag11@gmail.com
Together, we can build a more comprehensive and representative resource that reflects the collective commitment to fairness in AI.
Sensitive Attribute | Venue | Paper | Code |
---|---|---|---|
Skin Tone | ECCV 2022 | TRUST: Towards Racially Unbiased Skin Tone Estimation via Scene Disambiguation (introduces the FAIR dataset) | |
Skin Color | ICCV 2023 | Beyond Skin Tone: A Multidimensional Measure of Apparent Skin Color | |
- [NeurIPS 2018] Bias and Generalization in Deep Generative Models: An Empirical Study
- [VAST 2019] FairVis: Visual Analytics for Discovering Intersectional Bias in Machine Learning
- [ICCV 2023] DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models
- [NeurIPS 2023, Spotlight] Stable Bias: Evaluating Societal Representations in Diffusion Models
- [NeurIPS 2022] Generative Visual Prompt: Unifying Distributional Control of Pre-Trained Generative Models
- [EMNLP 2022, Oral] How well can Text-to-Image Generative Models understand Ethical Natural Language Interventions?
- [CVPR 2023] Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models
- [arXiv 2023] Fair Diffusion: Instructing Text-to-Image Generation Models on Fairness
- [ICCV 2023, Oral] ITI-GEN: Inclusive Text-to-Image Generation
- [WACV 2024] Unified Concept Editing in Diffusion Models
- [arXiv 2023] What is a Fair Diffusion Model? Designing Generative Text-To-Image Models to Incorporate Various Worldviews
- [AAAI 2024] Fair Sampling in Diffusion Models through Switching Mechanism
- [arXiv 2024] MIST: Mitigating Intersectional Bias with Disentangled Cross-Attention Editing in Text-to-Image Diffusion Models
- [arXiv 2024] AITTI: Learning Adaptive Inclusive Token for Text-to-Image Generation
- [NeurIPS 2024, Spotlight] Association of Objects May Engender Stereotypes: Mitigating Association-Engendered Stereotypes in Text-to-Image Generation
- [NeurIPS 2024] FairQueue: Rethinking Prompt Learning for Fair Text-to-Image Generation
- [arXiv 2024] DebiasDiff: Debiasing Text-to-image Diffusion Models with Self-discovering Latent Attribute Directions
The general approach to measuring the fairness of a text-to-image model involves three steps (a minimal code sketch follows this list):
- Generate a number of images using neutral text prompts;
- Use a CLIP classifier or pre-trained sensitive-attribute classifiers (listed in Sec. 1) to classify the sensitive attributes of the generated images;
- Use statistical measures to calculate fairness.
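Below is a minimal sketch of this pipeline. `generate_image` and `classify_attribute` are hypothetical placeholders for your text-to-image model and sensitive-attribute classifier (e.g. a CLIP-based one), and the L1 discrepancy against a uniform target is just one example of the statistical measures in step 3.

```python
from collections import Counter

def measure_fairness(generate_image, classify_attribute, prompt, n_samples=1000):
    """Estimate the sensitive-attribute distribution of images generated
    from a neutral prompt and score its deviation from a uniform ideal."""
    images = [generate_image(prompt) for _ in range(n_samples)]   # step 1: generate
    labels = [classify_attribute(img) for img in images]          # step 2: classify
    counts = Counter(labels)
    empirical = {a: c / n_samples for a, c in counts.items()}
    ideal = 1.0 / len(counts)  # assumed ideal: uniform over observed classes
    # Step 3: L1 distribution discrepancy; 0 means perfectly balanced generations.
    discrepancy = sum(abs(p - ideal) for p in empirical.values())
    return empirical, discrepancy
```

For example, calling `measure_fairness(model, gender_classifier, "a photo of a doctor")` returns the empirical gender distribution of the generations and its distance from a balanced one.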
Measures | Paper | Note |
---|---|---|
Distribution Discrepancy | ITI-GEN: Inclusive Text-to-Image Generation | KL divergence between the ideal and the empirical sensitive-attribute distributions |
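For reference, a KL-based distribution discrepancy of this kind can be written as below. The notation here is assumed ($p^{*}$ for the ideal distribution, $\hat{p}$ for the empirical one estimated from generated images) and may differ in detail from the exact definition in the ITI-GEN paper.

```latex
% KL divergence between the ideal attribute distribution p^* (e.g. uniform)
% and the empirical distribution \hat{p} over attribute classes \mathcal{A}
D_{\mathrm{KL}}\left(p^{*} \,\middle\|\, \hat{p}\right)
  = \sum_{a \in \mathcal{A}} p^{*}(a) \log \frac{p^{*}(a)}{\hat{p}(a)}
```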