# Awesome Fair Generative AI

This repository contains a list of works on fairness in generative AI, including fairness measurements and bias mitigation methods.

*(Figure: overall structure of this list)*

## 🚀 Get Involved

This repository is dedicated to highlighting the efforts of researchers from all over the world aimed at enhancing fairness in the realm of generative AI. Recognizing the critical importance of fostering a fairer world, I hope this initiative encourages greater awareness and appreciation for fairness and inclusivity in our daily lives.

As the sole maintainer, I acknowledge that the listings here may be incomplete or reflect certain biases. To enrich and diversify this compilation, your contributions are welcome. You can:

- Raise an Issue to suggest improvements or highlight omissions
- Open a pull request with additional resources or corrections
- Contact me at itsmag11@gmail.com

Together, we can build a more comprehensive and representative resource that reflects the collective commitment to fairness in AI.

## 📖 Table of Contents

## 1. Sensitive Attribute Classifiers

| Sensitive Attribute | Venue | Paper | Code |
|---|---|---|---|
| Skin Tone | ECCV 2022 | TRUST: Towards Racially Unbiased Skin Tone Estimation via Scene Disambiguation (Note: FAIR dataset introduced) | Code |
| Skin Color | ICCV 2023 | Beyond Skin Tone: A Multidimensional Measure of Apparent Skin Color | Code |

## 2. Bias Analysis

## 3. Bias Mitigation Methods

### 3.1. Mitigating Bias in Text-to-Image Generation

## 4. Evaluation Metrics

### 4.1. Measuring Fairness in Text-to-Image Generation

The general approach to measuring the fairness of a text-to-image model involves:

1. Generate a number of images using neutral text prompts;
2. Use a CLIP classifier or pre-trained sensitive attribute classifiers (listed in Sec. 1) to classify the sensitive attributes of the generated images;
3. Use statistical measures to calculate fairness.
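A minimal sketch of how steps 2 and 3 connect: once a classifier has labeled each generated image, the per-image predictions are aggregated into an empirical attribute distribution, which is then compared against an ideal (e.g. uniform) distribution. The attribute names below are hypothetical placeholders, not from any specific paper.

```python
from collections import Counter

def attribute_distribution(predicted_labels):
    """Aggregate per-image attribute predictions (step 2) into an
    empirical distribution over attribute classes (input to step 3)."""
    counts = Counter(predicted_labels)
    total = len(predicted_labels)
    return {label: count / total for label, count in counts.items()}

# Hypothetical gender predictions for 8 generated images:
labels = ["female", "male", "male", "male", "female", "male", "male", "male"]
dist = attribute_distribution(labels)
# dist == {"female": 0.25, "male": 0.75}
```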

#### Statistical Measures

| Measure | Paper | Note |
|---|---|---|
| Distribution Discrepancy $\mathcal{D}_{KL}$ | ITI-GEN: Inclusive Text-to-Image Generation | KL divergence between the ideal and observed sensitive attribute distributions |
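The $\mathcal{D}_{KL}$ measure above can be sketched as follows. This is a generic KL-divergence computation between an ideal and an observed attribute distribution, not ITI-GEN's exact implementation; the function name and the `eps` smoothing term are my own choices.

```python
import math

def kl_divergence(ideal, observed, eps=1e-10):
    """D_KL(ideal || observed) over a shared set of attribute classes.

    ideal, observed: dicts mapping attribute class -> probability.
    eps guards against log(0) when a class never appears in `observed`.
    """
    return sum(p * math.log(p / max(observed.get(a, 0.0), eps))
               for a, p in ideal.items() if p > 0)

# Ideal: uniform over two classes; observed: a skewed generation.
ideal = {"female": 0.5, "male": 0.5}
observed = {"female": 0.25, "male": 0.75}
print(kl_divergence(ideal, observed))  # ≈ 0.1438 (0 would mean perfectly fair)
```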
