[Yuheng Li], [Haotian Liu], [Qingyang Wu], [Fangzhou Mu], [Jianwei Yang], [Jianfeng Gao], [Chunyuan Li], [Yong Jae Lee],
[Paper] [Project Page] [Demo]
Large-scale text-to-image diffusion models have made amazing advances. However, the status quo is to use text input alone, which can impede controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN’s zero-shot performance on COCO and LVIS outperforms that of existing supervised layout-to-image baselines by a large margin.
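As a rough illustration of the gated injection described above, the sketch below shows a gated self-attention block in which visual tokens attend jointly to themselves and to grounding tokens (e.g. encoded bounding boxes and phrases), with the result added back through a zero-initialized tanh gate so the frozen pre-trained model is unchanged at the start of training. Layer names, shapes, and hyperparameters here are illustrative assumptions, not the exact implementation.

```python
import torch
import torch.nn as nn

class GatedSelfAttention(nn.Module):
    """Minimal sketch of a GLIGEN-style gated self-attention layer (assumed shapes)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Learnable gate, zero-initialized so the new layer is a no-op at first.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, visual: torch.Tensor, grounding: torch.Tensor) -> torch.Tensor:
        # visual:    (B, N_v, dim) tokens from the frozen diffusion backbone
        # grounding: (B, N_g, dim) grounding tokens (boxes + text phrases)
        x = self.norm(torch.cat([visual, grounding], dim=1))
        out, _ = self.attn(x, x, x)
        # Keep only the visual-token positions and add them back through the tanh gate.
        return visual + torch.tanh(self.gate) * out[:, : visual.shape[1]]
```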
conda create --name gligen python=3.10
conda activate gligen
pip install -r requirements.txt
The .csv file containing the prompts should be placed inside a folder named prompts
located in the root of the project.
The .csv file is expected to have the following structure (no limit on the number of objects):
id,prompt,obj1,bbox1,obj2,bbox2,obj3,bbox3,obj4,bbox4
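For illustration, the sketch below parses such a file into (id, prompt, [(phrase, bbox), ...]) records following the header above. The path prompts/prompts.csv and the bbox cell format (four space-separated coordinates) are assumptions; adjust them to match your actual file.

```python
import csv
from pathlib import Path

def load_prompts(csv_path: str = "prompts/prompts.csv"):
    """Parse the prompt file into (id, prompt, [(phrase, bbox), ...]) records.

    Assumes each bboxN cell holds four numbers separated by spaces,
    e.g. "0.1 0.2 0.5 0.6"; change the split if your format differs.
    """
    records = []
    with open(Path(csv_path), newline="") as f:
        for row in csv.DictReader(f):
            objects = []
            i = 1
            # Columns come in obj1/bbox1, obj2/bbox2, ... pairs; stop at the first empty slot.
            while row.get(f"obj{i}"):
                bbox = [float(v) for v in row[f"bbox{i}"].split()]
                objects.append((row[f"obj{i}"], bbox))
                i += 1
            records.append((row["id"], row["prompt"], objects))
    return records
```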
@article{li2023gligen,
title={GLIGEN: Open-Set Grounded Text-to-Image Generation},
author={Li, Yuheng and Liu, Haotian and Wu, Qingyang and Mu, Fangzhou and Yang, Jianwei and Gao, Jianfeng and Li, Chunyuan and Lee, Yong Jae},
journal={CVPR},
year={2023}
}