Minghao Chen, Iro Laina, Andrea Vedaldi
[Paper] [Project Page] [Demo]
Our method controls the layout of images generated by large pretrained text-to-image diffusion models, without any training, via layout guidance performed on the cross-attention maps.
Recent diffusion-based generators can produce high-quality images based only on textual prompts. However, they do not correctly interpret instructions that specify the spatial layout of the composition. We propose a simple approach that can achieve robust layout control without requiring training or fine-tuning the image generator. Our technique, which we call layout guidance, manipulates the cross-attention layers that the model uses to interface textual and visual information and steers the reconstruction in the desired direction given, e.g., a user-specified layout. In order to determine how to best guide attention, we study the role of different attention maps when generating images and experiment with two alternative strategies, forward and backward guidance. We evaluate our method quantitatively and qualitatively with several experiments, validating its effectiveness. We further demonstrate its versatility by extending layout guidance to the task of editing the layout and context of a given real image.
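The backward-guidance idea can be sketched concisely: at selected denoising steps, an energy measures how much of a target token's cross-attention mass falls inside the user-specified box, and the gradient of that energy is backpropagated to the diffusion latents. Below is a minimal, illustrative sketch (not the released implementation): it assumes the (H, W) cross-attention map for the target token has already been captured from the UNet with gradients flowing back to the latents (the attention-hook code is omitted), and the names `layout_loss` and `backward_guidance_step` are hypothetical.

```python
import torch

def layout_loss(attn_map, bbox):
    """Encourage the attention mass of one text token to fall inside a box.

    attn_map: (H, W) non-negative cross-attention map for the token,
              computed from the current latents with gradients enabled.
    bbox:     (x0, y0, x1, y1) in [0, 1] normalized image coordinates.
    """
    H, W = attn_map.shape
    x0, y0, x1, y1 = bbox
    mask = torch.zeros_like(attn_map)
    mask[int(y0 * H):int(y1 * H), int(x0 * W):int(x1 * W)] = 1.0
    # Fraction of the attention mass that falls inside the box; push it toward 1.
    inside = (attn_map * mask).sum() / (attn_map.sum() + 1e-8)
    return (1.0 - inside) ** 2


def backward_guidance_step(latents, attn_map, bbox, scale=30.0):
    """One backward-guidance update on the diffusion latents (sketch only)."""
    loss = layout_loss(attn_map, bbox)
    grad = torch.autograd.grad(loss, latents)[0]
    return latents - scale * grad
```

In practice, guidance of this kind is applied for a subset of the denoising steps and the attention maps are aggregated over heads and layers; please refer to the paper for the exact energy and schedule.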
If this repo is helpful to you, please consider citing it. Thank you! :)
@article{chen2023trainingfree,
  title={Training-Free Layout Control with Cross-Attention Guidance},
  author={Minghao Chen and Iro Laina and Andrea Vedaldi},
  journal={arXiv preprint arXiv:2304.03373},
  year={2023}
}
This research is supported by ERC-CoG UNION 101001212. The code is inspired by Diffusers and Stable Diffusion.