Welcome to the Controllable Image Generation project! This repository explores how to guide image synthesis with T2I-Adapter models using different types of conditioning inputs: edge masks, line poses, and depth maps. It is aimed at researchers, creatives, and AI enthusiasts who want precise control over generative models. ✨
| Notebook | Input Modality | Description |
|---|---|---|
| `T2IAdapter_EdgeMask.ipynb` | Edge Mask | Generate images guided by structural outlines (edges), preserving object shapes and boundaries. |
| `T2IAdapter_LinePose.ipynb` | Line Pose | Condition on pose information to generate people or objects in specific configurations. |
| `T2IAdapter_depth.ipynb` | Depth Map | Leverage spatial depth information to generate 3D-aware scenes. |
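Under the hood, each notebook pairs a modality-specific T2I-Adapter with a Stable Diffusion pipeline. Below is a minimal sketch of that pattern using Hugging Face `diffusers`; the checkpoint names, base model, and file paths are assumptions for illustration, and the notebooks may load different weights:

```python
import torch
from diffusers import StableDiffusionAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

# Load a modality-specific adapter. The depth checkpoint is shown here;
# this name is an assumption -- the notebooks may use different weights.
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2iadapter_depth_sd15v2", torch_dtype=torch.float16
)

# Attach the adapter to a standard Stable Diffusion pipeline.
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")

# The conditioning image (a depth map here) steers the spatial layout,
# while the text prompt controls content and style.
depth_map = load_image("depth_map.png")  # placeholder path

image = pipe(
    prompt="a cozy living room, warm evening light",
    image=depth_map,
    num_inference_steps=30,
).images[0]
image.save("output.png")
```

Switching modalities is mostly a matter of swapping the adapter checkpoint (edge, pose, or depth); the surrounding pipeline code stays the same.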
Follow these steps to run the notebooks:
```bash
git clone https://github.com/meghakalia/Controllable_Image_Generation.git
cd Controllable_Image_Generation
jupyter notebook
```
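If your environment is missing the diffusion tooling, you may first need something like `pip install torch diffusers transformers accelerate jupyter` (an assumed package list; check the imports in the first cells of the notebook you pick).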
Choose a notebook based on your use case and run the cells to generate amazing images!
- [T2I-Adapter Paper (arXiv)](https://arxiv.org/abs/2302.08453)
- [Original GitHub Repository (TencentARC)](https://github.com/TencentARC/T2I-Adapter)
Feel free to open issues or submit pull requests to improve the project. Feedback is always welcome!