🧠🎨 Controllable Image Generation with Stable Diffusion and T2I-Adapter

Welcome to the Controllable Image Generation project! This repository explores how to guide image synthesis with T2I-Adapter models using different types of conditioning inputs such as edge maps, line poses, and depth maps. Perfect for researchers, creatives, and AI enthusiasts who want precise control over generative models. ✨


📝 Notebooks Overview

| Notebook | Input Modality | Description |
| --- | --- | --- |
| T2IAdapter_EdgeMask.ipynb | 🖍️ Edge Mask | Generate images guided by structural outlines (edges). Helps retain object shape and boundary. |
| T2IAdapter_LinePose.ipynb | 🕴️ Line Pose | Condition on pose information to generate people or objects in specific configurations. |
| T2IAdapter_depth.ipynb | 🌄 Depth Map | Leverage spatial depth information to generate 3D-aware scenes. |
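
Each notebook conditions generation on one of these signals. As a rough sketch of how edge-conditioned generation is typically wired up with the Hugging Face diffusers library (the model IDs, input path, prompt, and parameters below are illustrative assumptions, not taken from the notebooks):

import torch
from controlnet_aux.canny import CannyDetector
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

# Load the canny-edge T2I-Adapter and attach it to the SDXL base pipeline (illustrative checkpoints)
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16"
)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Turn a reference photo into an edge map that will constrain the layout of the output
reference = load_image("reference.jpg")  # hypothetical input image
edge_map = CannyDetector()(reference, detect_resolution=1024, image_resolution=1024)

# adapter_conditioning_scale controls how strongly the edge map steers the result
result = pipe(
    prompt="a watercolor painting of a cottage in a forest",
    image=edge_map,
    num_inference_steps=30,
    adapter_conditioning_scale=0.8,
).images[0]
result.save("output.png")

The pose and depth variants follow the same shape: swap in the matching adapter checkpoint and conditioning extractor, and the rest of the pipeline stays unchanged.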

🚀 Getting Started

Follow these steps to run the notebooks:

1. 📥 Clone the repository

git clone https://github.com/meghakalia/Controllable_Image_Generation.git
cd Controllable_Image_Generation

2. 📓 Launch Jupyter Notebook

jupyter notebook

3. ▢️ Run Any Notebook

Choose a notebook based on your use case and run the cells to generate amazing images!
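
If the notebooks fail on missing imports, installing the usual Hugging Face diffusion stack is generally enough; the package list below is an assumption rather than the project's pinned requirements:

pip install torch diffusers transformers accelerate controlnet_aux

A CUDA-capable GPU is strongly recommended for SDXL-sized models.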

📚 References

T2I-Adapter Paper (arXiv) 📄
Original GitHub Repository (TencentARC) 💻

🀝 Contributing

Feel free to open issues or submit pull requests to improve the project. Feedback is always welcome!
