
# 🎨 Image Generation using Stable Diffusion

This project utilizes Stable Diffusion with ComfyUI for AI-powered image generation. It includes model configurations, automation techniques, and optimizations for high-quality outputs. Built with Python and machine learning, it streamlines AI-driven image synthesis and customization. πŸš€

Explore the power of Stable Diffusion for creating AI-generated images using text prompts, image manipulation, inpainting, ControlNet, and more.

## πŸ“¦ Part 1: Stable Diffusion Basics

### βœ… Setup

Install the required libraries (with xformers for memory optimization), as sketched below.
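
A minimal setup sketch, assuming the Hugging Face `diffusers` stack (the package set and model ID are assumptions, not pinned by this repo):

```python
# pip install diffusers transformers accelerate xformers   # assumed package set
import torch
from diffusers import StableDiffusionPipeline

# Load Stable Diffusion v1.5 in half precision to save GPU memory.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Enable xformers memory-efficient attention (the optimization mentioned above).
pipe.enable_xformers_memory_efficient_attention()
```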

### 🧠 Image Generation Pipeline

1. Create a prompt
2. Generate the image
3. Save the result
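
The three steps map to a few lines of `diffusers` code; a sketch reusing the `pipe` loaded in Setup:

```python
# 1. Create a prompt.
prompt = "a photograph of an astronaut riding a horse"

# 2. Generate the image.
image = pipe(prompt).images[0]

# 3. Save the result.
image.save("astronaut.png")
```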

πŸ–ΌοΈ Generate Multiple Images βš™οΈ Key Parameters seed: Ensures reproducibility

inference_steps: Controls generation detail

guidance_scale (CFG): Controls adherence to prompt

image_size: Width x Height

negative_prompt: Avoid unwanted elements
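
A sketch showing these parameters together (names follow the `diffusers` call signature), including several images per prompt for the multiple-image case; it reuses the `pipe` from Setup:

```python
import torch

# A fixed seed makes results reproducible.
generator = torch.Generator("cuda").manual_seed(42)

images = pipe(
    prompt="a castle on a hill, golden hour, highly detailed",
    negative_prompt="blurry, low quality, deformed",  # avoid unwanted elements
    num_inference_steps=30,   # more steps -> more detail, but slower
    guidance_scale=7.5,       # CFG: how strongly to follow the prompt
    width=512, height=512,    # image size
    num_images_per_prompt=4,  # generate multiple images at once
    generator=generator,
).images

for i, img in enumerate(images):
    img.save(f"castle_{i}.png")
```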

### 🧩 Model Variants

- SD v1.5, SD v2.x
- Fine-tuned models with specific aesthetics

### πŸŒ€ Changing Schedulers

- PNDM (default)
- DDIM
- K-LMS
- Euler A (Euler Ancestral)
- DPM
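
In `diffusers`, swapping schedulers is a one-liner, since each scheduler can be rebuilt from the current one's config; a sketch using Euler A on the `pipe` from Setup:

```python
from diffusers import EulerAncestralDiscreteScheduler

# Replace the default scheduler (PNDM for SD v1.5) with Euler Ancestral.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Other options: DDIMScheduler, LMSDiscreteScheduler (K-LMS),
# DPMSolverMultistepScheduler (DPM), etc.
image = pipe("a misty forest at dawn").images[0]
```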

## ✏️ Part 2: Prompt Engineering

### 🧩 Prompt Structure

- Subject / object
- Action / location
- Type & style
- Colors, artists
- Resolution, site
- Lighting, negative prompts
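
Putting the structure together: a purely illustrative prompt assembled piece by piece (the wording is hypothetical, not from the source), again reusing `pipe`:

```python
prompt = (
    "portrait of an old fisherman"         # subject / object
    ", mending a net on a wooden pier"     # action / location
    ", oil painting, impressionist style"  # type & style
    ", warm earth tones"                   # colors
    ", by Claude Monet"                    # artist
    ", 8k, trending on artstation"         # resolution, site
    ", soft golden-hour lighting"          # lighting
)
negative_prompt = "blurry, extra limbs, watermark, text"

image = pipe(prompt, negative_prompt=negative_prompt).images[0]
```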

### 🎨 Use Cases

- Art & paintings
- Photorealistic images
- Landscapes & architecture
- 3D concepts & drawings

### πŸ”§ Advanced Models for Enhanced Output

- Anything v3.1
- DreamShaper
- Realistic Vision
- Analog Diffusion
- Protogen
- Mitsua Diffusion One
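
Community checkpoints like these load the same way as the base model; a sketch (the Hub ID is an assumption — check the model card for the exact path):

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical Hub ID for DreamShaper; substitute the checkpoint you want.
pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/DreamShaper", torch_dtype=torch.float16
).to("cuda")

image = pipe("cinematic portrait of a knight, dramatic lighting").images[0]
```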

## πŸ§ͺ Part 3: Fine-Tuning Models

### πŸ› οΈ Install Dependencies

```bash
pip install accelerate transformers ftfy bitsandbytes==0.35.0 gradio natsort safetensors xformers
```

### πŸ”„ Workflow

1. Load the model
2. Prepare the dataset (images + a unique token + a class name)
3. Train on the new concept
4. Convert the weights to `.ckpt`

πŸ” Inference Test with custom prompts

Example prompts: in the forest, in Cairo desert, in a western scene, in Star Wars style, in Mount Fuji, etc.

Save results
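
A sketch of inference against the fine-tuned weights, assuming a DreamBooth-style setup where `sks` is the unique token and `person` the class name (the token, class, and output path are all illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned weights saved by training (path is hypothetical).
pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth-output", torch_dtype=torch.float16
).to("cuda")

# Test the learned concept in the scenes suggested above.
scenes = ["in the forest", "in the Cairo desert", "in a western scene",
          "in Star Wars style", "at Mount Fuji"]
for i, scene in enumerate(scenes):
    image = pipe(f"a photo of sks person {scene}").images[0]
    image.save(f"finetuned_{i}.png")
```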

πŸ–ΌοΈ Part 4: Image-to-Image πŸ”§ Install Libraries Same as Fine-Tuning section

πŸ” Steps Use an input image

Adjust strength for transformation intensity

Test with different styles, schedulers, and input images
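
A minimal img2img sketch, assuming the standard `diffusers` pipeline; `strength` is the transformation-intensity knob from step 2:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("input.jpg").convert("RGB").resize((512, 512))

# strength in [0, 1]: low values stay close to the input image,
# high values let the prompt dominate.
image = pipe(
    prompt="the same scene as a watercolor painting",
    image=init_image,
    strength=0.6,
).images[0]
image.save("img2img.png")
```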

### ✏️ Image Editing

Use InstructPix2Pix for edit-by-instruction transformations.
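
A sketch using the `diffusers` InstructPix2Pix pipeline with the publicly released `timbrooks/instruct-pix2pix` checkpoint:

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from PIL import Image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

image = Image.open("portrait.jpg").convert("RGB")

# The prompt is an edit instruction, not a scene description.
edited = pipe(
    "make it look like a snowy winter day",
    image=image,
    image_guidance_scale=1.5,  # how closely to preserve the input image
).images[0]
edited.save("edited.png")
```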

## 🎯 Part 5: Inpainting

### πŸ“¦ Setup

Same libraries as before.

### πŸ§™β€β™‚οΈ Magic Eraser via Prompt

- Mask and replace objects
- Create new elements in existing images
- Compare outputs across variations
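
A minimal inpainting sketch, assuming an inpainting-specific checkpoint (the Hub ID is an assumption) and a user-supplied mask:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# An SD checkpoint trained specifically for inpainting.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("photo.jpg").convert("RGB").resize((512, 512))
# White pixels in the mask are regenerated; black pixels are kept.
mask = Image.open("mask.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="a wooden bench",  # what to paint into the masked region
    image=image,
    mask_image=mask,
).images[0]
result.save("inpainted.png")
```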

## 🧠 Part 6: ControlNet

### βš™οΈ Setup

```bash
pip install accelerate transformers xformers
```

### πŸ“ Edge-to-Image

1. Detect edges with a Canny edge detector
2. Generate new images using the edge map and ControlNet (see the sketch below)
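
A sketch of the edge-to-image flow, assuming `opencv-python` and `diffusers` are available in addition to the packages listed above:

```python
import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

# 1. Detect edges with Canny.
img = cv2.imread("input.jpg")
edges = cv2.Canny(img, 100, 200)
edge_map = Image.fromarray(np.stack([edges] * 3, axis=-1))

# 2. Generate a new image conditioned on the edge map.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a futuristic city at night", image=edge_map).images[0]
image.save("controlnet_canny.png")
```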

### 🀸 Pose-to-Image

- Use human poses to guide image generation
- Combine with emojis for fun visual effects
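
A pose-to-image sketch, assuming the `controlnet_aux` package for pose extraction (an extra dependency not listed above):

```python
import torch
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

# Extract a pose skeleton from a reference photo.
openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
pose = openpose(Image.open("person.jpg").convert("RGB"))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The generated figure follows the extracted pose.
image = pipe("an astronaut dancing on the moon", image=pose).images[0]
image.save("controlnet_pose.png")
```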

πŸ“ Final Tips Use prompt engineering creatively

Experiment with fine-tuned models for better realism or stylization

Adjust schedulers and parameters for unique results

Combine features: img2img + ControlNet + inpainting = πŸ”₯
