# Awesome Controllable Video Generation with Diffusion Models
- Pose Control
- Audio Control
- Expression Control
- Universal Control
- Camera Control
- Trajectory Control
- Subject Control
- Area Control
- Video Control
- Brain Control
- ID Control
## Pose Control

UniAnimate-DiT: Human Image Animation with Large-Scale Video Diffusion Transformer
OmniHuman-1: Rethinking the Scaling-Up of One-Stage Conditioned Human Animation Models
📄 Paper | 🌐 Project Page
EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation
📄 Paper | 🌐 Project Page | 💻 Code
MikuDance: Animating Character Art with Mixed Motion Dynamics
📄 Paper | 🌐 Project Page | 💻 Code
Diffusion as Shader: 3D-aware Video Diffusion for Versatile Video Generation Control
📄 Paper | 🌐 Project Page | 💻 Code
TANGO: Co-Speech Gesture Video Reenactment with Hierarchical Audio-Motion Embedding and Diffusion Interpolation
📄 Paper | 🌐 Project Page | 💻 Code
DynamicPose: A Robust Image-to-Video Framework for Portrait Animation Driven by Pose Sequences
Alignment is All You Need: A Training-free Augmentation Strategy for Pose-guided Video Generation
Follow Your Pose: Pose-Guided Text-to-Video Generation using Pose-Free Videos
📄 Paper | 🌐 Project Page | 💻 Code
Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation
📄 Paper | 🌐 Project Page
DreaMoving: A Human Video Generation Framework based on Diffusion Models
📄 Paper | 🌐 Project Page | 💻 Code
MagicPose: Realistic Human Poses and Facial Expressions Retargeting with Identity-aware Diffusion
📄 Paper | 🌐 Project Page | 💻 Code
MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
📄 Paper | 🌐 Project Page | 💻 Code
Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance
📄 Paper | 🌐 Project Page | 💻 Code
Magic-Me: Identity-Specific Video Customized Diffusion
📄 Paper | 🌐 Project Page | 💻 Code
DisCo: Disentangled Control for Referring Human Dance Generation in Real World
📄 Paper | 🌐 Project Page | 💻 Code
Human4DiT: Free-view Human Video Generation with 4D Diffusion Transformer
📄 Paper | 🌐 Project Page
MimicMotion: High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance
📄 Paper | 🌐 Project Page | 💻 Code
Follow-Your-Pose v2: Multiple-Condition Guided Character Image Animation for Stable Pose Control
📄 Paper | 🌐 Project Page
HumanVid: Demystifying Training Data for Camera-controllable Human Image Animation
📄 Paper | 🌐 Project Page | 💻 Code
MusePose: A Pose-Driven Image-to-Video Framework for Virtual Human Generation
MDM: Human Motion Diffusion Model
📄 Paper | 🌐 Project Page | 💻 Code
## Audio Control

FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis
📄 Paper | 🌐 Project Page | 💻 Code
Every Image Listens, Every Image Dances: Music-Driven Image Animation
MEMO: Memory-Guided Diffusion for Expressive Talking Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
Hallo2: Long-Duration and High-Resolution Audio-driven Portrait Image Animation
📄 Paper | 🌐 Project Page | 💻 Code
Co-Speech Gesture Video Generation via Motion-Decoupled Diffusion Model
📄 Paper | 🌐 Project Page | 💻 Code
Diverse and Aligned Audio-to-Video Generation via Text-to-Video Model Adaptation
📄 Paper | 🌐 Project Page | 💻 Code
MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation
Speech Driven Video Editing via an Audio-Conditioned Diffusion Model
📄 Paper | 🌐 Project Page | 💻 Code
Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation
📄 Paper | 🌐 Project Page | 💻 Code
Listen, denoise, action! Audio-driven motion synthesis with diffusion models
📄 Paper | 🌐 Project Page | 💻 Code
CoDi: Any-to-Any Generation via Composable Diffusion
📄 Paper | 🌐 Project Page | 💻 Code
Generative Disco: Text-to-Video Generation for Music Visualization
AADiff: Audio-Aligned Video Synthesis with Text-to-Image Diffusion
EMO: Emote Portrait Alive - Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions
📄 Paper | 🌐 Project Page | 💻 Code
Context-aware Talking Face Video Generation
## Expression Control

FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers
📄 Paper | 🌐 Project Page | 💻 Code
X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention
📄 Paper | 🌐 Project Page | 💻 Code
HelloMeme: Integrating Spatial Knitting Attentions to Embed High-Level and Fidelity-Rich Conditions in Diffusion Models
📄 Paper | 🌐 Project Page | 💻 Code
SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers
📄 Paper | 🌐 Project Page | 💻 Code
DreamActor-M1: Holistic, Expressive and Robust Human Image Animation with Hybrid Guidance
📄 Paper | 🌐 Project Page
Follow-Your-Emoji: Fine-Controllable and Expressive Freestyle Portrait Animation
📄 Paper | 🌐 Project Page | 💻 Code
EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditions
📄 Paper | 🌐 Project Page | 💻 Code
## Universal Control

VACE: All-in-One Video Creation and Editing
📄 Paper | 🌐 Project Page | 💻 Code
ControlNeXt: Powerful and Efficient Control for Image and Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models
📄 Paper | 🌐 Project Page | 💻 Code
ControlVideo: Training-free Controllable Text-to-Video Generation
TrackGo: A Flexible and Efficient Method for Controllable Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
VideoComposer: Compositional Video Synthesis with Motion Controllability
📄 Paper | 🌐 Project Page | 💻 Code
Make-Your-Video: Customized Video Generation Using Textual and Structural Guidance
📄 Paper | 🌐 Project Page | 💻 Code
UniCtrl: Improving the Spatiotemporal Consistency of Text-to-Video Diffusion Models via Training-Free Unified Attention Control
📄 Paper | 🌐 Project Page | 💻 Code
SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models
📄 Paper | 🌐 Project Page | 💻 Code
VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by Using Diffusion Model with ControlNet
📄 Paper | 🌐 Project Page | 💻 Code
Cinemo: Consistent and Controllable Image Animation with Motion Diffusion Models
📄 Paper | 🌐 Project Page | 💻 Code
## Camera Control

MotionMaster: Training-free Camera Motion Transfer For Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
CinePreGen: Camera Controllable Video Previsualization via Engine-powered Diffusion
CamViG: Camera Aware Image-to-Video Generation with Multimodal Transformers
Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion
📄 Paper | 🌐 Project Page | 💻 Code
MotionCtrl: A Unified and Flexible Motion Controller for Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
CameraCtrl: Enabling Camera Control for Text-to-Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control
📄 Paper | 🌐 Project Page
Controlling Space and Time with Diffusion Models
📄 Paper | 🌐 Project Page
CamCo: Camera-Controllable 3D-Consistent Image-to-Video Generation
📄 Paper | 🌐 Project Page
Collaborative Video Diffusion: Consistent Multi-video Generation with Camera Control
📄 Paper | 🌐 Project Page
HumanVid: Demystifying Training Data for Camera-controllable Human Image Animation
📄 Paper | 🌐 Project Page | 💻 Code
Training-free Camera Control for Video Generation
📄 Paper | 🌐 Project Page
Director3D: Real-world Camera Trajectory and 3D Scene Generation from Text
📄 Paper | 🌐 Project Page | 💻 Code
MotionBooth: Motion-Aware Customized Text-to-Video Generation
DiffDreamer: Towards Consistent Unsupervised Single-view Scene Extrapolation with Conditional Diffusion Models
📄 Paper | 🌐 Project Page
## Trajectory Control

MotionCanvas: Cinematic Shot Design with Controllable Image-to-Video Generation
📄 Paper | 🌐 Project Page
FreeTraj: Tuning-Free Trajectory Control in Video Diffusion Models
📄 Paper | 🌐 Project Page | 💻 Code
TrailBlazer: Trajectory Control for Diffusion-Based Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
DragNUWA: Fine-grained Control in Video Generation by Integrating Text, Image, and Trajectory
📄 Paper | 🌐 Project Page | 💻 Code
Tora: Trajectory-oriented Diffusion Transformer for Video Generation
📄 Paper | 🌐 Project Page
Controllable Longer Image Animation with Diffusion Models
📄 Paper | 🌐 Project Page
MotionCtrl: A Unified and Flexible Motion Controller for Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
MotionBooth: Motion-Aware Customized Text-to-Video Generation
Puppet-Master: Scaling Interactive Video Generation as a Motion Prior for Part-Level Dynamics
📄 Paper | 🌐 Project Page | 💻 Code
Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion
📄 Paper | 🌐 Project Page | 💻 Code
Generative Image Dynamics
📄 Paper | 🌐 Project Page
Motion-Zero: Zero-Shot Moving Object Control Framework for Diffusion-Based Video Generation
Video Diffusion Models are Training-free Motion Interpreter and Controller
📄 Paper | 🌐 Project Page
## Subject Control

Phantom: Subject-consistent video generation via cross-modal alignment
📄 Paper | 🌐 Project Page
Tunnel Try-on: Excavating Spatial-temporal Tunnels for High-quality Virtual Try-on in Videos
Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion
📄 Paper | 🌐 Project Page | 💻 Code
ActAnywhere: Subject-Aware Video Background Generation
📄 Paper | 🌐 Project Page
MotionBooth: Motion-Aware Customized Text-to-Video Generation
Animate-A-Story: Storytelling with Retrieval-Augmented Video Generation
One-Shot Learning Meets Depth Diffusion in Multi-Object Videos
## Area Control

Boximator: Generating Rich and Controllable Motions for Video Synthesis
📄 Paper | 🌐 Project Page
Follow-Your-Click: Open-domain Regional Image Animation via Short Prompts
📄 Paper | 🌐 Project Page | 💻 Code
AnimateAnything: Fine-Grained Open Domain Image Animation with Motion Guidance
📄 Paper | 🌐 Project Page | 💻 Code
Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling
📄 Paper | 🌐 Project Page
Streetscapes: Large-scale Consistent Street View Generation Using Autoregressive Video Diffusion
📄 Paper | 🌐 Project Page
## Video Control

Customizing Motion in Text-to-Video Diffusion Models
📄 Paper | 🌐 Project Page
MotionClone: Training-Free Motion Cloning for Controllable Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models
📄 Paper | 🌐 Project Page | 💻 Code
Motion Inversion for Video Customization
📄 Paper | 🌐 Project Page | 💻 Code
## Brain Control

NeuroCine: Decoding Vivid Video Sequences from Human Brain Activities
## ID Control

FantasyID: Face Knowledge Enhanced ID-Preserving Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
Concat-ID: Towards Universal Identity-Preserving Video Synthesis
📄 Paper | 🌐 Project Page | 💻 Code
Ingredients: Blending Custom Photos with Video Diffusion Transformers
Identity-Preserving Text-to-Video Generation by Frequency Decomposition
📄 Paper | 🌐 Project Page | 💻 Code
VideoMaker: Zero-shot Customized Video Generation with the Inherent Force of Video Diffusion Models
📄 Paper | 🌐 Project Page | 💻 Code
Movie Gen: A Cast of Media Foundation Models
CustomCrafter: Customized Video Generation with Preserving Motion and Concept Composition Abilities
📄 Paper | 🌐 Project Page | 💻 Code
ID-Animator: Zero-Shot Identity-Preserving Human Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
VideoBooth: Diffusion-based Video Generation with Image Prompts
📄 Paper | 🌐 Project Page | 💻 Code
Magic-Me: Identity-Specific Video Customized Diffusion