Yuxue Yang1,2, Lue Fan2, Zuzeng Lin3, Feng Wang4, Zhaoxiang Zhang1,2†
1UCAS 2CASIA 3TJU 4CreateAI †Corresponding author
Official implementation of LayerAnimate: Layer-level Control for Animation, ICCV 2025
Videos on the project website vividly introduce our work and present qualitative results for an enhanced viewing experience.
- [25-06-26] Our work is accepted by ICCV 2025! 🎉
- [25-05-29] We have extended LayerAnimate to a DiT variant (based on Wan2.1 1.3B), enabling the generation of 81 frames at 480 × 832 resolution. It performs surprisingly well in the real-world domain, as shown on the project website.
- [25-03-31] Release the online demo on Hugging Face.
- [25-03-30] Release a gradio script app.py to run the demo locally. Please raise an issue if you encounter any problems.
- [25-03-22] Release the checkpoint and the inference script. We update the layer curation pipeline and add trajectory control, enabling flexible composition of various layer-level controls.
- [25-01-15] Release the project page and the arXiv preprint.
git clone git@github.com:IamCreateAI/LayerAnimate.git
conda create -n layeranimate python=3.10 -y
conda activate layeranimate
pip install -r requirements.txt
pip install wan@git+https://github.com/Wan-Video/Wan2.1 # If you want to use DiT variant.
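Optionally, you can sanity-check the environment before downloading weights. This assumes requirements.txt installs PyTorch, which the inference scripts depend on:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"  # should print a version and, on a GPU machine with a CUDA build, True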
Models | Download Link | Video Size (frames × height × width) |
---|---|---|
UNet variant | Huggingface 🤗 | 16 × 320 × 512 |
DiT variant | Huggingface 🤗 | 81 × 480 × 832 |
Download the pretrained weights and put them in the checkpoints/ directory as follows:
checkpoints/
├─ LayerAnimate-Mix (UNet variant)
└─ LayerAnimate-DiT
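If you prefer the command line, huggingface-cli can fetch the weights directly into place. The repo IDs below are placeholders; substitute the actual Hugging Face repos linked in the table above:
# <unet-repo-id> and <dit-repo-id> are placeholders for the repos linked above.
huggingface-cli download <unet-repo-id> --local-dir checkpoints/LayerAnimate-Mix
huggingface-cli download <dit-repo-id> --local-dir checkpoints/LayerAnimate-DiT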
Run the following commands to generate videos from the prepared demo inputs:
python scripts/animate_Layer.py --config scripts/demo1.yaml --savedir outputs/sample1
python scripts/animate_Layer.py --config scripts/demo2.yaml --savedir outputs/sample2
python scripts/animate_Layer.py --config scripts/demo3.yaml --savedir outputs/sample3
python scripts/animate_Layer.py --config scripts/demo4.yaml --savedir outputs/sample4
python scripts/animate_Layer.py --config scripts/demo5.yaml --savedir outputs/sample5
Note that the layer-level controls are prepared in __assets__/demos.
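The five demos can also be run in one pass; this is just the commands above wrapped in a shell loop:
for i in 1 2 3 4 5; do
  python scripts/animate_Layer.py --config scripts/demo${i}.yaml --savedir outputs/sample${i}
done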
You can run the demo locally by executing the following command:
python scripts/app.py --savedir outputs/gradio
Then, open the link in your browser to access the demo interface. The output video and the video with the trajectory will be saved in the outputs/gradio directory.
Run the following command to generate a video with the DiT variant:
python scripts/infer_DiT.py --config __assets__/demos/realworld/config.yaml --savedir outputs/realworld
We take the config.yaml in __assets__/demos/realworld/ as an example. You can also modify the config file to suit your needs.
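For custom inputs, one possible workflow (the my_scene paths below are illustrative, not part of the repo) is to copy the example config directory, edit it, and point the script at the copy:
cp -r __assets__/demos/realworld __assets__/demos/my_scene
# edit __assets__/demos/my_scene/config.yaml to point at your own inputs, then:
python scripts/infer_DiT.py --config __assets__/demos/my_scene/config.yaml --savedir outputs/my_scene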
- Release the code and checkpoint of LayerAnimate.
- Upload a gradio script to run the demo locally.
- Create an online demo on Hugging Face Spaces.
- DiT-based LayerAnimate.
- Release checkpoints trained under a single control modality with better performance.
- Release layer curation pipeline.
- Release the training script for LayerAnimate.
We sincerely thank ToonCrafter, LVCD, AniDoc, and Wan-Video for their inspiring work and contributions to the AIGC community.
If you find our work helpful, please consider citing it as follows.
@article{yang2025layeranimate,
  author  = {Yang, Yuxue and Fan, Lue and Lin, Zuzeng and Wang, Feng and Zhang, Zhaoxiang},
  title   = {LayerAnimate: Layer-level Control for Animation},
  journal = {arXiv preprint arXiv:2501.08295},
  year    = {2025},
}