# Kollani Motion Lab (MoLab)
A toolbox for human motion generation and inbetweening, developed during the Kollani project.
- GitHub repository: https://github.com/JasonNero/MoLab/
- Documentation: https://JasonNero.github.io/MoLab/
- Text-to-Motion: Describe the motion you want to see
- In-Betweening: Input your key poses and let AI fill the gaps
- Motion Composition: Compose sequences and generate transitions
To get started with MoLab, see the installation guide. The first steps page will then walk you through starting the components and running your first inference.
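The backend exposes inference over HTTP, so a first request can be sent from any HTTP client. Below is a minimal sketch assuming the backend runs locally on port 8000; the route name (`/generate`) and payload fields (`text`, `num_frames`) are hypothetical placeholders, so refer to the documentation for the actual API schema:

```python
# Hypothetical request: the URL, route, and payload fields below are
# illustrative assumptions, not the actual MoLab API.
import requests

response = requests.post(
    "http://localhost:8000/generate",  # assumed local backend address
    json={
        "text": "a person walks forward and waves",  # text-to-motion prompt
        "num_frames": 120,                           # assumed parameter name
    },
    timeout=300,  # diffusion sampling can take a while
)
response.raise_for_status()
motion = response.json()  # generated motion data; format per the docs
```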
The repository is organized as follows:

```
MoLab/
├── backend/          # FastAPI endpoint for inference
├── models/condmdi/   # CondMDI fork with new features and improvements
│   └── README.md     # CondMDI model description and usage instructions
├── frontend/         # Godot user interface for the MoLab Sequencer
├── dcc/              # DCC plugins (Maya, Blender, etc.)
├── docs/             # Documentation for the project
├── Justfile          # Automates build/test tasks across components
└── README.md         # Project description and setup instructions
```
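To picture how the pieces fit together, here is a minimal FastAPI service in the spirit of `backend/`. This is not the actual MoLab backend; the route, request model, and `run_inference` stub are invented for illustration:

```python
# Minimal sketch of an inference endpoint, NOT the real MoLab backend.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerationRequest(BaseModel):
    text: str             # motion description prompt
    num_frames: int = 120

def run_inference(text: str, num_frames: int) -> list[list[float]]:
    # Stand-in for the real model call (e.g. sampling CondMDI);
    # returns dummy per-frame data so the sketch stays self-contained.
    return [[0.0, 0.0, 0.0] for _ in range(num_frames)]

@app.post("/generate")
def generate(req: GenerationRequest) -> dict:
    motion = run_inference(req.text, req.num_frames)
    return {"frames": len(motion), "motion": motion}
```

Saved as `main.py`, such a sketch could be served with `uvicorn main:app` and queried exactly as in the request example above.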
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
Kollani is a collaborative innovation project by SERU Animation GmbH, RnDeep GmbH, and the Institute for Applied Artificial Intelligence at the Hochschule der Medien Stuttgart.
The project is funded by the Ministerium für Wirtschaft, Arbeit und Tourismus Baden-Württemberg as part of an investBW initiative.
We would like to thank the following projects and their contributors for the great foundations we build upon: diffusion-motion-inbetweening, GMD, MDM, guided-diffusion, MotionCLIP, text-to-motion, actor, joints2smpl, MoDi.
This code is distributed under the MIT license.
Note that our code depends on other libraries and pretrained models, including CondMDI, CLIP, SMPL, SMPL-X, and PyTorch3D, and uses datasets that each have their own licenses, which must also be followed.
> [!WARNING]
> By using the pre-trained models of diffusion-motion-inbetweening (CondMDI), you agree to adhere to the licenses of the respective model checkpoints and datasets. HumanML3D is non-commercial and for research purposes only due to its use of the AMASS dataset. As a result, this project follows the same restrictions and is limited to non-commercial, research use. To use the models commercially, you must train your own models on a dataset that you have the rights to use commercially.