A versatile, all-in-one toolbox for whole-body humanoid robot control—enabling universal motion tracking, upper–lower body split strategies, and accelerated experimentation across simulation and real-world platforms.
## 🚀 Highlights
- Whole-Body Control Mode: Effortlessly track full-body human motions in a zero-shot fashion—generalize, don’t overfit.
- Upper–Lower Body Split Mode: An enhanced Homie-like control strategy with dynamic walking and powerful manipulation—seamless coordination, robust skills.
- Multi-Robot Ready: Instantly deploy on Unitree G1, H1, H1-2, and Fourier GR-1—with more robots joining the lineup!
- Lightning-Fast Experimentation: Tweak everything with flexible Hydra configs—adapt, iterate, and innovate at speed.
- Sim-to-Real Mastery: Built-in friction & mass randomization, noisy observations, and Sim2Sim testing—engineered for real-world success.
## 📰 News
- [2025/07] First Release for Universal Humanoid Motion Tracking on Unitree G1!
## 🚧 TODO
- Release Environments on Unitree G1
- Release Pre-trained Checkpoints and Training Data
- Release Environments on Different Robots
- Release Deployment Code
- 🚀 Highlights
- 📰 News
- 🚧 TODO
- ⚡ Quick Start
- 🛠️ Installation
- 🗂️ Code Structure
- 🧩 Adding New Environments
- 🔗 Citation
- 📄 License
- 👏 Acknowledgements
## ⚡ Quick Start
The typical workflow for controlling real-world humanoid robots with InternHumanoid:
Train → Play → Sim2Sim → Sim2Real
Train the universal motion tracker for the Unitree G1 (29 DoF):
python legged_gym/scripts/train.py +algo=ppo +robot=g1/g1_29dof +task=imitation/g1_29dof
- To run on CPU: add `+sim_device=cpu +rl_device=cpu`
- To run headless (no rendering): add `+headless`
- Trained policies are saved in `logs/<experiment_name>/<date_time>_<run_name>/model_<iteration>.pt`
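For example, to train headless on the CPU, combine the flags above:
python legged_gym/scripts/train.py +algo=ppo +robot=g1/g1_29dof +task=imitation/g1_29dof +sim_device=cpu +rl_device=cpu +headless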
After training, play the saved checkpoint:
python legged_gym/scripts/play.py +algo=ppo +robot=g1/g1_29dof +task=imitation/g1_29dof
- By default, loads the last model of the last run in the experiment folder.
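If you need to locate that checkpoint yourself (e.g., to export or inspect it), here is a minimal sketch of resolving it from the logs layout above; the helper and the experiment name are illustrative, not part of the repo's API:

```python
from pathlib import Path
import re

def latest_checkpoint(experiment: str, log_root: str = "logs") -> Path:
    # Illustrative helper (not part of InternHumanoid): resolve the newest
    # model_<iteration>.pt from the last run, following the
    # logs/<experiment_name>/<date_time>_<run_name>/ layout described above.
    runs = sorted(p for p in Path(log_root, experiment).iterdir() if p.is_dir())
    last_run = runs[-1]  # the <date_time> prefix makes lexicographic order chronological

    def iteration(path: Path) -> int:
        match = re.search(r"model_(\d+)\.pt$", path.name)
        return int(match.group(1)) if match else -1

    return max(last_run.glob("model_*.pt"), key=iteration)

# e.g. latest_checkpoint("g1_29dof_imitation")  # hypothetical experiment name
```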
Test the saved ONNX model with sim2sim transfer (MuJoCo as the testing environment):
cd sim2sim
python play_im.py --robot g1_29dof
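Conceptually, the sim2sim test rolls out the exported ONNX policy inside a MuJoCo stepping loop. Below is a minimal sketch under assumed file names and a placeholder observation; the actual logic lives in sim2sim/play_im.py:

```python
# Minimal sketch of what a sim2sim test does: roll out the exported ONNX
# policy in MuJoCo. Paths, observation contents, and the action mapping are
# placeholders -- see sim2sim/play_im.py for the actual implementation.
import mujoco
import numpy as np
import onnxruntime as ort

model = mujoco.MjModel.from_xml_path("g1_29dof.xml")  # hypothetical MJCF path
data = mujoco.MjData(model)
policy = ort.InferenceSession("policy.onnx")          # hypothetical export path
obs_name = policy.get_inputs()[0].name

def build_obs(data: mujoco.MjData) -> np.ndarray:
    # Placeholder observation; the real one stacks joint states, base
    # motion, and the motion-tracking targets the policy was trained on.
    return np.concatenate([data.qpos, data.qvel]).astype(np.float32)

for _ in range(1000):
    obs = build_obs(data)[None, :]                     # add a batch dimension
    action = policy.run(None, {obs_name: obs})[0][0]   # first output, first row
    data.ctrl[:] = action[: model.nu]                  # placeholder action mapping
    mujoco.mj_step(model, data)
```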
More details on training and playing can be found in the documentation.
## 🛠️ Installation
Please refer to the installation guide for detailed steps and configuration instructions.
## 🗂️ Code Structure
`legged_gym/`
- `envs/`: Environment/task definitions
- `config/`: YAML configuration files for tasks, robots, terrains, and algorithms
- `utils/`: Math, logging, motion libraries, terrain helpers, and the task registry
- `scripts/`: Entry-point scripts for training, playing, and exporting models

`rsl_rl/`
- `algorithms/`: RL algorithms (e.g., PPO variants)
- `modules/`: Neural network modules (actor-critic, normalization, etc.)
- `runners/`: Training and evaluation runners
- `env/`: Environment wrappers and vectorized interfaces
- `storage/`: Rollout storage and replay buffers
- `utils/`: Utility functions and experiment helpers
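For orientation, the actor-critic modules such frameworks provide are typically small MLP pairs. A minimal sketch in the spirit of rsl_rl's modules (layer sizes and names are assumptions, not the repo's exact classes):

```python
# Illustrative actor-critic in the spirit of rsl_rl's modules/; the real
# classes add observation normalization, action noise handling, etc.
import torch
import torch.nn as nn

def mlp(in_dim: int, out_dim: int, hidden=(512, 256, 128)) -> nn.Sequential:
    layers, last = [], in_dim
    for h in hidden:
        layers += [nn.Linear(last, h), nn.ELU()]
        last = h
    layers.append(nn.Linear(last, out_dim))
    return nn.Sequential(*layers)

class ActorCritic(nn.Module):
    def __init__(self, num_obs: int, num_actions: int):
        super().__init__()
        self.actor = mlp(num_obs, num_actions)  # outputs the mean action
        self.critic = mlp(num_obs, 1)           # outputs the state value

    def forward(self, obs: torch.Tensor):
        return self.actor(obs), self.critic(obs)
```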
## 🧩 Adding New Environments
To add a new simulation environment or modify configuration files, see add new experiments.md for a step-by-step guide and detailed examples.
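As a rough preview of what that guide covers, a new task typically pairs an environment class (with per-term reward methods) with a YAML config. A hypothetical skeleton; every name here is illustrative:

```python
# Hypothetical task skeleton in the legged_gym style that InternHumanoid
# builds on; class, attribute, and reward names are illustrative only --
# the real base classes and config wiring are covered in the guide above.
import torch

class MyHumanoidTask:
    def __init__(self, num_envs: int = 4096, num_dofs: int = 29, device: str = "cpu"):
        self.dof_pos = torch.zeros(num_envs, num_dofs, device=device)
        self.target_pos = torch.zeros(num_envs, num_dofs, device=device)

    def compute_observations(self) -> torch.Tensor:
        # Assemble the policy input, e.g. joint states plus tracking targets.
        return torch.cat([self.dof_pos, self.target_pos], dim=-1)

    def _reward_tracking(self) -> torch.Tensor:
        # legged_gym convention: one _reward_* method per term; the framework
        # sums the terms with weights taken from the task's YAML config.
        err = torch.sum((self.dof_pos - self.target_pos) ** 2, dim=-1)
        return torch.exp(-err)
```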
## 🔗 Citation
If you find our work helpful, please cite:
@misc{internhumanoid2025,
  title = {InternHumanoid: Universal Whole-Body Control and Imitation for Humanoid Robots},
  author = {InternHumanoid Contributors},
  howpublished = {\url{https://github.com/InternRobotics/InternHumanoid}},
  year = {2025}
}
## 📄 License
InternHumanoid is MIT licensed.
Open-sourced data are under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
## 👏 Acknowledgements
- legged_gym: Foundation for the training and running code.
- rsl_rl: Reinforcement learning algorithms.
- mujoco: Powerful simulation functionalities.
- unitree_rl: Reinforcement learning framework for Unitree robots.
- unitree_sdk2_python: Hardware communication interface for physical deployment.