This repository implements tasks for the SO‑ARM100 and SO‑ARM101 robots using Isaac Lab. It serves as the foundation for several tutorials in the LycheeAI Hub series Project: SO‑ARM101 × Isaac Sim × Isaac Lab.
📰 News featuring this repository:
- 10 June 2025: 🎥 LycheeAI Channel Premiere: SO-ARM101 tutorial series announcement! 🔗 Watch on YouTube
- 23 April 2025: 🤖 NVIDIA Omniverse Livestream: Training a Robot from Scratch in Simulation (URDF → OpenUSD). 🔗 Watch on YouTube
- 19 April 2025: 🎥 LycheeAI Tutorial: How to Create External Projects in Isaac Lab. 🔗 Watch on YouTube
🎬 Watch the Lift Task in action
- Install Isaac Lab by following the official installation guide (using conda).
- Clone this repository outside the IsaacLab directory.
- Install the package:
python -m pip install -e source/SO_100
To list all available environments:
python scripts/list_envs.py
Two scripts can help verify your setup:
Zero Agent
Sends zero commands to all robots, confirming that the environment loads correctly:
python scripts/zero_agent.py --task SO-ARM100-Reach-Play-v0
Random Agent
Sends random commands to all robots, confirming proper actuation:
python scripts/random_agent.py --task SO-ARM100-Reach-Play-v0
You can train a policy for SO‑ARM100 / SO‑ARM101 tasks (for example, the Reach task, a basic RL-based inverse-kinematics problem) with the rsl_rl and/or skrl libraries:
python scripts/rsl_rl/train.py --task SO-ARM100-Reach-v0 --headless
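Only the rsl_rl entry point is shown above. Assuming this repository mirrors the standard Isaac Lab external-project layout with a parallel scripts/skrl/ directory (an assumption; verify the actual path in your clone), the skrl counterpart would be invoked the same way:

```shell
# Hypothetical skrl entry point, mirroring the rsl_rl script layout above;
# check the scripts/ directory of your clone for the actual path.
python scripts/skrl/train.py --task SO-ARM100-Reach-v0 --headless
```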
After training, validate the learned policy:
python scripts/rsl_rl/play.py --task SO-ARM100-Reach-Play-v0
This ensures that your policy performs as expected in Isaac Lab before attempting real‑world transfer.
Work in progress.
Work in progress.
We welcome contributions of all kinds! Please read our Contributing Guide to learn how to set up your environment, follow our coding style, and submit pull requests.
This project is licensed under the BSD 3-Clause License. See the LICENSE file for details.
This project builds upon the excellent work of several open-source projects and communities:
- Isaac Lab - The foundational robotics simulation framework that powers this project
- NVIDIA Isaac Sim - The underlying physics simulation platform
- RSL-RL - Reinforcement learning library used for training policies
- SKRL - Alternative RL library integration
- SO-ARM100/SO-ARM101 Robot - The hardware platform that inspired this simulation environment
Special thanks to:
- The Isaac Lab development team at NVIDIA for providing the simulation framework
- Hugging Face and The Robot Studio for the SO‑ARM robot series
- The LycheeAI Hub community for tutorials and support
If you use this work, please cite it as:
@software{Louis_Isaac_Lab_2025,
  author  = {Le Lay, Louis and Bay, Muammer},
  doi     = {10.5281/zenodo.16794229},
  license = {BSD-3-Clause},
  month   = apr,
  title   = {Isaac Lab – SO‑ARM100 / SO‑ARM101 Project},
  url     = {https://github.com/MuammerBay/isaac_so_arm101},
  version = {1.1.0},
  year    = {2025}
}