Update LeRobot submodule and add web UI for data collection #9


Merged: 52 commits, May 27, 2025

Commits (52)
e04819d
Remove data_collector and policy_to_trajectory packages along with th…
Woojin-Crive May 15, 2025
664d8e3
Removed package-names from ros-ci.yml
Seongoo May 19, 2025
9b9e15e
Update LeRobot submodule
Woojin-Crive May 19, 2025
0d621b2
Added meta ROS 2 package for physical_ai_tools
Seongoo May 21, 2025
28d448f
Merge branch 'feature-rivision-2' of https://github.com/ROBOTIS-GIT/p…
Seongoo May 21, 2025
a23b118
Updated the version information for physical_ai_manager
Seongoo May 22, 2025
2f3a84c
Changed update date in CHANGELOG.rst
Seongoo May 22, 2025
aed7845
Renamed physical_ai_manager to physical_ai_server.
DongyunRobotis May 23, 2025
99246cc
Renamed physical_ai_manager to physical_ai_server.
DongyunRobotis May 23, 2025
92d6089
Add physical_ai_manager (Web UI for physical AI)
ola31 May 23, 2025
f36dc86
feat: Update favicon and modify app metadata
ola31 May 23, 2025
99037d9
feat: Update logo images for the application
ola31 May 23, 2025
0001cdb
feat: Update package-lock for physical_ai_manager
ola31 May 23, 2025
b3f28ef
feat: Enhance TopicSelectModal with hover and select states
ola31 May 23, 2025
6808988
fix: Correct author name in files
ola31 May 23, 2025
25e129d
Modified meta package
Seongoo May 23, 2025
b9af3ba
Added package name in ros-ci.yml
Seongoo May 23, 2025
ea8e96a
Fixed EOL
Seongoo May 23, 2025
cb6f4da
Modified ros-ci.yml
Seongoo May 23, 2025
3a6d497
chore: Add .vscode to .gitignore
ola31 May 23, 2025
71f760a
chore: Update topic list and image source URL format
ola31 May 23, 2025
c67b1a5
chore: modify error message language in YamlEditor
ola31 May 23, 2025
398d554
Updated LeRobot submodule
Seongoo May 26, 2025
422bfbe
Modified package.xml
Seongoo May 26, 2025
e79e662
Added a change log for physical_ai_manager
Seongoo May 26, 2025
f169aeb
Modified change logs
Seongoo May 26, 2025
e94a7cb
Modified change logs
Seongoo May 26, 2025
f1e44e4
Add Docker setup for physical_ai_manager including container script, …
Woojin-Crive May 26, 2025
4071fd5
Update LeRobot submodule to commit 82b32dd
Woojin-Crive May 26, 2025
294e542
Modified README.md
Seongoo May 26, 2025
155ef0d
Fixed image link in README.md
Seongoo May 26, 2025
5b48618
Fixed image link in README.md
Seongoo May 26, 2025
a23266b
Fixed image height in README.md
Seongoo May 26, 2025
31e6a4a
Fixed image border radius in README.md
Seongoo May 26, 2025
8c064e0
Changed image link in README.md
Seongoo May 26, 2025
c60db81
Changed image link in README.md
Seongoo May 26, 2025
d34e0ae
Changed image link in README.md
Seongoo May 26, 2025
c6ef5e5
Changed image height in README.md
Seongoo May 26, 2025
1f377e5
Updated README.md
Seongoo May 26, 2025
0d773d8
Modified README.md
Seongoo May 26, 2025
2007eaf
Modified detailed descriptions about links in README.md
Seongoo May 27, 2025
9461806
Adjusted small changes in README.md
Seongoo May 27, 2025
2db00f0
fix: add EOL
ola31 May 27, 2025
c734900
Removed images from README.md
Seongoo May 27, 2025
0af90ad
Merge branch 'feature-rivision-2' of https://github.com/ROBOTIS-GIT/p…
Seongoo May 27, 2025
111c305
chore: update .gitignore for React/Node.js
ola31 May 27, 2025
451bed2
chore: remove README.md from physical_ai_manager
ola31 May 27, 2025
7cce297
chore: clean up comments in index.html
ola31 May 27, 2025
e2af6c4
chore: remove unnecessary comments
ola31 May 27, 2025
f5ab369
chore: remove logo.svg from project
ola31 May 27, 2025
1bcc69e
chore: update description in index.html
ola31 May 27, 2025
658a659
chore: update Node.js version in Dockerfile
ola31 May 27, 2025
4 changes: 1 addition & 3 deletions .github/workflows/ros-ci.yml
@@ -55,6 +55,4 @@ jobs:
        target-ros2-distro: ${{ matrix.ros_distribution }}
        vcs-repo-file-url: ""
        package-name: |
-         data_collector
-         policy_to_trajectory
-         physical_ai_manager
+         physical_ai_server
27 changes: 27 additions & 0 deletions .gitignore
@@ -172,3 +172,30 @@ cython_debug/

# PyPI configuration file
.pypirc

## IDE: vscode
.vscode/

# React / Node.js

# dependencies
node_modules/
/.pnp
.pnp.js

# testing
/coverage

# production
/build

# misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local

npm-debug.log*
yarn-debug.log*
yarn-error.log*
279 changes: 13 additions & 266 deletions README.md
@@ -1,272 +1,19 @@
# physical_ai_tools
# Physical AI Tools

This repository offers an interface for developing physical AI applications using LeRobot and ROS 2.
This repository offers an interface for developing physical AI applications using LeRobot and ROS 2. For detailed usage instructions, please refer to the e-manual below.
- [ROBOTIS e-Manual for AI Worker](https://ai.robotis.com/)

## Installation
To learn more about the ROS 2 packages for the AI Worker, visit:
- [AI Worker ROS 2 Packages](https://github.com/ROBOTIS-GIT/ai_worker)

### 1. Clone the Source Code
```bash
cd ~/${WORKSPACE}/src
git clone https://github.com/ROBOTIS-GIT/physical_ai_tools.git --recursive
```
To explore our open-source platforms in a simulation environment, visit:
- [Simulation Models](https://github.com/ROBOTIS-GIT/robotis_mujoco_menagerie)

### 2. Install 🤗 LeRobot
```bash
cd ~/${WORKSPACE}/src/physical_ai_tools/lerobot
pip install --no-binary=av -e .
```
For usage instructions and demonstrations of the AI Worker, check out:
- [Tutorial Videos](https://www.youtube.com/@ROBOTISOpenSourceTeam)

> **NOTE:** If you encounter build errors, you may need to install additional dependencies (`cmake`, `build-essential`, and the FFmpeg development libraries). On Linux, run:
`sudo apt-get install cmake build-essential python-dev pkg-config libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libswresample-dev libavfilter-dev`. For other systems, see: [Compiling PyAV](https://pyav.org/docs/develop/overview/installation.html#bring-your-own-ffmpeg)
To access datasets and pre-trained models for our open-source platforms, see:
- [AI Models & Datasets](https://huggingface.co/ROBOTIS)

If you're using a Docker container, you may need to add the `--break-system-packages` option when installing with `pip`.
```bash
pip install --no-binary=av -e . --break-system-packages
```

### 3. Build the Workspace
Navigate to your ROS 2 workspace directory and build the package using `colcon`:
```bash
cd ~/${WORKSPACE}
colcon build --symlink-install --packages-select physical_ai_tools
```

### 4. Source the Workspace
After the build completes successfully, source the setup script:
```bash
source ~/${WORKSPACE}/install/setup.bash
```

### 5. Install Packages
Make the packages available as a Python module in your current environment:
```bash
cd ~/${WORKSPACE}/src/physical_ai_tools/data_collector
pip install .
```
```bash
cd ~/${WORKSPACE}/src/physical_ai_tools/policy_to_trajectory
pip install .
```
## Record LeRobot Datasets

### 1. Authenticate with Hugging Face
Make sure you've logged in using a **write-access token** generated from your [Hugging Face settings](https://huggingface.co/settings/tokens):
```bash
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```

Store your Hugging Face username in a variable:
```bash
HF_USER=$(huggingface-cli whoami | head -n 1)
echo $HF_USER
```

---

### 2. Check Your Camera Indexes

To include image data, check which camera indexes are available on your system:
```bash
cd ~/${WORKSPACE}/src/physical_ai_tools/lerobot
```
```bash
python lerobot/common/robot_devices/cameras/opencv.py \
--images-dir outputs/images_from_opencv_cameras
```

Example output:

```text
Linux detected. Finding available camera indices through scanning '/dev/video*' ports
Camera found at index 0
Camera found at index 1
Camera found at index 2
...
Saving images to outputs/images_from_opencv_cameras
```

Check the saved images in `outputs/images_from_opencv_cameras` to determine which index corresponds to which physical camera:
```text
camera_00_frame_000000.png
camera_01_frame_000000.png
...
```
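To double-check the mapping, the saved filenames can be grouped by camera index programmatically. This is a minimal sketch, assuming the `camera_XX_frame_XXXXXX.png` naming shown above; the helper `group_by_camera` is hypothetical and not part of LeRobot:

```python
import re
from collections import defaultdict

def group_by_camera(filenames):
    """Group saved frame filenames by camera index.

    Assumes the naming pattern shown above:
    camera_<index>_frame_<frame>.png
    """
    pattern = re.compile(r"camera_(\d+)_frame_(\d+)\.png")
    frames = defaultdict(list)
    for name in filenames:
        match = pattern.fullmatch(name)
        if match:
            camera_index = int(match.group(1))
            frames[camera_index].append(int(match.group(2)))
    return dict(frames)

files = [
    "camera_00_frame_000000.png",
    "camera_00_frame_000001.png",
    "camera_01_frame_000000.png",
    "camera_02_frame_000000.png",
]
print(group_by_camera(files))
# {0: [0, 1], 1: [0], 2: [0]}
```

A camera index that produced no files at all is simply absent from the result, which is a quick way to spot a camera that failed to open.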

Once identified, update the camera indexes in the `"ffw"` robot configuration file:

```
lerobot/common/robot_devices/robots/configs.py
```

Modify it like so:
```python
@RobotConfig.register_subclass("ffw")
@dataclass
class FFWRobotConfig(ManipulatorRobotConfig):
    [...]
    cameras: dict[str, CameraConfig] = field(
        default_factory=lambda: {
            "cam_head": OpenCVCameraConfig(
                camera_index=0,  # To be changed
                fps=30,
                width=640,
                height=480,
            ),
            "cam_wrist_1": OpenCVCameraConfig(
                camera_index=1,  # To be changed
                fps=30,
                width=640,
                height=480,
            ),
            "cam_wrist_2": OpenCVCameraConfig(
                camera_index=2,  # To be changed
                fps=30,
                width=640,
                height=480,
            ),
        }
    )

    mock: bool = False
```

---

### 3. Record Your Dataset

Launch the ROS 2 data collector node.
```bash
# For OpenManipulator-X
ros2 launch data_collector data_collector.launch.py mode:=omx
# For AI Worker
ros2 launch data_collector data_collector.launch.py mode:=worker
```

Open a new terminal, and navigate to the `lerobot` directory:
```bash
cd ~/${WORKSPACE}/src/physical_ai_tools/lerobot
```

Run the following command to start recording your Hugging Face dataset:
```bash
python lerobot/scripts/control_robot.py \
--robot.type=ffw \
--control.type=record \
--control.single_task="pick and place objects" \
--control.fps=30 \
--control.repo_id=${HF_USER}/ffw_test \
--control.tags='["tutorial"]' \
--control.episode_time_s=20 \
--control.reset_time_s=10 \
--control.num_episodes=2 \
--control.push_to_hub=true \
--control.use_ros=true \
--control.play_sounds=false
```

💡 Make sure to replace `${HF_USER}` with your actual Hugging Face username.

💡 If you don't want to push your dataset to the Hugging Face Hub, set `--control.push_to_hub=false`.

---

### 🔧 Key Parameters to Customize

To create your own dataset, you only need to modify the following five options:

- **`--control.repo_id`**
The Hugging Face dataset repository ID in the format `<username>/<dataset_name>`. This is where your dataset will be saved and optionally pushed to the Hugging Face Hub.

- **`--control.single_task`**
The name of the task you're performing (e.g., "pick and place objects").

- **`--control.episode_time_s`**
Duration (in seconds) to record each episode.

- **`--control.reset_time_s`**
Time allocated (in seconds) for resetting your environment between episodes.

- **`--control.num_episodes`**
Total number of episodes to record for the dataset.

Of course, you can modify other parameters as needed to better suit your use case.
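As a quick sanity check on those five options, the sketch below assembles the record command and estimates how long a recording session will take. The helpers `record_command` and `session_length_s` are hypothetical, not part of LeRobot; they only mirror the flags shown above:

```python
import shlex

def record_command(repo_id, single_task, episode_time_s, reset_time_s, num_episodes):
    """Assemble the control_robot.py record invocation from the five key
    parameters above; other flags keep the tutorial's values."""
    flags = {
        "robot.type": "ffw",
        "control.type": "record",
        "control.single_task": single_task,
        "control.fps": 30,
        "control.repo_id": repo_id,
        "control.episode_time_s": episode_time_s,
        "control.reset_time_s": reset_time_s,
        "control.num_episodes": num_episodes,
        "control.use_ros": "true",
    }
    args = " ".join(f"--{key}={shlex.quote(str(value))}" for key, value in flags.items())
    return f"python lerobot/scripts/control_robot.py {args}"

def session_length_s(episode_time_s, reset_time_s, num_episodes):
    # Every episode is followed by an environment-reset period.
    return num_episodes * (episode_time_s + reset_time_s)

cmd = record_command("user/ffw_test", "pick and place objects", 20, 10, 2)
print(session_length_s(20, 10, 2))  # 2 * (20 + 10) = 60 seconds
```

Estimating the session length up front is useful when scaling `--control.num_episodes` to a larger dataset, since total recording time grows linearly with it.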

---
🎉 All set — now you’re ready to create your dataset!

📺 Need a walkthrough? Check out this [video tutorial on YouTube](https://www.youtube.com/watch?v=n_Ljp_xuFEM) to see the full process of recording a dataset with LeRobot.

### 4. Visualize Your Dataset

You can also view your recorded dataset through a local web server. This is useful for quickly checking the collected data.

Run the following command:

```bash
python lerobot/scripts/visualize_dataset_html.py \
--repo-id ${HF_USER}/ffw_test
```

🖥️ This will start a local web server and open your dataset in a browser-friendly format.

## Train a Policy on Your Data

Run the following command to start training a policy using your dataset:

```bash
python lerobot/scripts/train.py \
--dataset.repo_id=${HF_USER}/ffw_test \
--policy.type=act \
--output_dir=outputs/train/act_ffw_test \
--job_name=act_ffw_test \
--policy.device=cuda \
--wandb.enable=true
```

(Optional) You can upload the latest checkpoint to the Hugging Face Hub with the following command:

```bash
huggingface-cli upload ${HF_USER}/act_ffw_test \
outputs/train/act_ffw_test/checkpoints/last/pretrained_model
```

## Evaluation

### 1. Launch the ROS 2 `action_to_trajectory` and `topic_to_observation` nodes

```bash
# For OpenManipulator-X
ros2 launch policy_to_trajectory policy_to_trajectory.launch.py mode:=omx
# For AI Worker
ros2 launch policy_to_trajectory policy_to_trajectory.launch.py mode:=worker
```

### 2. Evaluate your policy

You can evaluate the policy on the robot using the `record` mode, which allows you to visualize the evaluation later on.

```bash
python lerobot/scripts/control_robot.py \
--robot.type=ffw \
--control.type=record \
--control.single_task="pick and place objects" \
--control.fps=30 \
--control.repo_id=${HF_USER}/eval_ffw_test \
--control.tags='["tutorial"]' \
--control.episode_time_s=20 \
--control.reset_time_s=10 \
--control.num_episodes=2 \
--control.push_to_hub=true \
--control.use_ros=true \
--control.policy.path=outputs/train/act_ffw_test/checkpoints/last/pretrained_model \
--control.play_sounds=false
```

### 3. Visualize Evaluation

You can then visualize the evaluation results using the following command:

```bash
python lerobot/scripts/visualize_dataset_html.py \
--repo-id ${HF_USER}/eval_ffw_test
```
To use the Docker image for running ROS packages and Physical AI tools with the AI Worker, visit:
- [Docker Images](https://hub.docker.com/r/robotis/ros/tags)
37 changes: 0 additions & 37 deletions data_collector/config/joint_order.yaml

This file was deleted.
