Update LeRobot submodule and add web UI for data collection #9
Merged
Commits (showing changes from 43 of 52 commits)
- e04819d Remove data_collector and policy_to_trajectory packages along with th… (Woojin-Crive)
- 664d8e3 Removed package-names from ros-ci.yml (Seongoo)
- 9b9e15e Update LeRobot submodule (Woojin-Crive)
- 0d621b2 Added meta ROS 2 package for physical_ai_tools (Seongoo)
- 28d448f Merge branch 'feature-rivision-2' of https://github.com/ROBOTIS-GIT/p… (Seongoo)
- a23b118 Updated the version information for physical_ai_manager (Seongoo)
- 2f3a84c Changed update date in CHANGELOG.rst (Seongoo)
- aed7845 Renamed physical_ai_manager to physical_ai_server. (DongyunRobotis)
- 99246cc Renamed physical_ai_manager to physical_ai_server. (DongyunRobotis)
- 92d6089 Add physical_ai_manager (Web UI for physical AI) (ola31)
- f36dc86 feat: Update favicon and modify app metadata (ola31)
- 99037d9 feat: Update logo images for the application (ola31)
- 0001cdb feat: Update package-lock for physical_ai_manager (ola31)
- b3f28ef feat: Enhance TopicSelectModal with hover and select states (ola31)
- 6808988 fix: Correct author name in files (ola31)
- 25e129d Modified meta package (Seongoo)
- b9af3ba Added package name in ros-ci.yml (Seongoo)
- ea8e96a Fixed EOL (Seongoo)
- cb6f4da Modified ros-ci.yml (Seongoo)
- 3a6d497 chore: Add .vscode to .gitignore (ola31)
- 71f760a chore: Update topic list and image source URL format (ola31)
- c67b1a5 chore: modify error message language in YamlEditor (ola31)
- 398d554 Updated LeRobot submodule (Seongoo)
- 422bfbe Modified package.xml (Seongoo)
- e79e662 Added a change log for physical_ai_manager (Seongoo)
- f169aeb Modified change logs (Seongoo)
- e94a7cb Modified change logs (Seongoo)
- f1e44e4 Add Docker setup for physical_ai_manager including container script, … (Woojin-Crive)
- 4071fd5 Update LeRobot submodule to commit 82b32dd (Woojin-Crive)
- 294e542 Modified README.md (Seongoo)
- 155ef0d Fixed image link in README.md (Seongoo)
- 5b48618 Fixed image link in README.md (Seongoo)
- a23266b Fixed image height in README.md (Seongoo)
- 31e6a4a Fixed image border radius in README.md (Seongoo)
- 8c064e0 Changed image link in README.md (Seongoo)
- c60db81 Changed image link in README.md (Seongoo)
- d34e0ae Changed image link in README.md (Seongoo)
- c6ef5e5 Changed image height in README.md (Seongoo)
- 1f377e5 Updated README.md (Seongoo)
- 0d773d8 Modified README.md (Seongoo)
- 2007eaf Modified detailed descriptions about links in README.md (Seongoo)
- 9461806 Adjusted small changes in README.md (Seongoo)
- 2db00f0 fix: add EOL (ola31)
- c734900 Removed images from README.md (Seongoo)
- 0af90ad Merge branch 'feature-rivision-2' of https://github.com/ROBOTIS-GIT/p… (Seongoo)
- 111c305 chore: update .gitignore for React/Node.js (ola31)
- 451bed2 chore: remove README.md from physical_ai_manager (ola31)
- 7cce297 chore: clean up comments in index.html (ola31)
- e2af6c4 chore: remove unnecessary comments (ola31)
- f5ab369 chore: remove logo.svg from project (ola31)
- 1bcc69e chore: update description in index.html (ola31)
- 658a659 chore: update Node.js version in Dockerfile (ola31)
Filter by extension
Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
There are no files selected for viewing
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
`.gitignore`:

```
@@ -172,3 +172,6 @@ cython_debug/

# PyPI configuration file
.pypirc

## IDE: vscode
.vscode/
```
`README.md` (`@@ -1,272 +1,25 @@`):
# physical_ai_tools
# Physical AI Tools

This repository offers an interface for developing physical AI applications using LeRobot and ROS 2.

<p align="center">
  <img src="https://cdn.klnews.co.kr/news/photo/202504/316308_58702_2524.png" alt="AI_Worker" height="350"/>
  <img src="https://cdn11.bigcommerce.com/s-76o5u/images/stencil/original/uploaded_images/229751-232249-1155.png?t=1733856376" alt="OM_Y" height="350"/>
</p>
## Installation

### 1. Clone the Source Code
```bash
cd ~/${WORKSPACE}/src
git clone https://github.com/ROBOTIS-GIT/physical_ai_tools.git --recursive
```

This repository offers an interface for developing physical AI applications using LeRobot and ROS 2. For detailed usage instructions, please refer to the e-manual below.
- [ROBOTIS e-Manual for AI Worker](https://ai.robotis.com/)
### 2. Install 🤗 LeRobot
```bash
cd ~/${WORKSPACE}/src/physical_ai_tools/lerobot
pip install --no-binary=av -e .
```

To learn more about the ROS 2 packages for the AI Worker, visit:
- [AI Worker ROS 2 Packages](https://github.com/ROBOTIS-GIT/ai_worker)

> **NOTE:** If you encounter build errors, you may need to install additional dependencies (`cmake`, `build-essential`, and the FFmpeg libraries). On Linux, run:
> `sudo apt-get install cmake build-essential python-dev pkg-config libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libswresample-dev libavfilter-dev`
> For other systems, see: [Compiling PyAV](https://pyav.org/docs/develop/overview/installation.html#bring-your-own-ffmpeg)

To explore our open-source platforms in a simulation environment, visit:
- [Simulation Models](https://github.com/ROBOTIS-GIT/robotis_mujoco_menagerie)
If you're using a Docker container, you may need to add the `--break-system-packages` option when installing with `pip`:
```bash
pip install --no-binary=av -e . --break-system-packages
```

For usage instructions and demonstrations of the AI Worker, check out:
- [Tutorial Videos](https://www.youtube.com/@ROBOTISOpenSourceTeam)
### 3. Build the Workspace
Navigate to your ROS 2 workspace directory and build the package using `colcon`:
```bash
cd ~/${WORKSPACE}
colcon build --symlink-install --packages-select physical_ai_tools
```

To access datasets and pre-trained models for our open-source platforms, see:
- [AI Models & Datasets](https://huggingface.co/ROBOTIS)
### 4. Source the Workspace
After the build completes successfully, source the setup script:
```bash
source ~/${WORKSPACE}/install/setup.bash
```
### 5. Install Packages
Make the packages available as a Python module in your current environment:
```bash
cd ~/${WORKSPACE}/src/physical_ai_tools/data_collector
pip install .
```
```bash
cd ~/${WORKSPACE}/src/physical_ai_tools/policy_to_trajectory
pip install .
```
## Record LeRobot Datasets

### 1. Authenticate with Hugging Face
Make sure you've logged in using a **write-access token** generated from your [Hugging Face settings](https://huggingface.co/settings/tokens):
```bash
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```
Store your Hugging Face username in a variable:
```bash
HF_USER=$(huggingface-cli whoami | head -n 1)
echo $HF_USER
```

---
### 2. Check Your Camera Indexes

To include image data, check which camera indexes are available on your system:
```bash
cd ~/${WORKSPACE}/src/physical_ai_tools/lerobot
```
```bash
python lerobot/common/robot_devices/cameras/opencv.py \
    --images-dir outputs/images_from_opencv_cameras
```
Example output:

```text
Linux detected. Finding available camera indices through scanning '/dev/video*' ports
Camera found at index 0
Camera found at index 1
Camera found at index 2
...
Saving images to outputs/images_from_opencv_cameras
```
Check the saved images in `outputs/images_from_opencv_cameras` to determine which index corresponds to which physical camera:
```text
camera_00_frame_000000.png
camera_01_frame_000000.png
...
```
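With many cameras attached, the mapping can also be read straight from the saved filenames. The sketch below is a hypothetical helper (not part of LeRobot) that parses the `camera_XX_frame_XXXXXX.png` pattern shown above:

```python
import re

def camera_index(filename):
    """Extract the camera index from a saved frame filename.

    Filenames follow the camera_XX_frame_XXXXXX.png pattern shown above;
    returns None for files that do not match.
    """
    match = re.match(r"camera_(\d+)_frame_\d+\.png", filename)
    return int(match.group(1)) if match else None

print(camera_index("camera_00_frame_000000.png"))  # 0
print(camera_index("camera_01_frame_000000.png"))  # 1
```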

Once identified, update the camera indexes in the `"ffw"` robot configuration file:

```
lerobot/common/robot_devices/robots/configs.py
```

Modify it like so:
```python
@RobotConfig.register_subclass("ffw")
@dataclass
class FFWRobotConfig(ManipulatorRobotConfig):
    [...]
    cameras: dict[str, CameraConfig] = field(
        default_factory=lambda: {
            "cam_head": OpenCVCameraConfig(
                camera_index=0,  # To be changed
                fps=30,
                width=640,
                height=480,
            ),
            "cam_wrist_1": OpenCVCameraConfig(
                camera_index=1,  # To be changed
                fps=30,
                width=640,
                height=480,
            ),
            "cam_wrist_2": OpenCVCameraConfig(
                camera_index=2,  # To be changed
                fps=30,
                width=640,
                height=480,
            ),
        }
    )

    mock: bool = False
```
---

### 3. Record Your Dataset
Launch the ROS 2 data collector node:
```bash
# For OpenManipulator-X
ros2 launch data_collector data_collector.launch.py mode:=omx
# For AI Worker
ros2 launch data_collector data_collector.launch.py mode:=worker
```

Open a new terminal and navigate to the `lerobot` directory:
```bash
cd ~/${WORKSPACE}/src/physical_ai_tools/lerobot
```
Run the following command to start recording your Hugging Face dataset:
```bash
python lerobot/scripts/control_robot.py \
    --robot.type=ffw \
    --control.type=record \
    --control.single_task="pick and place objects" \
    --control.fps=30 \
    --control.repo_id=${HF_USER}/ffw_test \
    --control.tags='["tutorial"]' \
    --control.episode_time_s=20 \
    --control.reset_time_s=10 \
    --control.num_episodes=2 \
    --control.push_to_hub=true \
    --control.use_ros=true \
    --control.play_sounds=false
```
💡 Make sure to replace `${HF_USER}` with your actual Hugging Face username.

💡 If you don't want to push your dataset to the Hugging Face Hub, set `--control.push_to_hub=false`.
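For repeated recording sessions it can help to keep these options in one place and generate the flag list programmatically. A minimal sketch, illustrative only: `build_record_args` is not part of LeRobot, and the flag names simply mirror the command above.

```python
def build_record_args(hf_user, dataset_name, **options):
    """Assemble the --control.* flag list for control_robot.py above.

    Only string formatting happens here; nothing is executed.
    """
    args = [
        "--robot.type=ffw",
        "--control.type=record",
        f"--control.repo_id={hf_user}/{dataset_name}",
    ]
    for key, value in options.items():
        args.append(f"--control.{key}={value}")
    return args

# Rebuild the example command's flags from plain keyword arguments.
args = build_record_args(
    "my-user",
    "ffw_test",
    single_task="pick and place objects",
    fps=30,
    episode_time_s=20,
    reset_time_s=10,
    num_episodes=2,
    push_to_hub="true",
)
print(" \\\n    ".join(args))
```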

---
### 🔧 Key Parameters to Customize

To create your own dataset, you only need to modify the following five options:

- **`--control.repo_id`**
  The Hugging Face dataset repository ID in the format `<username>/<dataset_name>`. This is where your dataset will be saved and optionally pushed to the Hugging Face Hub.

- **`--control.single_task`**
  The name of the task you're performing (e.g., "pick and place objects").

- **`--control.episode_time_s`**
  Duration (in seconds) to record each episode.

- **`--control.reset_time_s`**
  Time allocated (in seconds) for resetting your environment between episodes.

- **`--control.num_episodes`**
  Total number of episodes to record for the dataset.

Of course, you can modify other parameters as needed to better suit your use case.
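As a sanity check before a long session, the active recording time implied by these options is roughly `num_episodes * (episode_time_s + reset_time_s)`. With the example values above:

```python
# Values from the example record command above.
episode_time_s = 20   # recording time per episode
reset_time_s = 10     # environment reset time between episodes
num_episodes = 2

# Approximate active session time (ignores startup and upload time).
total_s = num_episodes * (episode_time_s + reset_time_s)
print(total_s)  # 60 seconds
```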

---
🎉 All set — now you're ready to create your dataset!

📺 Need a walkthrough? Check out this [video tutorial on YouTube](https://www.youtube.com/watch?v=n_Ljp_xuFEM) to see the full process of recording a dataset with LeRobot.
### 4. Visualize Your Dataset

You can also view your recorded dataset through a local web server. This is useful for quickly checking the collected data.

Run the following command:

```bash
python lerobot/scripts/visualize_dataset_html.py \
    --repo-id ${HF_USER}/ffw_test
```

🖥️ This will start a local web server and open your dataset in a browser-friendly format.
## Train a Policy on Your Data

Run the following command to start training a policy using your dataset:

```bash
python lerobot/scripts/train.py \
    --dataset.repo_id=${HF_USER}/ffw_test \
    --policy.type=act \
    --output_dir=outputs/train/act_ffw_test \
    --job_name=act_ffw_test \
    --policy.device=cuda \
    --wandb.enable=true
```

(Optional) You can upload the latest checkpoint to the Hugging Face Hub with the following command:

```bash
huggingface-cli upload ${HF_USER}/act_ffw_test \
    outputs/train/act_ffw_test/checkpoints/last/pretrained_model
```
## Evaluation

### 1. Launch the ROS 2 action_to_trajectory and topic_to_observation Nodes

```bash
# For OpenManipulator-X
ros2 launch policy_to_trajectory policy_to_trajectory.launch.py mode:=omx
# For AI Worker
ros2 launch policy_to_trajectory policy_to_trajectory.launch.py mode:=worker
```
### 2. Evaluate Your Policy

You can evaluate the policy on the robot using the `record` mode, which allows you to visualize the evaluation later on.

```bash
python lerobot/scripts/control_robot.py \
    --robot.type=ffw \
    --control.type=record \
    --control.single_task="pick and place objects" \
    --control.fps=30 \
    --control.repo_id=${HF_USER}/eval_ffw_test \
    --control.tags='["tutorial"]' \
    --control.episode_time_s=20 \
    --control.reset_time_s=10 \
    --control.num_episodes=2 \
    --control.push_to_hub=true \
    --control.use_ros=true \
    --control.policy.path=outputs/train/act_ffw_test/checkpoints/last/pretrained_model \
    --control.play_sounds=false
```
### 3. Visualize Evaluation

You can then visualize the evaluation results using the following command:

```bash
python lerobot/scripts/visualize_dataset_html.py \
    --repo-id ${HF_USER}/eval_ffw_test
```

To use the Docker image for running ROS packages and Physical AI tools with the AI Worker, visit:
- [Docker Images](https://hub.docker.com/r/robotis/ros/tags)