A versatile teleoperation system for dexterous robotic hands, supporting multiple input devices including Apple Vision Pro, SpaceMouse, keyboard, and joystick. The system enables precise control of an Allegro Hand in both armless (floating) configuration and mounted on an xArm6 robotic arm.
- Apple Vision Pro - High-end hand tracking
- Keyboard - Accessible control option
- SpaceMouse - 3D control for precision manipulation
- Joystick - Alternative controller option
- Meta Quest (Work in Progress)
Tested on Ubuntu 20.04 with CUDA 11.7 and 12.1.
# Clone the repository
git clone https://github.com/DavidLXu/Dexhand_VisionPro_Teleop.git
cd Dexhand_VisionPro_Teleop
# Create and activate conda environment
conda create -n dexgrasp python=3.8
conda activate dexgrasp
# Install IsaacGym
cd path_to_isaacgym/python
pip install -e .
# Install this repository
cd DexTeleop
bash install.sh
# Install dependencies for joystick and space mouse
pip install pygame pyspacemouse
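# Optional sanity check (not from the official instructions): Isaac Gym
# should import without errors once the steps above succeed.
python -c "from isaacgym import gymapi; print('Isaac Gym imports OK')"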
On the Apple Vision Pro, install Tracking Streamer from the App Store. Launch the application and obtain the IP address, then update the `vision_pro_ip` parameter in `dexhand_teleop.yaml` with this address.
To use different teleoperation devices, modify the `teleop_device` parameter in `dexhand_teleop.yaml`:
env:
  teleop_device: "vision_pro"  # Options: "vision_pro", "keyboard", "spacemouse", "joystick"
  vision_pro_ip: "192.168.100.17"  # IP address of Vision Pro (only needed for Vision Pro mode)
Navigate to the dexgrasp directory:
cd DexTeleop/dexgrasp
Run the teleoperation using:
# For convenience, use the run script
./run_teleop.sh
# Or run manually:
# For armless (floating) Allegro Hand
python run_online.py --task DexhandTeleop --algo ppo --config teleop_policy.yaml
# For Allegro Hand mounted on xArm6
python run_online.py --task DexhandTeleop --algo ppo --config teleop_policy.yaml --use_xarm6
Note: While we do not perform reinforcement learning, we leverage UniGraspTransformer's RL environment codebase to enable potential future extensions.
The system supports recording and replaying teleoperation trajectories:
- During teleoperation, press `1` to start recording
- Perform your desired manipulation sequence
- Press `2` to stop recording and save the trajectory
- Trajectories are saved in the `recorded_trajectories` directory with timestamp-based filenames
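The exact layout of a recorded file depends on the recording code; a quick way to inspect one (the glob pattern and `.json` extension follow the replay usage below and are otherwise illustrative) is:

```python
import glob
import json

# List recordings; timestamp-based filenames sort chronologically.
files = sorted(glob.glob("recorded_trajectories/*.json"))
print("Found recordings:", files)

if files:
    with open(files[-1]) as f:
        trajectory = json.load(f)
    # Print the top-level structure without assuming a particular schema.
    print(type(trajectory).__name__)
    print(list(trajectory) if isinstance(trajectory, dict) else len(trajectory))
```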
There are two types of replay. The first type simply replays the recorded trajectory without dynamics.
# For convenience, use the replay script
./run_replay.sh [path/to/trajectory.json]
# Or run manually:
python tasks/replay_trajectory.py --trajectory [path/to/trajectory.json]
The second type of replay simulates dynamics during playback. In `dexhand_teleop.yaml`, set `replay_mode: True`; the recorded trajectory is then played back in the teleoperation environment:
./run_teleop.sh
- Hand Tracking: the Apple Vision Pro Tracking Streamer provides hand keypoints
- Retargeting: PyBullet IK-based retargeting solves joint values for the Allegro URDF (see the sketch below)
- Simulation Control: the Isaac Gym RL environment (this repo) handles force control of the armless hand or control of the xArm6
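A minimal sketch of the retargeting step, to make the pipeline concrete. This is not the repo's exact code; the URDF path and fingertip link indices below are placeholders.

```python
import pybullet as p

p.connect(p.DIRECT)
hand = p.loadURDF("allegro_hand.urdf", useFixedBase=True)  # placeholder path

FINGERTIP_LINKS = [4, 9, 14, 19]  # placeholder link indices; depends on the URDF

def retarget(fingertip_targets):
    """Map four tracked fingertip positions (hand frame) to Allegro joint angles."""
    # calculateInverseKinematics2(body, endEffectorLinkIndices, targetPositions)
    # solves IK for several end effectors at once, one per finger chain.
    return p.calculateInverseKinematics2(hand, FINGERTIP_LINKS, fingertip_targets)
```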
If we use the original Allegro URDF, finger movements cause the floating palm to rotate due to conservation of angular momentum.
Left: original palm inertia. Right: increased palm inertia.
As a workaround, we increase the palm's inertia, which makes the floating-hand control noticeably more stable.
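In URDF terms the trick amounts to editing the palm link's `<inertial>` element. The link name and numbers below are illustrative only, not the values actually used in this repo:

```xml
<!-- Illustrative: a heavier, higher-inertia palm absorbs the angular momentum
     generated by fast finger motion, so the floating base rotates less. -->
<link name="palm_link">
  <inertial>
    <mass value="2.0"/>
    <inertia ixx="0.05" ixy="0" ixz="0" iyy="0.05" iyz="0" izz="0.05"/>
  </inertial>
</link>
```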
For the Allegro Hand mounted on the xArm6, the 6-DoF arm admits multiple joint configurations for the same end-effector pose.
Left: "stretched" arm configuration. Right: "twisted" arm configuration.
We typically want the first, more "stretched" configuration shown in the GIF, but the IK solver may return the second, more "twisted" one, in which the pitch rotation is constrained. A full solution would use a 7-DoF arm (e.g. Franka) together with more advanced trajectory planning, which is beyond the scope of this repo.
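If you want to stay with the 6-DoF xArm6, one common mitigation (not necessarily what this repo does) is to bias the IK solver toward the preferred branch with null-space rest poses, e.g. in PyBullet:

```python
import pybullet as p

p.connect(p.DIRECT)
robot = p.loadURDF("xarm6_allegro.urdf", useFixedBase=True)  # placeholder path

EE_LINK = 6   # placeholder end-effector link index
NUM_DOF = 22  # e.g. 6 arm + 16 hand joints; must match the loaded URDF
REST_POSE = [0.0] * NUM_DOF  # set the arm entries near the "stretched" posture

def solve_arm_ik(target_pos, target_quat):
    # Null-space IK: joint limits plus rest poses pull the solution toward
    # the preferred "stretched" branch instead of the "twisted" one.
    return p.calculateInverseKinematics(
        robot, EE_LINK, target_pos, target_quat,
        lowerLimits=[-3.14] * NUM_DOF,
        upperLimits=[3.14] * NUM_DOF,
        jointRanges=[6.28] * NUM_DOF,
        restPoses=REST_POSE,
    )
```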
Interaction with various objects: in `dexhand_teleop.yaml`, set `use_object` to `True`. You can use your own object by modifying `object_asset_path` and `object_asset_file`, for example:
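The paths below are placeholders; whether the keys sit under `env:` should mirror the layout of your `dexhand_teleop.yaml`:

```yaml
env:
  use_object: True
  object_asset_path: "assets/objects"   # placeholder directory
  object_asset_file: "my_object.urdf"   # placeholder file name
```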
In the Isaac Gym viewer's initial perspective, +x points left, +y points toward the viewer, and +z points up.
q[-z] w[-y] e[+z] u[+qy] i[+qx] o[-qy]
a[+x] s[+y] d[-x] f[grsp] h[rls] j[+qz] k[-qx] l[-qz]
- `q w e a s d` control translation
- `u i o j k l` control rotation
- `f` to grasp and `h` to release fingers heuristically
Note: May encounter gimbal lock. Refer to the `KeyboardTeleopDevice` class for implementation details.
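The gimbal-lock caveat comes from accumulating the keyboard increments as Euler angles. A minimal illustrative sketch of that pattern (not the repo's actual class; the step sizes are made up):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

position = np.zeros(3)       # x, y, z target offset
euler = np.zeros(3)          # roll, pitch, yaw accumulated from key presses
STEP, ANG_STEP = 0.01, 0.05  # metres / radians per key press (illustrative)

def on_key(key):
    """Map a key press to a delta pose, following the key map above."""
    if key == "w":
        position[1] -= STEP   # w -> -y translation
    elif key == "i":
        euler[0] += ANG_STEP  # i -> +qx rotation
    # ... remaining bindings omitted ...
    # Near pitch = +/-90 deg two Euler axes align and a rotational DoF is
    # lost, which is exactly the gimbal-lock issue noted above.
    return position, R.from_euler("xyz", euler).as_quat()
```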
Control | Action |
---|---|
Left stick | XY translation |
X button | -Z translation |
Y button | +Z translation |
Right stick | Roll and pitch rotation |
Shoulder triggers | Yaw rotation |
A button | Grasp |
B button | Release |
Note: May encounter gimbal lock. Refer to the `JoystickTeleopDevice` class for implementation details.
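For reference, a minimal pygame joystick read loop (axis and button indices vary between controllers; the ones below are illustrative):

```python
import pygame

pygame.init()
pygame.joystick.init()
js = pygame.joystick.Joystick(0)

while True:
    pygame.event.pump()                      # refresh joystick state
    lx, ly = js.get_axis(0), js.get_axis(1)  # left stick -> XY translation
    rx, ry = js.get_axis(3), js.get_axis(4)  # right stick -> roll / pitch
    grasp, release = js.get_button(0), js.get_button(1)  # A / B buttons
    print(lx, ly, rx, ry, grasp, release)
```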
Control | Action |
---|---|
Mouse Cap | Delta 6D pose |
Left button | Grasp |
Right button | Release |
Note: May encounter gimbal lock. Refer to the `SpaceMouseTeleop` class for implementation details.
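For reference, a minimal pyspacemouse read loop (illustrative; requires the permissions set up below):

```python
import time
import pyspacemouse

# open() returns a device handle on success, otherwise a falsy value.
if pyspacemouse.open():
    while True:
        state = pyspacemouse.read()
        # x/y/z and roll/pitch/yaw are normalized axis readings; buttons is a
        # list of button states (left -> grasp, right -> release here).
        print(state.x, state.y, state.z, state.roll, state.pitch, state.yaw, state.buttons)
        time.sleep(0.01)
```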
Install all the dependencies and set up the permissions:
sudo apt-get install libhidapi-dev
Change to the superuser:
sudo su
Run:
echo 'KERNEL=="hidraw*", SUBSYSTEM=="hidraw", MODE="0664", GROUP="plugdev"' > /etc/udev/rules.d/99-hidraw-permissions.rules
Exit to return to your user account, then add yourself to the `plugdev` group:
exit
sudo usermod -aG plugdev $USER
newgrp plugdev
Then restart your computer to load all the changes. Alternatively, follow the pyspacemouse setup instructions to install all the dependencies.
- Bridging Apple Vision Pro Tracking Streamer and PyBullet IK solver
- Support for URDF: Armless (floating) Allegro hand
- Support for URDF: Allegro hand mounted on xArm
- Support for URDF: Franka Arm and LEAP Hand
- Support for Bi-hands and Bi-arms
- Support for INPUT source: Joystick
- Support for INPUT source: 3D Mouse
- Support for INPUT source: Keyboard
- Support for INPUT source: Meta Quest
- Support for trajectory recording and replay