🤖 Pixels to Actions

The aim of this project is to develop a pipeline for deploying and testing different Vision-Language-Action (VLA) models that control a robotic arm in MuJoCo simulation to perform manipulation tasks.

Currently, I am not using the Hugging Face LeRobot codebase, but I plan to either test LeRobot in a separate project or integrate it into this one.

Key elements of the project

  • The main simulation environment is MuJoCo.
  • A ROS2 integration uses an in-memory datastore (Redis) to bridge the MuJoCo simulation with ROS2 and RViz2 (see the first sketch after this list).
  • The robot can be teleoperated or controlled by a VLA model.
  • The VLA inference script is inspired by NanoLLM (see the second sketch after this list).
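To make the Redis bridge idea concrete, here is a minimal sketch of a ROS2 node that polls Redis for joint positions written by the MuJoCo loop and republishes them as a JointState for RViz2. This is not the repo's actual code: the Redis key "mujoco/qpos", the JSON payload layout, the joint names, and the polling rate are all assumptions.

```python
# Minimal sketch of the MuJoCo -> Redis -> ROS2/RViz2 bridge (not the repo's code).
# Assumption: the MuJoCo loop writes joint positions to the Redis key
# "mujoco/qpos" as a JSON list of floats; key name and joint names are hypothetical.
import json

import redis
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState


class RedisJointBridge(Node):
    """Polls Redis for simulated joint positions and republishes them for RViz2."""

    def __init__(self):
        super().__init__("redis_joint_bridge")
        self.db = redis.Redis(host="localhost", port=6379)
        self.pub = self.create_publisher(JointState, "joint_states", 10)
        self.joint_names = [f"joint_{i}" for i in range(6)]  # hypothetical 6-DoF arm names
        self.timer = self.create_timer(0.01, self.tick)  # poll at ~100 Hz

    def tick(self):
        raw = self.db.get("mujoco/qpos")  # hypothetical key written by the sim loop
        if raw is None:
            return  # sim has not published yet
        msg = JointState()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.name = self.joint_names
        msg.position = [float(v) for v in json.loads(raw)]
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(RedisJointBridge())


if __name__ == "__main__":
    main()
```

Using Redis as the shared datastore keeps the MuJoCo process free of ROS2 dependencies; the sim only needs a Redis client, and any number of ROS2 nodes can consume its state.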
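The pixels-to-actions loop itself can be sketched as follows: render a camera frame from the simulator, pass it with a language instruction to the VLA model, and apply the predicted action. This is a hedged sketch, not the repo's inference script: vla_policy() is a hypothetical stand-in for the model call (the real script is inspired by NanoLLM), and the scene path, image size, and actuator mapping are placeholders.

```python
# Sketch of the pixels-to-actions control loop (not the repo's inference script).
# vla_policy() is a hypothetical stand-in for the VLA model; "scene.xml" and the
# one-to-one action-to-actuator mapping are assumptions.
import mujoco
import numpy as np


def vla_policy(image, instruction):
    """Hypothetical VLA call: image + language instruction -> action vector."""
    return np.zeros(6)  # no-op stub; the real model predicts joint/EE commands


model = mujoco.MjModel.from_xml_path("scene.xml")  # placeholder scene file
data = mujoco.MjData(model)
renderer = mujoco.Renderer(model, height=224, width=224)

instruction = "pick up the red cube"  # example task prompt
while data.time < 10.0:
    renderer.update_scene(data)        # pass camera=... if the scene defines one
    pixels = renderer.render()         # (224, 224, 3) uint8 frame
    action = vla_policy(pixels, instruction)
    data.ctrl[: action.size] = action  # assumes actuators map 1:1 to the action
    mujoco.mj_step(model, data)
```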

Scripts

  • custom_teleop/src/ur5e_gripper_control : teleoperation of the UR5e arm and gripper using an IK solver driven by a PS4 joystick controller
  • vla_project/scripts/teleop_joystick.py : script to control the robot arm with a joystick (a sketch of the joystick-to-end-effector mapping follows below)
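The joystick teleoperation can be pictured as mapping stick deflections to Cartesian end-effector deltas, which an IK solver then converts to joint angles. Below is a minimal sketch under assumptions that go beyond the source: pygame for PS4 input, hypothetical axis indices and scaling, and a no-op solve_ik() standing in for whatever IK solver the project uses.

```python
# Sketch of joystick-to-end-effector teleoperation (not the repo's script).
# Assumptions: pygame reads the PS4 controller; axis indices and the step
# scale are hypothetical; solve_ik() is a placeholder for the real IK solver.
import numpy as np
import pygame


def solve_ik(target_pos, q):
    """Stand-in for the project's IK solver: the real script would compute
    joint angles placing the end effector at target_pos."""
    return q  # no-op stub so the sketch runs without a kinematics model


def main():
    pygame.init()
    pygame.joystick.init()
    if pygame.joystick.get_count() == 0:
        raise SystemExit("No joystick connected")
    pad = pygame.joystick.Joystick(0)  # first connected controller

    target = np.array([0.4, 0.0, 0.3])  # hypothetical initial EE position (m)
    q = np.zeros(6)                     # hypothetical 6-DoF configuration
    step = 0.002                        # meters per tick at full stick deflection

    clock = pygame.time.Clock()
    while True:
        pygame.event.pump()  # refresh joystick state
        # Left stick -> x/y, right stick vertical -> z (axis layout varies by OS).
        dx = pad.get_axis(0) * step
        dy = -pad.get_axis(1) * step
        dz = -pad.get_axis(3) * step
        target += np.array([dx, dy, dz])
        q = solve_ik(target, q)  # send q to the sim / robot here
        clock.tick(100)          # ~100 Hz teleop loop


if __name__ == "__main__":
    main()
```

Driving the end-effector target rather than individual joints keeps the mapping intuitive for the operator and leaves joint-level coordination to the IK solver.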

About

Testing and developing architecture around Vision Language Action Models
