Thank you for posting this. I'll move it to our Discussions section. Here is a summary of things to consider:

1. Physics and Simulation Configuration — robot and gripper articulation, gain tuning
2. Reward Function and Task Design — reward engineering; see the example reward function structure below
3. rsl_rl and Training Hyperparameters — parallel environments, hyperparameter tuning, debugging NaNs and divergence
4. Additional Best Practices
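As a rough illustration of the reward-function-structure point: in Isaac Lab's manager-based workflow, the lift reward is a weighted sum of shaping terms (reaching, lifting, goal tracking) plus small penalties. The sketch below follows the stock Franka lift task's terms; the weights and `mdp` helper functions are taken from that task and are illustrative starting points, not values tuned for the UR10e (import paths assume the `isaaclab` package layout; older releases use `omni.isaac.lab*`):

```python
from isaaclab.managers import RewardTermCfg as RewTerm
from isaaclab.managers import SceneEntityCfg
from isaaclab.utils import configclass

# reward helpers shipped with the stock lift task
import isaaclab_tasks.manager_based.manipulation.lift.mdp as mdp


@configclass
class RewardsCfg:
    """Weighted reward terms for a lift task (weights are illustrative)."""

    # shaping: pull the end-effector towards the object
    reaching_object = RewTerm(func=mdp.object_ee_distance, params={"std": 0.1}, weight=1.0)

    # main task signal: object lifted above a minimal height
    lifting_object = RewTerm(func=mdp.object_is_lifted, params={"minimal_height": 0.04}, weight=15.0)

    # track the commanded goal pose once the object is lifted
    object_goal_tracking = RewTerm(
        func=mdp.object_goal_distance,
        params={"std": 0.3, "minimal_height": 0.04, "command_name": "object_pose"},
        weight=16.0,
    )

    # small penalties: keep action changes and joint velocities bounded so the
    # return cannot be dominated by large negative terms
    action_rate = RewTerm(func=mdp.action_rate_l2, weight=-1e-4)
    joint_vel = RewTerm(
        func=mdp.joint_vel_l2,
        params={"asset_cfg": SceneEntityCfg("robot")},
        weight=-1e-4,
    )
```

If the penalty weights end up orders of magnitude larger than the positive terms, the total reward can trend strongly negative, which matches the symptom described in the question below.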
Hi,
I'm working on a custom lift task in Isaac Lab using the UR10e robotic arm equipped with the Robotiq 2F-140 gripper. Below, I summarize the steps I've followed so far and describe the issue I’m currently encountering.
Steps Completed
1. USD Asset Setup
We prepared the USD asset for the UR10e with the Robotiq 2F-140, setting up its `physics`, `sensors`, and `gripper`. Preview of the setup:
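As a quick sanity check on the converted asset, it can help to confirm that the arm and gripper joints ended up in a single articulation before building the Isaac Lab configs. A minimal sketch using the USD Python API (the file name is a placeholder for the actual asset path):

```python
from pxr import Usd, UsdPhysics

# Placeholder path; point this at the combined UR10e + Robotiq 2F-140 USD.
stage = Usd.Stage.Open("ur10e_robotiq_2f140.usd")

for prim in stage.Traverse():
    # There should be exactly one articulation root covering arm + gripper.
    if prim.HasAPI(UsdPhysics.ArticulationRootAPI):
        print("articulation root:", prim.GetPath())
    # The six UR10e joints and the Robotiq drive joint(s) should all be listed here.
    if prim.IsA(UsdPhysics.RevoluteJoint) or prim.IsA(UsdPhysics.PrismaticJoint):
        print("joint:", prim.GetPath())
```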
2. Custom Configuration
Robot Configuration
We created a custom `ArticulationCfg` for the UR10e with the Robotiq 2F-140.
Lift Environment Configuration
We adapted the lift environment configuration to use this robot. Illustrative sketches of both configs follow below.
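Roughly, the two pieces follow the shape sketched below. These are illustrative sketches rather than our exact files: the USD path, joint names, limits, and gains are placeholders, and the import paths assume the `isaaclab` package layout (older releases use `omni.isaac.lab*`). First, the articulation config:

```python
import isaaclab.sim as sim_utils
from isaaclab.actuators import ImplicitActuatorCfg
from isaaclab.assets import ArticulationCfg

UR10E_ROBOTIQ_2F140_CFG = ArticulationCfg(
    spawn=sim_utils.UsdFileCfg(
        usd_path="ur10e_robotiq_2f140.usd",  # placeholder path to the combined asset
        rigid_props=sim_utils.RigidBodyPropertiesCfg(
            disable_gravity=False,
            max_depenetration_velocity=5.0,
        ),
        articulation_props=sim_utils.ArticulationRootPropertiesCfg(
            enabled_self_collisions=False,
            solver_position_iteration_count=12,
            solver_velocity_iteration_count=1,
        ),
    ),
    init_state=ArticulationCfg.InitialStateCfg(
        joint_pos={
            "shoulder_pan_joint": 0.0,
            "shoulder_lift_joint": -1.712,
            "elbow_joint": 1.712,
            "wrist_1_joint": 0.0,
            "wrist_2_joint": 0.0,
            "wrist_3_joint": 0.0,
            "finger_joint": 0.0,  # Robotiq 2F-140 drive joint
        },
    ),
    actuators={
        # the six UR10e arm joints
        "arm": ImplicitActuatorCfg(
            joint_names_expr=["shoulder_.*", "elbow_joint", "wrist_.*"],
            effort_limit=330.0,
            velocity_limit=2.0,
            stiffness=800.0,
            damping=40.0,
        ),
        # Robotiq 2F-140 drive joint (mimic joints follow it in the asset)
        "gripper": ImplicitActuatorCfg(
            joint_names_expr=["finger_joint"],
            effort_limit=200.0,
            velocity_limit=0.5,
            stiffness=2000.0,
            damping=100.0,
        ),
    },
)
```

And the environment config, following the pattern of the stock Franka lift config, where the robot, action terms, and end-effector body are swapped for the new arm (class and body names are again illustrative):

```python
from isaaclab.utils import configclass

import isaaclab_tasks.manager_based.manipulation.lift.mdp as mdp
from isaaclab_tasks.manager_based.manipulation.lift.lift_env_cfg import LiftEnvCfg

# hypothetical module holding the articulation config sketched above
from ur10e_robotiq_cfg import UR10E_ROBOTIQ_2F140_CFG


@configclass
class UR10eLiftEnvCfg(LiftEnvCfg):
    def __post_init__(self):
        super().__post_init__()
        # swap the Franka for the UR10e + Robotiq 2F-140
        self.scene.robot = UR10E_ROBOTIQ_2F140_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
        # arm joints are position-controlled
        self.actions.arm_action = mdp.JointPositionActionCfg(
            asset_name="robot",
            joint_names=["shoulder_.*", "elbow_joint", "wrist_.*"],
            scale=0.5,
            use_default_offset=True,
        )
        # gripper uses a binary open/close action on the Robotiq drive joint
        self.actions.gripper_action = mdp.BinaryJointPositionActionCfg(
            asset_name="robot",
            joint_names=["finger_joint"],
            open_command_expr={"finger_joint": 0.0},
            close_command_expr={"finger_joint": 0.7},
        )
        # end-effector body used by the pose command; the ee_frame FrameTransformer
        # sensor must also point at the new end-effector prim, otherwise the
        # reaching reward uses the wrong frame
        self.commands.object_pose.body_name = "wrist_3_link"
```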
❗ Current Issue
We adapted the default Franka Lift task from Isaac Lab to work with the UR10e + Robotiq 2F-140 setup by modifying the robot and environment configurations as shown above.
We trained the policy using the `rsl_rl` library with 4096 parallel environments. While the simulation runs smoothly, the robot consistently fails to complete the lift task. In particular:

- The reward consistently trends toward large negative values during training.
- The training loss rapidly increases and appears to diverge toward infinity, with no sign of convergence.
Screenshot:
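For context, the rsl_rl side is driven by a runner configuration along these lines; the sketch uses the `isaaclab_rl.rsl_rl` wrapper classes (older releases expose them under `omni.isaac.lab_tasks.utils.wrappers.rsl_rl`) and the numbers mirror the stock Franka lift agent rather than our exact settings. The learning rate, entropy coefficient, and clip parameter are the usual first knobs to lower when the loss diverges:

```python
from isaaclab.utils import configclass
from isaaclab_rl.rsl_rl import (
    RslRlOnPolicyRunnerCfg,
    RslRlPpoActorCriticCfg,
    RslRlPpoAlgorithmCfg,
)


@configclass
class UR10eLiftPPORunnerCfg(RslRlOnPolicyRunnerCfg):
    num_steps_per_env = 24
    max_iterations = 1500
    save_interval = 50
    experiment_name = "ur10e_lift"
    empirical_normalization = False
    policy = RslRlPpoActorCriticCfg(
        init_noise_std=1.0,
        actor_hidden_dims=[256, 128, 64],
        critic_hidden_dims=[256, 128, 64],
        activation="elu",
    )
    algorithm = RslRlPpoAlgorithmCfg(
        value_loss_coef=1.0,
        use_clipped_value_loss=True,
        clip_param=0.2,
        entropy_coef=0.006,
        num_learning_epochs=5,
        num_mini_batches=4,
        learning_rate=1.0e-4,  # lowering this is a common first step when training diverges
        schedule="adaptive",
        gamma=0.98,
        lam=0.95,
        desired_kl=0.01,
        max_grad_norm=1.0,
    )
```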
If you've worked with similar setups or have relevant tips, any suggestions would be much appreciated!
Thanks in advance for your support!