RSL_RL training performance worse than jax #106
Unanswered
MankaranSingh asked this question in Q&A

Hi! The Brax-based training works really well for the given locomotion examples, but when training with rsl_rl, the robots mostly just learn to output very fast actions and fall over. Is rsl_rl-based training tested, or is the focus more on Brax? I would love for the rsl_rl examples to work, because they contain some nice RNN-based actor-critic implementations.
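For context, the failure mode I'm seeing (very fast, jittery actions) is the kind of thing locomotion reward configs usually tame with an action-rate penalty. A minimal sketch of such a term in JAX, just to illustrate what I mean (the function name and the 0.1 weight are illustrative, not from this repo):

```python
import jax.numpy as jp

def action_rate_cost(action: jp.ndarray, prev_action: jp.ndarray) -> jp.ndarray:
    """Quadratic penalty on the difference between consecutive actions.

    Subtracting a scaled version of this from the reward discourages the
    very fast, jittery actions that make the robot fall over.
    """
    return jp.sum(jp.square(action - prev_action))

# Hypothetical usage inside a reward computation:
# reward = task_reward - 0.1 * action_rate_cost(action, prev_action)
```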
Replies: 2 comments

-
Hi @MankaranSingh, I would refer to the whitepaper and the RL configs, where we show RSL-RL examples for a limited set of envs. The RSL-RL implementation is there to show that we are somewhat agnostic to the RL library, but many environments (reward configs, hyperparameters) were not tuned specifically for RSL-RL; it was easier for us to sweep over Brax hyperparameters. If you'd like to submit better hyperparameters for RSL-RL, please feel free!
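If it helps, such a sweep can be as simple as a grid over the usual PPO knobs. A minimal sketch (the `train_rsl_rl` entry point and the exact config keys are hypothetical here; RSL-RL's train_cfg schema varies across versions, so check the version you have installed):

```python
import itertools

# Hypothetical grid sweep over common PPO hyperparameters for RSL-RL.
learning_rates = [3e-4, 1e-3]
entropy_coefs = [0.0, 0.005, 0.01]
steps_per_env = [24, 50]

for lr, ent, steps in itertools.product(learning_rates, entropy_coefs, steps_per_env):
    train_cfg = {
        "algorithm": {
            "learning_rate": lr,   # PPO step size
            "entropy_coef": ent,   # exploration pressure
            "gamma": 0.99,         # discount factor
            "lam": 0.95,           # GAE lambda
        },
        "num_steps_per_env": steps,  # rollout length per iteration
    }
    # train_rsl_rl(env_name=..., train_cfg=train_cfg)  # hypothetical entry point
    print(f"sweep point: lr={lr}, entropy_coef={ent}, steps_per_env={steps}")
```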
-
I'm hitting the same issue.