A set of standard implementations of common algorithms for use with rlgym-learn:
- PPO agent controller
- Flexible metrics logging
- File-based configuration
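To get a feel for what the package exposes, a small probe like the sketch below lists the public names in the PPO submodule. The module path `rlgym_learn_algos.ppo` is an assumption inferred from the pip package name and the repo layout described in the credits below, not confirmed API.

```python
# Hedged probe: list what the PPO submodule exports. The module path
# rlgym_learn_algos.ppo is assumed from the pip package name and the
# repo layout (ppo/, util/) mentioned in the credits section.
import importlib

try:
    ppo = importlib.import_module("rlgym_learn_algos.ppo")
except ImportError:
    raise SystemExit("rlgym-learn-algos is not installed; see the install steps below")

# Public names, e.g. the PPO agent controller and its actor/critic pieces.
print([name for name in dir(ppo) if not name.startswith("_")])
```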
- Install RLGym via `pip install rlgym`. If you're here for Rocket League, you can use `pip install rlgym[rl-rlviser]` instead to get the RLGym API as well as the Rocket League / Sim submodules and rlviser support.
- If you would like to use a GPU, install PyTorch with CUDA.
- Install rlgym-learn via `pip install rlgym-learn`.
- Install this project via `pip install rlgym-learn-algos`.
- If pip installing fails at first, install Rust by following the instructions here.
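After installing, you can sanity-check the environment with a short probe like the sketch below. It assumes only that the importable module names mirror the pip package names (`rlgym`, `rlgym_learn`, `rlgym_learn_algos`), which is the usual convention but worth verifying if a probe fails.

```python
# Sanity check: confirm each package is importable and, if you installed
# PyTorch with CUDA, that the GPU is visible. Module names are assumed
# to mirror the pip package names.
import importlib.util

for name in ("rlgym", "rlgym_learn", "rlgym_learn_algos"):
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'ok' if found else 'missing'}")

try:
    import torch
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch is not installed")
```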
See the RLGym website for complete documentation and demonstration of functionality [COMING SOON]. For now, you can take a look at `quick_start_guide.py` and `speed_test.py` in the rlgym-learn repo to get a sense of what's going on.
This code was separated out from rlgym-learn, which itself was created using rlgym-ppo as a base. The following files were largely written by Matthew Allen, the creator of rlgym-ppo:
- `ppo/basic_critic.py`
- `ppo/continuous_actor.py`
- `ppo/critic.py`
- `ppo/discrete_actor.py`
- `ppo/multi_discrete_actor.py`
- `ppo/experience_buffer.py`
- `util/*`
In general, his support for this project has allowed it to become what it is today.