An easy-to-use, comprehensible reinforcement learning library for research and education. (We have a logo.)

A Reinforcement Learning Library for Research and Education

Try it on Google Colab!


| Section | Description |
| --- | --- |
| Goals | The philosophy of rlberry |
| Installation | How to install rlberry |
| Getting started | A quick usage guide of rlberry |
| Documentation | A link to the documentation |
| Contributing | A guide for contributing |
| Citation | How to cite this work |

Goals

  • Write detailed documentation and comprehensible tutorials/examples (Jupyter notebooks) for each implemented algorithm.

  • Provide a general interface for agents, that

    • puts minimal constraints on the agent code (=> making it easy to include new algorithms and modify existing ones);

    • allows comparison between agents using a simple and unified evaluation interface (=> making it easy, for instance, to compare deep and "traditional" RL algorithms).

  • Unified seeding mechanism: define a single global seed, from which all other seeds inherit, ensuring independence of the random number generators.

  • Simple interface for creating and rendering new environments.
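
The unified seeding goal can be illustrated with NumPy's `SeedSequence`, which derives statistically independent child seeds from one global seed. This is only a sketch of the idea, not rlberry's actual seeding API:

```python
import numpy as np

# One global seed for the whole experiment.
global_seed = np.random.SeedSequence(42)

# Spawn independent child seeds, e.g. one per component (agent, env, eval).
agent_seq, env_seq, eval_seq = global_seed.spawn(3)

# Each component gets its own generator; the streams are independent,
# and the whole experiment is reproducible from the single global seed.
agent_rng = np.random.default_rng(agent_seq)
env_rng = np.random.default_rng(env_seq)

print(agent_rng.integers(0, 100), env_rng.integers(0, 100))
```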

Installation

Cloning & creating virtual environment

We suggest creating a virtual environment using Anaconda or Miniconda:

git clone https://github.com/rlberry-py/rlberry.git
cd rlberry
conda create -n rlberry python=3.7

Basic installation

Install rlberry without heavy optional dependencies (e.g., PyTorch):

conda activate rlberry
pip install -e .

Full installation

Install with all features,

conda activate rlberry
pip install -e .[full]

which includes:

  • Numba for just-in-time compilation of algorithms based on dynamic programming,
  • PyTorch for Deep RL agents,
  • Optuna for hyperparameter optimization,
  • ffmpeg-python for saving videos,
  • PyOpenGL for more rendering options.

Getting started

Tests

To run tests, install the test dependencies with `pip install -e .[test]` and run `pytest`. To run tests with coverage, install the test dependencies and run `bash run_testscov.sh`; the coverage report is written to `cov_html/index.html`.

Documentation

The documentation is under construction and will be available here.

Contributing

Want to contribute to rlberry? Please check our contribution guidelines. A list of interesting TODOs will be available soon. If you want to add new agents or environments, do not hesitate to open an issue!

Implementation notes

  • When inheriting from the Agent class, make sure to call Agent.__init__(self, env, **kwargs), passing **kwargs so that agents keep working when new features are added to the base class, and so that copy_env and reseed_env remain available options for any agent.

  • Convention for verbose in the agents:

    • verbose=0: nothing is printed
    • verbose >= 1: print progress messages

Errors and warnings are printed using the logging library.
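
A minimal sketch of the Agent.__init__ convention above. The Agent base class here is a simplified stand-in for illustration, not rlberry's real implementation:

```python
class Agent:
    """Simplified stand-in for the rlberry Agent base class."""

    def __init__(self, env, copy_env=True, reseed_env=True, verbose=0, **kwargs):
        self.env = env
        self.copy_env = copy_env
        self.reseed_env = reseed_env
        self.verbose = verbose


class MyAgent(Agent):
    def __init__(self, env, learning_rate=0.1, **kwargs):
        # Forward **kwargs so that options like copy_env/reseed_env, and any
        # features added to the base class later, keep working unchanged.
        Agent.__init__(self, env, **kwargs)
        self.learning_rate = learning_rate


# Base-class options pass through the subclass untouched.
agent = MyAgent(env="dummy-env", learning_rate=0.5, copy_env=False, verbose=1)
```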

Citing rlberry
