This repository contains the official code release for the paper *Learning in complex action spaces without policy gradients* by Arash Tavakoli, Sina Ghiassian, and Nemanja Rakićević, published in Transactions on Machine Learning Research (TMLR).

This implementation was developed by Arash Tavakoli.
If you make use of our work, please cite it as follows:

```bibtex
@article{tavakoli2025learning,
  author  = {Arash Tavakoli and Sina Ghiassian and Nemanja Rakicevic},
  title   = {Learning in complex action spaces without policy gradients},
  journal = {Transactions on Machine Learning Research},
  year    = {2025},
  url     = {https://openreview.net/forum?id=nOL9M6D4oM}
}
```
To run experiments locally, clone the repository and install the package in editable mode:

```shell
git clone https://github.com/atavakol/qmle.git && cd qmle
pip install -e .
```
Start training with video capture enabled:

```shell
python -m src.qmle --env-id walker_stand --capture-video
```
This project includes modified code from the following repositories:
- CleanRL - DQN implementation, licensed under the MIT License.
- Stable Baselines - prioritized replay buffer, licensed under the MIT License.
Each respective license is included in the `third_party/` directory.