Bug in cleanrl_utils/evals #380
Comments
Ah nice catch! Thanks for the report. Would you be up to creating a fix for this? We should also add end-to-end test cases for the evaluation scripts.
I am working on this and will create a PR later. I haven't figured out how to write the tests yet.
Thanks! Regarding tests, we usually just make sure the script runs without errors, with something like this: cleanrl/tests/test_atari_jax.py, lines 4 to 9 at 9f8b64b.
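Roughly, such a smoke test just shells out to the training script with small hyperparameters and fails on a non-zero exit code. A minimal sketch of that pattern (not the exact contents of the referenced lines; it assumes pytest and the repo's poetry environment, and the flags mirror the command from this report):

```python
import subprocess


def test_dqn_atari_jax_eval():
    # Smoke test: run the training script end to end with tiny hyperparameters
    # and --save-model enabled; check=True fails the test on a non-zero exit code.
    subprocess.run(
        "python cleanrl/dqn_atari_jax.py --save-model True"
        " --total-timesteps 2000 --learning-starts 1000",
        shell=True,
        check=True,
    )
```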
Then we run GitHub Actions to ensure the script works on multiple operating systems such as Windows, Linux, and macOS: cleanrl/.github/workflows/tests.yaml, lines 18 to 55 at 9f8b64b.
Problem Description
The model evaluation feature cannot be used. The main reason is that #370 migrated the DQN implementation from gym to gymnasium, while the eval module still uses gym as the evaluation environment.
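For context, the core API difference between the two libraries is roughly the following (an illustration only; it assumes an older, pre-0.26 gym release on the eval side, which the report does not state):

```python
import gym        # what cleanrl_utils/evals still imports
import gymnasium  # what dqn_atari_jax.py uses after #370

# Older gym API (pre-0.26): reset() returns only the observation,
# step() returns a 4-tuple.
env = gym.make("CartPole-v1")
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())

# Gymnasium API: reset() returns (obs, info), step() returns a 5-tuple
# with separate terminated/truncated flags.
env = gymnasium.make("CartPole-v1")
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
```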
Checklist
- poetry install (see CleanRL's installation guideline)

Current Behavior
Expected Behavior
Possible Solution
Update the cleanrl_utils/evals/* module so that evaluation also uses gymnasium; a hedged sketch of the change is shown below.
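A sketch of what a gymnasium-based evaluate helper could look like. The function name, signature, and final_info handling here are illustrative assumptions, not the actual cleanrl_utils API; make_env is assumed to be the script's env factory (wrapping the env with RecordEpisodeStatistics), and q_forward a callable mapping observations to greedy actions from the saved model:

```python
import gymnasium as gym  # instead of `import gym`
import numpy as np


def evaluate(make_env, env_id, eval_episodes, run_name, q_forward, epsilon=0.05):
    # Build the eval env with gymnasium's vector API so reset()/step()
    # signatures match what the gymnasium-based training script uses.
    envs = gym.vector.SyncVectorEnv([make_env(env_id, 0, 0, False, run_name)])
    obs, _ = envs.reset()
    episodic_returns = []
    while len(episodic_returns) < eval_episodes:
        if np.random.random() < epsilon:
            actions = np.array([envs.single_action_space.sample()])
        else:
            actions = q_forward(obs)  # greedy actions from the saved model
        # gymnasium step() returns (obs, reward, terminated, truncated, infos)
        obs, _, _, _, infos = envs.step(actions)
        # Finished episodes are reported by the vector env via infos["final_info"]
        if "final_info" in infos:
            for info in infos["final_info"]:
                if info is not None and "episode" in info:
                    episodic_returns.append(float(info["episode"]["r"]))
    return episodic_returns
```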
Steps to Reproduce
Running the command
python dqn_atari_jax.py --save-model True --total-timesteps 2000 --learning-starts 1000
will throw an error.