[Bug Report] Multiple issues within ray/tuner.py
#2328
Comments
Thanks for posting this. The team will review.
Thanks for the report! Tagging @glvov-bdai to get his feedback.
I'd really appreciate it if you could PR these changes; they look good to me! Sorry about these issues @ozhanozen, and thank you for looking into how to fix this! If you have these on a branch/PR, I can test them on my multi-GPU machine and confirm that these changes work. @kellyguo11, just FYI, my internship at the RAI Institute ends at the end of the month, so I'll no longer have access to or get notified on the @glvov-bdai account; feel free to ping @garylvov instead moving forward.
Hi @garylvov, no worries, and thank you for integrating Ray; it is quite useful for us. I have created a PR for these changes, as you suggested.
Hello,
I have noticed multiple issues within the `step()` function of `ray/tuner.py`, some of which prevent me from having an uninterrupted hyperparameter tuning session with Ray. Here are the issues, with possible workarounds:

There is the following loop to idle until the incoming data is updated:
IsaacLab/scripts/reinforcement_learning/ray/tuner.py
Lines 115 to 117 in 7de6d6f
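For context, as far as I can tell this loop just keeps re-reading the freshly extracted TensorBoard logs until they differ from the previously returned ones. Roughly paraphrased (not verbatim; the `util.load_tensorboard_logs` helper and the attribute names are how I read my checkout and may differ):

```python
# Rough paraphrase of the referenced idle loop, not verbatim:
data = util.load_tensorboard_logs(self.tensorboard_logdir)
while data == self.data:  # intended: idle until the logs change
    data = util.load_tensorboard_logs(self.tensorboard_logdir)
```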
However, due to the key `"done"` that we insert into `self.data` at each loop, `data` and `self.data` can never be equal, even if the underlying data are equal (`"done"` will be absent within `data`). I suggest we change this part to something like:
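A minimal sketch of what I mean (the helper and attribute names are placeholders as above, not authoritative): compare against `self.data` with the artificially inserted `"done"` key stripped, so equality of the underlying data is detected correctly.

```python
# Compare against self.data without the "done" key that step() adds,
# so the loop really idles until new log data arrives:
data = util.load_tensorboard_logs(self.tensorboard_logdir)
while data == {k: v for k, v in self.data.items() if k != "done"}:
    data = util.load_tensorboard_logs(self.tensorboard_logdir)
```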
The update to `self.data["done"]` that marks the run as finished currently happens here:
IsaacLab/scripts/reinforcement_learning/ray/tuner.py
Lines 104 to 105 in 7de6d6f
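As far as I understand, that part boils down to checking whether the training subprocess has returned and, if so, flagging the run as finished; roughly (paraphrased, not verbatim, and the `self.proc` handle name is my guess):

```python
# Rough paraphrase of the referenced lines, not verbatim:
if self.proc.poll() is not None:  # training process has returned
    self.data["done"] = True
```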
However, from time to time, I notice that the process that executes the training takes a while to return after the end of the training, and we end up inside the following loop (after the fix from bullet 1):
IsaacLab/scripts/reinforcement_learning/ray/tuner.py
Lines 115 to 117 in 7de6d6f
from which we can never exit (since the data is not updated anymore, and we don't check whether the process has returned). Consequently, Ray gets stuck there.
I suggest we change both of the while loops as follows:
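Concretely, something along these lines, applied to both loops (the `self.proc` handle and helper names are placeholders; the point is only the extra `poll()` check):

```python
# Idle until either new log data arrives or the training process has
# exited, so step() cannot block forever on a finished or hung run:
data = util.load_tensorboard_logs(self.tensorboard_logdir)
while (
    data == {k: v for k, v in self.data.items() if k != "done"}
    and self.proc.poll() is None  # stop waiting once the process returned
):
    data = util.load_tensorboard_logs(self.tensorboard_logdir)
```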
Finally, while this might not necessarily be an issue directly related to IsaacLab, I noticed that sometimes the process executing the training hangs forever right after the end of the training (maybe at `simulation_app.close()`?), halting the whole Ray job since we can never mark the run as finished.

While it might not be the best solution, I applied the following patch as a workaround, and it seems to work for me:
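One way to realize such a workaround looks roughly like this (a sketch of the idea, not the exact patch; the timeout value and the `self.proc` handle name are my own placeholders): once training has ended, give the process a short grace period to exit on its own, then terminate it forcibly so the run can still be marked as done.

```python
import subprocess

# After training has ended, make sure the (possibly hung) training process
# actually goes away so the trial can still be marked as finished:
if self.proc.poll() is None:  # process still alive after training ended
    try:
        self.proc.wait(timeout=20.0)  # grace period; value is arbitrary
    except subprocess.TimeoutExpired:
        self.proc.kill()  # force-terminate the hung process
        self.proc.wait()
self.data["done"] = True
```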
Additional context
I have tested these only on a single GPU (4090 RTX) and with the rsl_rl library.
System Info
Commit: bc7c9f5
Isaac Sim Version: 4.5
OS: Ubuntu 22.04
GPU: 4090 RTX
CUDA: 12.2
GPU Driver: 535.129.03
Checklist
Acceptance Criteria