Conversation

gustavo-grieco (Contributor)

Fix for #2427

@ehennenfent linked an issue May 11, 2021 that may be closed by this pull request

@ehennenfent (Contributor) left a comment


At the meeting, we discussed how checking the number of killed states on each iteration of this loop might be a little brittle in a multithreaded context. Fortunately, when a worker dies, Manticore emits an event called did_terminate_worker. Instead of checking the number of killed states, you could create a did_terminate_worker_callback on ManticoreBase and then add self.subscribe("did_terminate_worker", self.did_terminate_worker_callback) to __init__. That would simplify the code and reduce the risk of race conditions.
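
For illustration, a minimal sketch of that wiring could look like the following. The event name, the callback name, and the subscribe call come from the suggestion above; the toy publish/subscribe plumbing and the callback arguments are assumptions made so the snippet is self-contained, and may not match Manticore's actual event machinery.

```python
from collections import defaultdict
from typing import Callable, DefaultDict, List


class ManticoreBaseSketch:
    """Stand-in for ManticoreBase with a toy publish/subscribe mechanism."""

    def __init__(self) -> None:
        self._callbacks: DefaultDict[str, List[Callable]] = defaultdict(list)
        # Wire the callback up once, in __init__, as suggested above.
        self.subscribe("did_terminate_worker", self.did_terminate_worker_callback)

    def subscribe(self, name: str, callback: Callable) -> None:
        self._callbacks[name].append(callback)

    def _publish(self, name: str, *args, **kwargs) -> None:
        for callback in self._callbacks[name]:
            callback(*args, **kwargs)

    def did_terminate_worker_callback(self, worker_id, *args, **kwargs) -> None:
        # Placeholder: decide here whether the worker exited normally
        # (e.g. it ran out of ready states) or crashed and should be restarted.
        print(f"worker {worker_id} terminated")
```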

The other thing to note is that we may still want some sort of bail-out mechanism in case things have gone so wrong that all the workers are dying as soon as we re-create them. Perhaps we could have a maximum number of restarts allowed for each worker ID? Or maybe a maximum number of restarts within a five-minute window?
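
One hedged sketch of such a bail-out, combining a per-worker restart budget with a five-minute sliding window; every name and threshold below is illustrative rather than anything from this PR:

```python
import time
from collections import defaultdict, deque

MAX_RESTARTS_PER_WORKER = 5    # illustrative per-worker-ID budget
MAX_RESTARTS_PER_WINDOW = 20   # illustrative global budget per window
WINDOW_SECONDS = 300           # five-minute window

restarts_by_worker = defaultdict(int)
recent_restarts = deque()      # timestamps of recent restarts


def may_restart(worker_id: int) -> bool:
    """Return True (and record the restart) only if both budgets allow it."""
    now = time.monotonic()
    # Forget restarts that fell outside the sliding window.
    while recent_restarts and now - recent_restarts[0] > WINDOW_SECONDS:
        recent_restarts.popleft()
    if restarts_by_worker[worker_id] >= MAX_RESTARTS_PER_WORKER:
        return False
    if len(recent_restarts) >= MAX_RESTARTS_PER_WINDOW:
        return False
    restarts_by_worker[worker_id] += 1
    recent_restarts.append(now)
    return True
```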

@gustavo-grieco (Contributor, Author)

@ehennenfent I reimplemented this PR using did_terminate_worker_callback, but I still need to count the number of killed states to know how many workers I need to restart. This is required since did_terminate_worker_callback can be called in a variety of scenarios, most of which are fine.

On the other hand, we could add some sort of bail-out mechanism if we end up restarting too many workers, but keep in mind that Manticore should eventually finish anyway: killed states are not revived, so the analysis will terminate and Manticore will report to the user that some states were left unexplored. Perhaps restarting 10 or 20 workers is fine if you are exploring 2000 states, so the right threshold is difficult to determine. I'm still open to ideas about it.
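
A rough, purely illustrative sketch of that counting; the count_killed_states and _start_worker stand-ins and the _last_killed_count bookkeeping are assumptions for this example, not the PR's actual code:

```python
class ManticoreBaseSketch:
    """Toy stand-in showing how newly killed states could drive restarts."""

    def __init__(self) -> None:
        self._last_killed_count = 0

    def count_killed_states(self) -> int:
        # Stand-in for however Manticore reports its killed-state count.
        return 0

    def _start_worker(self) -> None:
        # Stand-in for spawning a replacement worker.
        pass

    def did_terminate_worker_callback(self, *args, **kwargs) -> None:
        # Workers also terminate for benign reasons (e.g. no ready states
        # left), so only the increase in killed states since the previous
        # callback is treated as workers that died abnormally and need to
        # be replaced.
        killed_now = self.count_killed_states()
        newly_killed = killed_now - self._last_killed_count
        self._last_killed_count = killed_now
        for _ in range(newly_killed):
            self._start_worker()
```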

@CLAassistant commented on Jun 1, 2022

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
1 out of 2 committers have signed the CLA.

✅ gustavo-grieco
❌ Eric Hennenfent


Eric Hennenfent seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

@dguido requested a review from ekilmer as a code owner on August 21, 2025 at 17:29


Development

Successfully merging this pull request may close these issues.

Available workers decrease over time?
