Remove unused Events and fix _wait_for_worker_startup behavior #471
When using TaskIQ to distribute long-running tasks to worker processes that are slow to start (due to heavy AI imports), we kept hitting an issue where TaskIQ would launch our worker subprocesses from `ProcessManager` and then, within 10-15 seconds, erroneously deem them "not alive" and restart them. After experimenting with a forked repo, we traced this to `_wait_for_worker_startup`, which was not actually waiting for our Python process to report itself alive. This PR fixes that method and removes the unused Events and imports, which as far as I could tell were never being signaled from anywhere. I tested a simplified version of this change on our dev system, so it should work, but after cleaning it up I have only re-run the unit tests. We are likely going to switch to Celery for its observability benefits, but I figured offering this PR would be the right thing to do for our open-source community :)