Replies: 1 comment
-
Not sure if you figured this out already, but like it says, you shouldn't really be using tries and instead something like...
Adjust your hours as you see fit. Also, it's worth noting you could use a key and put these types of jobs on their own queue, so...
And dispatch your job with something like...
That way they don't gum up your other queues. My problem is that in the above, there's no difference between jobs that have failed and jobs that have timed out. I want jobs that fail because of errors or exceptions to fail after one try; I can't figure that part out.
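The code snippets this comment refers to did not survive extraction. As a hedged sketch of the pattern it describes (a time-based `retryUntil` deadline instead of a tries count, plus dispatching onto a dedicated queue), and one possible answer to the failed-vs-timed-out question (catching exceptions and failing the job explicitly): the class name `CallThirdPartyApi`, the queue name `api`, and the two-hour deadline are assumptions, not the original code.

```php
<?php
// Hypothetical sketch — names and durations are assumptions.
namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

class CallThirdPartyApi implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    // Keep retrying (e.g. when rate-limited and released back onto the
    // queue) until this deadline, instead of counting a fixed --tries.
    public function retryUntil(): \DateTime
    {
        return now()->addHours(2); // adjust your hours as you see fit
    }

    public function handle(): void
    {
        try {
            // ... call the third-party API here ...
        } catch (\Throwable $e) {
            // Fail immediately on a real error or exception, so only
            // rate-limit releases keep retrying until the deadline.
            $this->fail($e);
        }
    }
}

// Dispatch onto a dedicated queue so these jobs don't gum up the others:
CallThirdPartyApi::dispatch()->onQueue('api');
```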
-
Hi,
My project consumes several 3rd-party APIs which enforce request limits. My project calls these APIs through Laravel jobs.
I have the following situation:
Once a project is submitted, around 60 jobs are dispatched on average. These jobs need to be executed at a rate of 1 job/minute.
There is one supervisord process running 2 worker processes on the default queue with `--tries=3`. Also, in `config/queue.php` for Redis I am using `'retry_after' => (60 * 15)` to avoid a retry while a job is still executing. My current rate limiter middleware is coded this way...
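The middleware code did not survive extraction. A common Laravel pattern for this kind of job rate limiting looks like the following sketch — the class name and the `api-limit` key are assumptions, not the poster's actual code:

```php
<?php
// Hypothetical rate-limiting job middleware sketch.
namespace App\Jobs\Middleware;

use Illuminate\Support\Facades\Redis;

class RateLimited
{
    public function handle(object $job, callable $next): void
    {
        Redis::throttle('api-limit') // assumed lock key
            ->allow(1)               // 1 job...
            ->every(60)              // ...per 60 seconds
            ->then(
                fn () => $next($job),
                // Could not obtain the lock: release the job back onto
                // the queue. Note that each release() counts as an
                // attempt against --tries, which matches the failure
                // pattern described below.
                fn () => $job->release(60)
            );
    }
}
```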
What happens is that 3 jobs get processed in 3 minutes, but after that all remaining jobs fail. What I understand is that all jobs are re-queued every minute, and once they cross the tries threshold (3), they are moved to `failed_jobs`.
I tried removing the `--tries` flag, but that didn't work. I also tried increasing it to `--tries=20`, but then jobs fail after 20 minutes. I don't want to hardcode the `--tries` flag, as in some situations more than 100 jobs can be dispatched. I also want to increase the number of queue worker processes in supervisor so that a few jobs can execute in parallel.
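For running several workers in parallel, supervisord can spawn multiple processes from one program section. A sketch, where the program name, queue name, user, and `numprocs` count are all assumptions:

```ini
; Hypothetical supervisord fragment — adjust names and counts to taste.
[program:laravel-api-worker]
command=php artisan queue:work redis --queue=api
process_name=%(program_name)s_%(process_num)02d
numprocs=4
autostart=true
autorestart=true
user=www-data
```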
I am looking for help to get through this problem. Please help...