Replies: 14 comments 7 replies
-
@momala454 it is interesting that even though the TestBug3 commands have finished and exited during the `sleep(20)`, you still get their log output. Laravel should handle signals automatically in the queue system, and if you run the worker in daemon mode it will exit after the current job is processed. So the question is: why do you manually subscribe to signals in your commands?
-
Because commands don't forward stdout and stderr, I moved my commands to queues. For "simplicity", I created a "Scheduler" job that calls `Artisan::call` for my commands, so I don't have to rewrite them to be queueable. I manually subscribe to signals so I can stop commands before they finish (they can run for hours). You're right, TestBug3 shouldn't even display the "kill asked" message, since it has already finished.
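Manually subscribing looks roughly like the sketch below. This is a hedged, self-contained illustration, not the actual TestBug code: it assumes the pcntl and posix extensions on a CLI build, and the self-directed `posix_kill` exists only to make the example runnable without an external `kill` command.

```php
// Hedged sketch of manually subscribing to a signal so a long-running command
// can stop cleanly between units of work. Requires the pcntl and posix
// extensions; the self-kill only simulates an operator running `kill -TERM <pid>`.
$stopRequested = false;

pcntl_signal(SIGTERM, function (int $signal) use (&$stopRequested): void {
    fwrite(STDOUT, "kill asked\n");
    $stopRequested = true;
});

$unitsDone = 0;

while (! $stopRequested && $unitsDone < 100) {
    $unitsDone++;               // ... one unit of real work goes here ...
    pcntl_signal_dispatch();    // deliver any pending signals between units

    if ($unitsDone === 5) {
        // Simulate an operator sending SIGTERM mid-run:
        posix_kill(posix_getpid(), SIGTERM);
    }
}
// The loop stops on the first dispatch after the signal arrives, so the
// current unit of work is never interrupted halfway through.
```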
-
@momala454 to address the reason you use queues for this, you can read this. Now, if you still want to use queues and make the commands signal-aware: we were under the impression that Laravel offers this out of the box; how else would a blue-green deployment work for the scheduler? It appears that it is your implementation that makes them signal-aware. The conclusion is that in blue-green deployments, long-running commands are killed before they finish... and so we have work to do.
-
I already spent a lot of time reading all the solutions offered for the stdout/stderr problem, but none of them worked properly.
-
For us, the linked one has worked for the past year with no issues.
-
Actually, I discarded the approach of creating a job for each log entry because I didn't like that solution much; it could even cause an infinite loop if the job itself generates a log entry.
-
The job does not generate a new job to log; it logs to stdout directly. Only the background process/command generates a job/message, so the infinite loop does not happen.

```php
Process::fromShellCommandline($this->buildCommand(), base_path(), [
    CustomStreamHandler::LOG_VIA_QUEUE_JOB => CustomStreamHandler::LOG_VIA_QUEUE_JOB,
], null, null)->run();
```

```php
protected function write(array $record): void
{
    if (self::LOG_VIA_QUEUE_JOB !== \env(self::LOG_VIA_QUEUE_JOB)) {
        parent::write($record);

        return;
    }
    // ...
}
```
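To make the guard concrete, here is a standalone sketch of the same idea in plain PHP. The names (`writeRecord`, `$queuedJobs`) are simplified stand-ins, not the project's actual `CustomStreamHandler`: only a child process launched with the env flag ships log records to the queue, while the worker writes straight through, so a record logged by the worker can never spawn another job.

```php
// Simplified stand-in for the env-flag guard described above.
const LOG_VIA_QUEUE_JOB = 'LOG_VIA_QUEUE_JOB';

$queuedJobs = [];

function writeRecord(array $record, array &$queuedJobs): void
{
    if (LOG_VIA_QUEUE_JOB !== getenv(LOG_VIA_QUEUE_JOB)) {
        // Worker path: log directly (stand-in for parent::write()).
        fwrite(STDOUT, $record['message'] . PHP_EOL);

        return;
    }

    // Child-command path: ship the record as a queue job instead of writing.
    $queuedJobs[] = $record;
}

writeRecord(['message' => 'from the worker'], $queuedJobs);  // written directly

putenv(LOG_VIA_QUEUE_JOB . '=' . LOG_VIA_QUEUE_JOB);         // what the BG process sees

writeRecord(['message' => 'from the command'], $queuedJobs); // queued as a job
```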
-
Ok, thanks. But what about the initial issue: can it be fixed (easily?) or is it an expected side effect?
-
We will let the Laravel team deal with that. Thank you for the heads-up. We are implementing SignalableCommands as we speak in our projects, to avoid commands being killed unexpectedly on deploys.

UPDATE: https://stackoverflow.com/a/61745416

```shell
for pid in $(ps -eo pid,cmd | grep '[a]rtisan' | awk '{print $1}'); do
    (kill -TERM "${pid}" > /dev/stdout 2>&1 || echo "kill -TERM ${pid} exited with non-zero code") &
done
```
-
@momala454 we looked into it. At the moment, the only workaround would be to use `php artisan queue:work --once` for your queue worker. We also use it like that, to avoid the limitations that daemon mode brings to the table (just like Octane). This still won't prevent duplicate signal handlers during the same job, but at least they don't persist between jobs. Unsetting the handlers is not easy, as they are kept as a list of callbacks per signal type:

```php
public function register(int $signal, callable $signalHandler): void
{
    if (!isset($this->signalHandlers[$signal])) {
        $previousCallback = pcntl_signal_get_handler($signal);

        if (\is_callable($previousCallback)) {
            $this->signalHandlers[$signal][] = $previousCallback;
        }
    }

    $this->signalHandlers[$signal][] = $signalHandler;

    pcntl_signal($signal, $this->handle(...));
}
```

Note:

```php
public function handle(int $signal): void
{
    $count = \count($this->signalHandlers[$signal]);

    foreach ($this->signalHandlers[$signal] as $i => $signalHandler) {
        $hasNext = $i !== $count - 1;
        $signalHandler($signal, $hasNext); // note the $hasNext flag here
    }
}
```

So you could avoid the duplicate logging, but this will not fix the memory issue.
-
Another solution: `php artisan queue:work --max-jobs=10`
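With either flag the worker process exits regularly, so something must restart it. A minimal supervisord sketch (program name, paths, and `numprocs` are placeholders, not from this thread):

```ini
[program:queue-worker]
command=php /var/www/app/artisan queue:work --once --sleep=3
autorestart=true
numprocs=1
stdout_logfile=/var/log/queue-worker.log
```

Because each job runs in a fresh process, signal handlers registered during one job can never accumulate into the next.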
-
Thanks for the workarounds.
-
See `framework/src/Illuminate/Foundation/Console/ServeCommand.php`, lines 129 to 153 (at c82b43b).
-
@momala454 what did you choose as a solution?
Beta Was this translation helpful? Give feedback.
-
Laravel Version

12.20.0

PHP Version

8.4.10

Database Driver & Version

No response

Description

Each time a call to `\Illuminate\Support\Facades\Artisan::call()` is made, it stores the signal handler without clearing the old ones.

Steps To Reproduce

When running `php artisan testBug2` and doing `kill -3 (pid)` during the `sleep(20)`, it generates the following output:

What I expect:

This means the class `Symfony\Component\Console\SignalRegistry\SignalRegistry` is filling the array `$this->signalHandlers` for each call to `Artisan::call`. I'm scheduling console commands using queues, so the worker calls `Artisan::call` many times. It therefore uses more and more memory to fill the `signalHandlers` array, and when a signal is sent, it generates X lines of the notice "Kill asked", X being the number of times the schedule has been called.
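The reported behaviour can be sketched in a few lines of standalone PHP. The names below are simplified stand-ins for Symfony's `SignalRegistry`, not the actual class: every simulated `Artisan::call()` appends the command's handler to a shared list that is never cleared, so one signal fires as many "kill asked" lines as calls were made.

```php
// Simplified stand-in for the accumulating registry described above.
$signalHandlers = [];

const SIGQUIT_NO = 3; // numeric value of SIGQUIT, avoids requiring pcntl

function registerHandler(array &$signalHandlers, int $signal, callable $handler): void
{
    $signalHandlers[$signal][] = $handler; // appended, never cleared
}

$messages = [];

for ($call = 1; $call <= 3; $call++) {
    // Each simulated Artisan::call() registers the same handler again:
    registerHandler($signalHandlers, SIGQUIT_NO, function () use (&$messages): void {
        $messages[] = 'kill asked';
    });
}

// One `kill -3 <pid>` now runs every accumulated handler:
foreach ($signalHandlers[SIGQUIT_NO] as $handler) {
    $handler();
}
// $messages holds three "kill asked" entries, matching the duplicated output
// in the report, and the list keeps growing with every further call.
```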