- PySeq version: latest
- Python version: 3.9
- Operating System: aws docker linux python 3.9 image
Description
I'm trying to get seqlog working for our FastAPI app. I've written a middleware that takes requests and logs their details using seqlog. This works perfectly locally, both with and without Docker, but it's broken on AWS Lambda :(
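For context, here is a minimal sketch of the kind of request-logging middleware described above. The class name and structure are assumptions, not the author's actual code; it's written as a plain ASGI middleware using stdlib logging, so with seqlog configured (e.g. via `seqlog.log_to_seq`) these records would be queued for the Seq server.

```python
import logging

log = logging.getLogger("request-logger")


class RequestLoggingMiddleware:
    """Hypothetical ASGI middleware that logs basic request details.

    A sketch, not the author's actual middleware: it simply emits one
    log record per incoming HTTP request before delegating to the app.
    """

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope["type"] == "http":
            # Log method and path; with seqlog configured, this record
            # would land in the batching consumer's queue.
            log.info(
                "Handling %s %s",
                scope.get("method", "?"),
                scope.get("path", "?"),
            )
        await self.app(scope, receive, send)
```

In FastAPI this would be registered with `app.add_middleware(RequestLoggingMiddleware)`.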
What I Did
When I call seqlog.log_to_seq, I save the returned instance as log_handler so I can see what's going on inside. Whenever I log anything, log_handler.consumer.current_batch_size correctly shows the batch size increasing in my local test environment. On Lambda, however, this value doesn't increase. The only way I've managed to get any logs out of the system is to repeatedly hit one FastAPI endpoint; around the 30th to 50th try, logs suddenly start to show up and everything works perfectly.
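One plausible explanation (an assumption, not confirmed from seqlog's source): Lambda freezes the execution environment between invocations, so a background consumer thread may get little or no CPU time, leaving queued records unconsumed. A stdlib-only toy analogue of the batch-consumer pattern illustrates the symptom:

```python
import queue
import threading


class BatchConsumer:
    """Toy analogue of a batching log consumer (NOT seqlog's real code).

    A background thread drains a queue into a batch. If that thread
    never gets to run (as when Lambda freezes background threads
    between invocations), the batch stays empty even though records
    were successfully queued.
    """

    def __init__(self, record_queue):
        self.queue = record_queue
        self.current_batch = []
        self._stop = threading.Event()
        self.thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self.thread.start()

    def _run(self):
        # Drain the queue into the current batch until stopped.
        while not self._stop.is_set():
            try:
                record = self.queue.get(timeout=0.1)
            except queue.Empty:
                continue
            self.current_batch.append(record)

    def stop(self):
        self._stop.set()
        self.thread.join()
```

Records put on the queue before (or while) the thread is starved simply sit there, which would match current_batch_size never advancing.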
I've tried:
- doing a time.sleep so the consumer thread has time to "wake up"
- repeatedly logging messages on startup, trying to achieve the same effect as hitting an endpoint does
- manually calling flush, but this doesn't seem to help as the logs are not being queued
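For completeness, the usual mitigation for frozen background threads on Lambda is to flush synchronously before each invocation returns. A stdlib-only sketch (assuming the Seq handler, like any logging.Handler, implements flush()); note that if records truly never reach the queue, as described above, flushing alone won't surface them:

```python
import logging


def flush_all_handlers():
    """Synchronously flush every handler attached to the root logger.

    Sketch of a workaround: call this at the end of each Lambda
    invocation, before the environment is frozen, so any buffered
    records are pushed out while the runtime is still running.
    """
    for handler in logging.getLogger().handlers:
        handler.flush()
```

In the middleware, this would be called after `await self.app(...)` so the batch is sent inside the same invocation.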
I'm happy to try anything. Sadly, there's no way to attach a debugger on Lambda, each deployment takes about 5-7 minutes, and the only visibility I have is printing to AWS CloudWatch to see what happened.