Replies: 1 comment
-
Have you tried using Provisioned Concurrency for Lambda?
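Provisioned Concurrency keeps a number of execution environments initialized ahead of time, so invocations routed to them skip the cold start (including the langchain import) entirely. A minimal sketch with boto3; the function name `retrieval-fn` and alias `live` are hypothetical placeholders, the call requires AWS credentials, and provisioned environments incur an hourly charge:

```python
import boto3

lambda_client = boto3.client("lambda")

# Provisioned Concurrency can only be attached to a published version
# or an alias, never to $LATEST.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="retrieval-fn",   # hypothetical name -- substitute your own
    Qualifier="live",              # hypothetical alias
    ProvisionedConcurrentExecutions=2,
)

# The environments take a short while to initialize; poll until READY.
status = lambda_client.get_provisioned_concurrency_config(
    FunctionName="retrieval-fn",
    Qualifier="live",
)
print(status["Status"])
```

Note that requests exceeding the provisioned count still fall back to on-demand environments and pay the full cold start, so size the count to your expected concurrency.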
-
We're experiencing cold start times of roughly 6-8 s for a very small retrieval function that imports langchain, pinecone and openai.
Experiments show that importing langchain alone accounts for ~90% of this cold start time; increasing the function's memory allocation makes no difference.
Any ideas on how to reduce these cold starts? We're on Python 3.9, the native Python runtime (no Docker), with 2,048 MB of RAM.