Router Instance creation is consuming ~3 mins for 250+ deployed models. #9246
Closed
winningcode started this conversation in General
Replies: 1 comment · 1 reply
-
Hey @winningcode, what version of litellm is this? There is no limit to how many models can be supported. The delay was probably due to azure/openai client creation on init, but that's been refactored out in #9140. Can you check if this still exists on v1.63.8-nightly?
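If it helps to reproduce, here's a minimal benchmark sketch. It builds a `model_list` in the shape the litellm SDK documents (dummy names and credentials are placeholders); the `Router` construction itself is left commented out so the timing scaffold runs standalone, without litellm installed or credentials set:

```python
import time

def make_model_list(n: int) -> list[dict]:
    """Build n dummy deployments in litellm's model_list format."""
    return [
        {
            "model_name": f"gpt-4o-{i}",  # placeholder alias
            "litellm_params": {
                "model": "azure/gpt-4o",
                "api_key": "sk-placeholder",            # dummy credential
                "api_base": "https://example.openai.azure.com",
            },
        }
        for i in range(n)
    ]

model_list = make_model_list(250)

start = time.perf_counter()
# from litellm import Router                  # uncomment with litellm installed
# router = Router(model_list=model_list)      # the call being benchmarked
elapsed = time.perf_counter() - start
print(f"{len(model_list)} deployments, init took {elapsed:.2f}s")
```

Running this before and after upgrading should show whether init time still scales badly with the number of deployments.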
-
Router instance creation is consuming ~3 mins for 250+ deployed models. Is there a performance test around this? What is the maximum number of deployed models litellm can support through the litellm SDK?