Ref: #800
We're using litellm to proxy various LLMs behind an OpenAI-format interface. The existing OpenAI-compatible adapter works fine, but a dedicated LiteLLM adapter would enable prompt caching, which is a big deal for both performance and cost savings.
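For illustration, here's a minimal sketch of the kind of request prompt caching involves, assuming an Anthropic model routed through litellm (the model name and prompt text are placeholders). The `cache_control` hint on a content block is what litellm forwards to the provider, and it isn't part of the standard OpenAI chat format, which is why a LiteLLM-aware adapter would matter here:

```python
import litellm

# Minimal sketch: mark a large, static system prompt as cacheable so the
# provider can reuse it across calls instead of reprocessing it each time.
response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20241022",  # placeholder model
    messages=[
        {
            "role": "system",
            "content": [
                {
                    "type": "text",
                    "text": "Long, reusable instructions go here...",
                    # Provider-side caching hint forwarded by litellm.
                    "cache_control": {"type": "ephemeral"},
                }
            ],
        },
        {"role": "user", "content": "Hello!"},
    ],
)
print(response.choices[0].message.content)
```

An adapter that only speaks plain OpenAI chat format has no place to put that `cache_control` block, so the caching hint is lost at the boundary.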