Configuration of librechat.yaml for Azure OpenAI #7364
apamildner asked this question in Troubleshooting
Answered by danny-avila on May 13, 2025
Replies: 1 comment 1 reply
Thanks for the report but this is not a bug—it's expected behavior.
The key under models: (the "model identifier") must match an actual LLM model name or identifier (like gpt-4.1). If you use an arbitrary key (like foo:), LibreChat can't recognize the model, resulting in errors.

Fix: change your model config to:

```yaml
models:
  gpt-4.1:
    deploymentName: "gpt-4.1"
```

Arbitrary keys are not meant to be used under models:. The motivation behind this is to allow cases where the deployment name is not a valid LLM model name or identifier (like gpt-4.1). Do you have a specific use case for customizing this key? As long as it's a partial match to a system-known model, it will work as expected.

Full example:

```yaml
endpoints:
  azureOpenAI:
    streamRate: 35
    titleModel: "gpt-4o"
    titleConvo: true
    groups:
      - group: "my-group-name"
        apiKey: "${MY_API_KEY_REFERENCE}"
        instanceName: "my-instance-name"
        version: "2024-12-01-preview"
        models:
          gpt-4.1:
            deploymentName: "gpt-4.1"
```
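The case the answer gives as the motivation for deploymentName (a deployment whose Azure name is not a valid model identifier) can be illustrated with a minimal sketch. The deployment name "my-gpt41-deployment" below is hypothetical; the remaining values mirror the example above:

```yaml
# Sketch of the case deploymentName is meant for (hypothetical deployment name).
# The key under models: stays a recognized model identifier (gpt-4.1),
# while deploymentName points at the actual Azure deployment.
endpoints:
  azureOpenAI:
    groups:
      - group: "my-group-name"
        apiKey: "${MY_API_KEY_REFERENCE}"
        instanceName: "my-instance-name"
        version: "2024-12-01-preview"
        models:
          gpt-4.1:
            deploymentName: "my-gpt41-deployment"  # hypothetical Azure deployment name
```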
1 reply
Answer selected by apamildner