The Feature

Hi,

Supporting the service_tier parameter ("flex") would help cut costs roughly in half when using o3 or o4-mini.
See: https://platform.openai.com/docs/guides/flex-processing
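For reference, here is a minimal sketch of what the request looks like against the OpenAI API directly, using the official openai Python SDK (the model and prompt are illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# service_tier="flex" opts this request into flex processing:
# roughly half-price tokens in exchange for slower, best-effort scheduling.
response = client.chat.completions.create(
    model="o4-mini",
    messages=[{"role": "user", "content": "Say hello."}],
    service_tier="flex",
)
print(response.choices[0].message.content)
```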
Motivation, pitch

It cuts costs roughly in half, which is always nice :)

Are you a ML Ops Team?

No

Twitter / LinkedIn details

No response
According to the documentation, any parameters not on this list are considered provider-specific and will be passed directly to the LLM API. So you can use this parameter directly; I have tried it, and it works fine.
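For anyone who wants to try it, here is a minimal sketch of the pass-through usage. This assumes the project is LiteLLM (the issue template fields match its feature-request form) and that, per the comment above, unrecognized kwargs are forwarded to the provider:

```python
# Sketch only: assumes LiteLLM, where extra kwargs not in its standard
# parameter list are forwarded as-is to the underlying provider API.
import litellm

response = litellm.completion(
    model="openai/o4-mini",
    messages=[{"role": "user", "content": "Say hello."}],
    service_tier="flex",  # provider-specific param, passed through to OpenAI
)
print(response.choices[0].message.content)
```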