
updated the default model to gpt-4.1-mini in examples #542


Merged
merged 1 commit on Jun 26, 2025
12 changes: 6 additions & 6 deletions fern/calls/customer-join-timeout.mdx
@@ -62,7 +62,7 @@ Configure `customerJoinTimeoutSeconds` through the Vapi API for both permanent a
name: "Customer Support Assistant",
model: {
provider: "openai",
-model: "gpt-4"
+model: "gpt-4.1-mini"
},
voice: {
provider: "11labs",
@@ -78,7 +78,7 @@ Configure `customerJoinTimeoutSeconds` through the Vapi API for both permanent a

assistant = client.assistants.create(
name="Customer Support Assistant",
-model={"provider": "openai", "model": "gpt-4"},
+model={"provider": "openai", "model": "gpt-4.1-mini"},
voice={"provider": "11labs", "voiceId": "21m00Tcm4TlvDq8ikWAM"},
customer_join_timeout_seconds=30
)
@@ -89,7 +89,7 @@ Configure `customerJoinTimeoutSeconds` through the Vapi API for both permanent a
-H "Content-Type: application/json" \
-d '{
"name": "Customer Support Assistant",
-"model": {"provider": "openai", "model": "gpt-4"},
+"model": {"provider": "openai", "model": "gpt-4.1-mini"},
"voice": {"provider": "11labs", "voiceId": "21m00Tcm4TlvDq8ikWAM"},
"customerJoinTimeoutSeconds": 30
}'
@@ -130,7 +130,7 @@ Configure `customerJoinTimeoutSeconds` through the Vapi API for both permanent a
```typescript title="TypeScript (Server SDK)"
const call = await client.calls.createWeb({
assistant: {
-model: { provider: "openai", model: "gpt-3.5-turbo" },
+model: { provider: "openai", model: "gpt-4.1-mini" },
voice: { provider: "playht", voiceId: "jennifer" },
customerJoinTimeoutSeconds: 60
}
@@ -139,7 +139,7 @@ Configure `customerJoinTimeoutSeconds` through the Vapi API for both permanent a
```python title="Python (Server SDK)"
call = client.calls.create_web(
assistant={
-"model": {"provider": "openai", "model": "gpt-3.5-turbo"},
+"model": {"provider": "openai", "model": "gpt-4.1-mini"},
"voice": {"provider": "playht", "voiceId": "jennifer"},
"customer_join_timeout_seconds": 60
}
@@ -151,7 +151,7 @@ Configure `customerJoinTimeoutSeconds` through the Vapi API for both permanent a
-H "Content-Type: application/json" \
-d '{
"assistant": {
-"model": {"provider": "openai", "model": "gpt-3.5-turbo"},
+"model": {"provider": "openai", "model": "gpt-4.1-mini"},
"voice": {"provider": "playht", "voiceId": "jennifer"},
"customerJoinTimeoutSeconds": 60
}
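All of the examples in this file set `customerJoinTimeoutSeconds` at creation time. A caller-side sketch in plain Python (no SDK) can clamp the value before building the request payload; the 1-300 second bounds here are an illustrative assumption, not documented Vapi limits:

```python
def clamp_join_timeout(seconds, lo=1, hi=300):
    """Clamp a requested timeout to an assumed sane range.

    The 1-300 second bounds are illustrative, not documented Vapi limits.
    """
    return max(lo, min(hi, int(seconds)))

# Build the same payload shape the cURL example above sends.
payload = {
    "name": "Customer Support Assistant",
    "model": {"provider": "openai", "model": "gpt-4.1-mini"},
    "voice": {"provider": "11labs", "voiceId": "21m00Tcm4TlvDq8ikWAM"},
    "customerJoinTimeoutSeconds": clamp_join_timeout(30),
}
print(payload["customerJoinTimeoutSeconds"])  # -> 30
```

The clamp is cheap insurance against a misconfigured caller passing 0 or an absurdly large value to the API.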
10 changes: 5 additions & 5 deletions fern/customization/custom-llm/using-your-server.mdx
@@ -4,11 +4,11 @@ slug: customization/custom-llm/using-your-server
---


-This guide provides a comprehensive walkthrough on integrating Vapi with OpenAI's gpt-3.5-turbo-instruct model using a custom LLM configuration. We'll leverage Ngrok to expose a local development environment for testing and demonstrate the communication flow between Vapi and your LLM.
+This guide provides a comprehensive walkthrough on integrating Vapi with OpenAI's gpt-4.1-mini model using a custom LLM configuration. We'll leverage Ngrok to expose a local development environment for testing and demonstrate the communication flow between Vapi and your LLM.
## Prerequisites

- **Vapi Account**: Access to the Vapi Dashboard for configuration.
-- **OpenAI API Key**: With access to the gpt-3.5-turbo-instruct model.
+- **OpenAI API Key**: With access to the gpt-4.1-mini model.
- **Python Environment**: Set up with the OpenAI library (`pip install openai`).
- **Ngrok**: For exposing your local server to the internet.
- **Code Reference**: Familiarize yourself with the `/openai-sse/chat/completions` endpoint function in the provided Github repository: [Server-Side Example Python Flask](https://github.com/VapiAI/server-side-example-python-flask/blob/main/app/api/custom_llm.py).
@@ -31,7 +31,7 @@ def chat_completions():
# ...

response = openai.ChatCompletion.create(
-model="gpt-3.5-turbo-instruct",
+model="gpt-4.1-mini",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
# ... (Add messages from conversation history and current prompt)
@@ -111,7 +111,7 @@ Your Python script receives the POST request and the chat_completions function i
The script parses the JSON data, extracts relevant information (prompt, conversation history), and builds the prompt for the OpenAI API call.

**4. Call to OpenAI API:**
-The constructed prompt is sent to the gpt-3.5-turbo-instruct model using the openai.ChatCompletion.create method.
+The constructed prompt is sent to the gpt-4.1-mini model using the openai.ChatCompletion.create method.

**5. Receive and Format Response:**
The response from OpenAI, containing the generated text, is received and formatted according to Vapi's expected structure.
@@ -122,7 +122,7 @@ The formatted response is sent back to Vapi as a JSON object.
**7. Vapi Displays Response:**
Vapi receives the response and displays the generated text within the conversation interface to the user.

-By following these detailed steps and understanding the communication flow, you can successfully connect Vapi to OpenAI's gpt-3.5-turbo-instruct model and create powerful conversational experiences within your Vapi applications. The provided code example and reference serve as a starting point for you to build and customize your integration based on your specific needs.
+By following these detailed steps and understanding the communication flow, you can successfully connect Vapi to OpenAI's gpt-4.1-mini model and create powerful conversational experiences within your Vapi applications. The provided code example and reference serve as a starting point for you to build and customize your integration based on your specific needs.
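The parsing and formatting halves of this flow (steps 3 and 5) can be sketched without any network calls. The helper names below are hypothetical, and the field layout assumes the standard OpenAI chat-completions response shape rather than the exact Vapi contract:

```python
def extract_messages(request_json):
    # Step 3: pull the prompt and conversation history out of the
    # incoming payload. The "messages" key is assumed to follow the
    # chat-completions shape.
    return request_json.get("messages", [])

def format_vapi_response(completion):
    # Step 5: reduce a raw chat-completions response to the reply fields
    # the caller needs.
    choice = completion["choices"][0]["message"]
    return {"role": choice["role"], "content": choice["content"]}

# A canned completion stands in for the real openai.ChatCompletion.create call.
fake_completion = {
    "choices": [{"message": {"role": "assistant", "content": "Hello!"}}]
}
print(format_vapi_response(fake_completion))  # -> {'role': 'assistant', 'content': 'Hello!'}
```

Keeping these two helpers free of network calls makes the endpoint easy to unit-test: the OpenAI call in the middle is the only piece that needs mocking.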

**Video Tutorial:**
<iframe
@@ -51,7 +51,7 @@

</Accordion>
<Accordion title="Model Setup" icon="microchip" iconType="solid">
-Now we're going to set the "brains" of the assistant, the large language model. We're going to be using `GPT-4` (from [OpenAI](https://openai.com/)) for this demo (though you're free to use `GPT-3.5`, or any one of your favorite LLMs).
+Now we're going to set the "brains" of the assistant, the large language model. We're going to be using `GPT-4` (from [OpenAI](https://openai.com/)) for this demo (though you're free to use `GPT-4.1-mini`, or any one of your favorite LLMs).

<AccordionGroup>
<Accordion title="Set Your OpenAI Provider Key (optional)" icon="key" iconType="solid">