Commit 1dbe442

updated the default model to gpt-4.1-mini in examples (#542)
1 parent 78566ac commit 1dbe442

File tree

3 files changed: +12 −12 lines

fern/calls/customer-join-timeout.mdx

Lines changed: 6 additions & 6 deletions

@@ -62,7 +62,7 @@ Configure `customerJoinTimeoutSeconds` through the Vapi API for both permanent a
   name: "Customer Support Assistant",
   model: {
     provider: "openai",
-    model: "gpt-4"
+    model: "gpt-4.1-mini"
   },
   voice: {
     provider: "11labs",

@@ -78,7 +78,7 @@ Configure `customerJoinTimeoutSeconds` through the Vapi API for both permanent a
 
 assistant = client.assistants.create(
     name="Customer Support Assistant",
-    model={"provider": "openai", "model": "gpt-4"},
+    model={"provider": "openai", "model": "gpt-4.1-mini"},
     voice={"provider": "11labs", "voiceId": "21m00Tcm4TlvDq8ikWAM"},
     customer_join_timeout_seconds=30
 )

@@ -89,7 +89,7 @@ Configure `customerJoinTimeoutSeconds` through the Vapi API for both permanent a
   -H "Content-Type: application/json" \
   -d '{
     "name": "Customer Support Assistant",
-    "model": {"provider": "openai", "model": "gpt-4"},
+    "model": {"provider": "openai", "model": "gpt-4.1-mini"},
     "voice": {"provider": "11labs", "voiceId": "21m00Tcm4TlvDq8ikWAM"},
     "customerJoinTimeoutSeconds": 30
   }'

@@ -130,7 +130,7 @@ Configure `customerJoinTimeoutSeconds` through the Vapi API for both permanent a
 ```typescript title="TypeScript (Server SDK)"
 const call = await client.calls.createWeb({
   assistant: {
-    model: { provider: "openai", model: "gpt-3.5-turbo" },
+    model: { provider: "openai", model: "gpt-4.1-mini" },
     voice: { provider: "playht", voiceId: "jennifer" },
     customerJoinTimeoutSeconds: 60
   }

@@ -139,7 +139,7 @@ Configure `customerJoinTimeoutSeconds` through the Vapi API for both permanent a
 ```python title="Python (Server SDK)"
 call = client.calls.create_web(
     assistant={
-        "model": {"provider": "openai", "model": "gpt-3.5-turbo"},
+        "model": {"provider": "openai", "model": "gpt-4.1-mini"},
         "voice": {"provider": "playht", "voiceId": "jennifer"},
         "customer_join_timeout_seconds": 60
     }

@@ -151,7 +151,7 @@ Configure `customerJoinTimeoutSeconds` through the Vapi API for both permanent a
   -H "Content-Type: application/json" \
   -d '{
     "assistant": {
-      "model": {"provider": "openai", "model": "gpt-3.5-turbo"},
+      "model": {"provider": "openai", "model": "gpt-4.1-mini"},
       "voice": {"provider": "playht", "voiceId": "jennifer"},
       "customerJoinTimeoutSeconds": 60
     }
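Every hunk in this file changes the same single field in the request body. A minimal sketch of the resulting web-call payload as a plain Python dict (the helper name and defaults below are illustrative, not part of the Vapi SDK):

```python
# Sketch of the payload the updated web-call examples send.
# build_web_call_payload is a hypothetical helper, not a Vapi SDK function.
def build_web_call_payload(timeout_seconds=60, model="gpt-4.1-mini"):
    return {
        "assistant": {
            "model": {"provider": "openai", "model": model},
            "voice": {"provider": "playht", "voiceId": "jennifer"},
            "customerJoinTimeoutSeconds": timeout_seconds,
        }
    }

payload = build_web_call_payload()
print(payload["assistant"]["model"]["model"])  # gpt-4.1-mini
```

Centralizing the payload like this is one way to keep the model name in a single place, so a future model bump touches one default instead of six call sites.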

fern/customization/custom-llm/using-your-server.mdx

Lines changed: 5 additions & 5 deletions

@@ -4,11 +4,11 @@ slug: customization/custom-llm/using-your-server
 ---
 
 
-This guide provides a comprehensive walkthrough on integrating Vapi with OpenAI's gpt-3.5-turbo-instruct model using a custom LLM configuration. We'll leverage Ngrok to expose a local development environment for testing and demonstrate the communication flow between Vapi and your LLM.
+This guide provides a comprehensive walkthrough on integrating Vapi with OpenAI's gpt-4.1-mini model using a custom LLM configuration. We'll leverage Ngrok to expose a local development environment for testing and demonstrate the communication flow between Vapi and your LLM.
 ## Prerequisites
 
 - **Vapi Account**: Access to the Vapi Dashboard for configuration.
-- **OpenAI API Key**: With access to the gpt-3.5-turbo-instruct model.
+- **OpenAI API Key**: With access to the gpt-4.1-mini model.
 - **Python Environment**: Set up with the OpenAI library (`pip install openai`).
 - **Ngrok**: For exposing your local server to the internet.
 - **Code Reference**: Familiarize yourself with the `/openai-sse/chat/completions` endpoint function in the provided Github repository: [Server-Side Example Python Flask](https://github.com/VapiAI/server-side-example-python-flask/blob/main/app/api/custom_llm.py).

@@ -31,7 +31,7 @@ def chat_completions():
     # ...
 
     response = openai.ChatCompletion.create(
-        model="gpt-3.5-turbo-instruct",
+        model="gpt-4.1-mini",
         messages=[
             {"role": "system", "content": "You are a helpful assistant."},
             # ... (Add messages from conversation history and current prompt)

@@ -111,7 +111,7 @@ Your Python script receives the POST request and the chat_completions function i
 The script parses the JSON data, extracts relevant information (prompt, conversation history), and builds the prompt for the OpenAI API call.
 
 **4. Call to OpenAI API:**
-The constructed prompt is sent to the gpt-3.5-turbo-instruct model using the openai.ChatCompletion.create method.
+The constructed prompt is sent to the gpt-4.1-mini model using the openai.ChatCompletion.create method.
 
 **5. Receive and Format Response:**
 The response from OpenAI, containing the generated text, is received and formatted according to Vapi's expected structure.

@@ -122,7 +122,7 @@ The formatted response is sent back to Vapi as a JSON object.
 **7. Vapi Displays Response:**
 Vapi receives the response and displays the generated text within the conversation interface to the user.
 
-By following these detailed steps and understanding the communication flow, you can successfully connect Vapi to OpenAI's gpt-3.5-turbo-instruct model and create powerful conversational experiences within your Vapi applications. The provided code example and reference serve as a starting point for you to build and customize your integration based on your specific needs.
+By following these detailed steps and understanding the communication flow, you can successfully connect Vapi to OpenAI's gpt-4.1-mini model and create powerful conversational experiences within your Vapi applications. The provided code example and reference serve as a starting point for you to build and customize your integration based on your specific needs.
 
 **Video Tutorial:**
 <iframe

fern/snippets/quickstart/dashboard/assistant-setup-inbound.mdx

Lines changed: 1 addition & 1 deletion

@@ -51,7 +51,7 @@
 
 </Accordion>
 <Accordion title="Model Setup" icon="microchip" iconType="solid">
-Now we're going to set the "brains" of the assistant, the large language model. We're going to be using `GPT-4` (from [OpenAI](https://openai.com/)) for this demo (though you're free to use `GPT-3.5`, or any one of your favorite LLMs).
+Now we're going to set the "brains" of the assistant, the large language model. We're going to be using `GPT-4` (from [OpenAI](https://openai.com/)) for this demo (though you're free to use `GPT-4.1-mini`, or any one of your favorite LLMs).
 
 <AccordionGroup>
 <Accordion title="Set Your OpenAI Provider Key (optional)" icon="key" iconType="solid">
