diff --git a/fern/advanced/sip/sip-plivo.mdx b/fern/advanced/sip/sip-plivo.mdx index cd79a27e..48be3ff0 100644 --- a/fern/advanced/sip/sip-plivo.mdx +++ b/fern/advanced/sip/sip-plivo.mdx @@ -150,11 +150,11 @@ Indian phone numbers cannot be used with Plivo on Vapi due to TRAI regulations. ``` - 1. [Follow this guide to create an assistant](https://docs.vapi.ai/quickstart/dashboard#create-an-assistant) + 1. [Follow this guide to create an assistant](/quickstart/phone#create-your-first-voice-assistant) 2. Note your Assistant ID for making calls. - [**Using the API**](https://docs.vapi.ai/calls/outbound-calling) + [**Using the API**](/calls/outbound-calling) ```bash curl --location 'https://api.vapi.ai/call/phone' \ @@ -170,7 +170,7 @@ Indian phone numbers cannot be used with Plivo on Vapi due to TRAI regulations. }' ``` - [**Using the Vapi Dashboard**](https://docs.vapi.ai/quickstart/phone/outbound) + [**Using the Vapi Dashboard**](/quickstart/phone#try-outbound-calling) 1. Select your Assistant 2. Enter the phone number of the user you want to call diff --git a/fern/advanced/sip/sip-twilio.mdx b/fern/advanced/sip/sip-twilio.mdx index f8eb009e..a399e967 100644 --- a/fern/advanced/sip/sip-twilio.mdx +++ b/fern/advanced/sip/sip-twilio.mdx @@ -144,7 +144,7 @@ This guide walks you through setting up both outbound and inbound SIP trunking b 1. **Create and Configure a Vapi Assistant** - - Create an assistant following the steps at [https://docs.vapi.ai/quickstart/dashboard#create-an-assistant](https://docs.vapi.ai/quickstart/dashboard#create-an-assistant) + - Create an assistant following the steps in our [Phone Quickstart](/quickstart/phone#create-your-first-voice-assistant) - In the assistant settings, link it to the phone number you created Now when someone calls your Twilio number, the call will be routed to your Vapi assistant. 
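For readers who prefer JavaScript over curl, the outbound-call request shown in the curl snippet above can also be issued from Node. Note the request body is elided in the snippet, so the payload field names (`assistantId`, `phoneNumberId`, `customer.number`) are assumptions based on Vapi's call API reference, and all IDs, numbers, and the key are placeholders:

```javascript
// Sketch of the outbound-call request from the curl example above.
// Field names are assumed from Vapi's call API reference; the IDs,
// phone number, and API key below are placeholders, not real values.
const payload = {
  assistantId: "your-assistant-id",
  phoneNumberId: "your-phone-number-id",
  customer: { number: "+14155551234" },
};

async function startOutboundCall(apiKey) {
  const res = await fetch("https://api.vapi.ai/call/phone", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(payload),
  });
  return res.json();
}
```

Nothing here runs at import time; call `startOutboundCall` with a real key once your SIP trunk and phone number are configured.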
diff --git a/fern/assets/styles.css b/fern/assets/styles.css index 036a2d18..9cf81a30 100644 --- a/fern/assets/styles.css +++ b/fern/assets/styles.css @@ -7,6 +7,43 @@ font-weight: 500; } +/* Badge/Pill Styles */ +.vapi-badge { + display: inline-block; + padding: 2px 8px; + border-radius: 12px; + font-size: 0.6875rem; + font-weight: 500; + text-transform: uppercase; + letter-spacing: 0.3px; + margin-bottom: 6px; +} + +.vapi-badge-assistant { + background-color: #E6F7F4; + color: #0F766E; + border: 1px solid #A7F3D0; +} + +.vapi-badge-workflow { + background-color: #EEF2FF; + color: #4338CA; + border: 1px solid #C7D2FE; +} + +/* Dark mode adjustments */ +:is(.dark) .vapi-badge-assistant { + background-color: #134E4A; + color: #99F6E4; + border: 1px solid #14B8A6; +} + +:is(.dark) .vapi-badge-workflow { + background-color: #312E81; + color: #C7D2FE; + border: 1px solid #6366F1; +} + /* for a grid of videos */ .video-grid { diff --git a/fern/assistants/assistant-hooks.mdx b/fern/assistants/assistant-hooks.mdx index f7b58e7d..5d5c4bdf 100644 --- a/fern/assistants/assistant-hooks.mdx +++ b/fern/assistants/assistant-hooks.mdx @@ -25,8 +25,8 @@ Hooks are defined in the `hooks` array of your assistant configuration. Each hoo - `filters`: (Optional) Conditions that must be met for the hook to trigger -The `call.endedReason` filter can be set to any of the [call ended reasons](https://docs.vapi.ai/api-reference/calls/get#response.body.endedReason). -The transfer destination type follows the [transfer call tool destinations](https://docs.vapi.ai/api-reference/tools/create#request.body.transferCall.destinations) schema. +The `call.endedReason` filter can be set to any of the [call ended reasons](/api-reference/calls/get#response.body.endedReason). +The transfer destination type follows the [transfer call tool destinations](/api-reference/tools/create#request.body.transferCall.destinations) schema. 
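As a concrete illustration of the hook shape described in assistant-hooks.mdx above, here is a minimal sketch. It assumes `on` and `do` fields alongside the documented `filters`; the exact event name, action type, and destination shape are illustrative and should be checked against the linked API reference:

```javascript
// Minimal hook sketch: transfer the call when it ends with a pipeline error.
// "on", "do", and the destination shape are assumptions for illustration;
// "filters" and the "call.endedReason" key come from the description above.
const assistantConfig = {
  hooks: [
    {
      on: "call.ending",
      filters: [
        { type: "oneOf", key: "call.endedReason", oneOf: ["pipeline-error"] },
      ],
      do: [
        {
          type: "transfer",
          destination: { type: "number", number: "+14155551234" },
        },
      ],
    },
  ],
};
```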
## Example: Transfer on pipeline error @@ -256,4 +256,4 @@ Add this hook configuration to your assistant to trigger Slack notifications on Replace `` with your actual Slack webhook URL and `` with your serverless function endpoint. - \ No newline at end of file + diff --git a/fern/assistants/call-recording.mdx b/fern/assistants/call-recording.mdx index c617c09d..897ff87f 100644 --- a/fern/assistants/call-recording.mdx +++ b/fern/assistants/call-recording.mdx @@ -6,42 +6,45 @@ slug: call-recording The Call Recording feature allows you to capture and store full recordings of phone calls for analysis. By default, Vapi stores a complete recording of every call, providing both mono and stereo audio. The stereo option separates human and assistant audio into two distinct channels, offering a clearer analysis of the conversation. -You can customize this behavior in the assistant's [`assistant.artifactPlan`](https://docs.vapi.ai/api-reference/assistants/create#request.body.artifactPlan). +You can customize this behavior in the assistant's [`assistant.artifactPlan`](/api-reference/assistants/create#request.body.artifactPlan). +## Recording Formats -## Supported Formats +Vapi supports multiple recording formats to fit your storage and playback needs. -Vapi supports multiple audio formats for call recordings: -- `wav;l16`: 16-bit linear PCM WAV format, providing high-quality uncompressed audio in mono -- `mp3`: MP3 compressed audio format, offering good quality with smaller file sizes +You can specify your preferred format using the [`assistant.artifactPlan.recordingFormat`](/api-reference/assistants/create#request.body.artifactPlan.recordingFormat) property. If not specified, recordings will default to `wav;l16`. -You can specify your preferred format using the [`assistant.artifactPlan.recordingFormat`](https://docs.vapi.ai/api-reference/assistants/create#request.body.artifactPlan.recordingFormat) property. If not specified, recordings will default to `wav;l16`. 
+**Supported formats:** +- `wav;l16` (default) - High quality linear PCM +- `mp3` - Compressed format for smaller file sizes +- `flac` - Lossless compression for archival - -At this time, you can only specify one format. - +## Storage Options -## Custom Storage bucket +Vapi supports uploading recordings to your own storage buckets. See [Integrations -> Cloud](/providers/cloud/s3) for more information on available storage options. -Vapi supports uploading recordings to your own storage buckets. See [Integrations -> Cloud](https://docs.vapi.ai/providers/cloud/s3) for more information on available storage options. +**Supported cloud storage providers:** +- AWS S3 +- Google Cloud Storage +- Cloudflare R2 +- Supabase -## Upload Path +## Configuration Options -When uploading recordings to your custom storage bucket, you can specify the upload path using the `assistant.artifactPlan.recordingPath` property. If not specified, recordings will default to the root of the bucket. +### Enable/Disable Recording -Usage: -- If you want to upload the recording to a specific path, set this to the path. Example: `/my-assistant-recordings`. -- If you want to upload the recording to the root of the bucket, set this to `/`. +You can turn on/off call recording by setting the [`assistant.artifactPlan.recordingEnabled`](/api-reference/assistants/create#request.body.artifactPlan.recordingEnabled) property to `true` or `false`. If not specified, recordings will default to `true`. -## Turn On/Off Call Recording +**HIPAA Compliance:** If [HIPAA](/security-and-privacy/hipaa) mode is enabled, Vapi will only store recordings if you have defined a custom storage bucket. Make sure to set credentials in the Provider Credentials section of your dashboard. -You can turn on/off call recording by setting the [`assistant.artifactPlan.recordingEnabled`](https://docs.vapi.ai/api-reference/assistants/create#request.body.artifactPlan.recordingEnabled) property to `true` or `false`. 
If not specified, recordings will default to `true`. +### Video Recording - -If [HIPAA](https://docs.vapi.ai/security-and-privacy/hipaa) mode is enabled, Vapi will only store recordings if you have defined a custom storage bucket. Make sure to set credentials in the Provider Credentials page in the Dashboard. - +You can turn on/off video recording by setting the [`assistant.artifactPlan.videoRecordingEnabled`](/api-reference/assistants/create#request.body.artifactPlan.videoRecordingEnabled) property to `true` or `false`. If not specified, video recording will default to `false`. Video recording is available for web calls only. -## Turn On/Off Video Recording (only for webCall) +## Upload Path -You can turn on/off video recording by setting the [`assistant.artifactPlan.videoRecordingEnabled`](https://docs.vapi.ai/api-reference/assistants/create#request.body.artifactPlan.videoRecordingEnabled) property to `true` or `false`. If not specified, video recording will default to `false`. +When uploading recordings to your custom storage bucket, you can specify the upload path using the `assistant.artifactPlan.recordingPath` property. If not specified, recordings will default to the root of the bucket. +Usage: +- If you want to upload the recording to a specific path, set this to the path. Example: `/my-assistant-recordings`. +- If you want to upload the recording to the root of the bucket, set this to `/`. diff --git a/fern/call-forwarding.mdx index 7403a69e..79d39d79 100644 --- a/fern/call-forwarding.mdx +++ b/fern/call-forwarding.mdx @@ -454,4 +454,4 @@ Here is a full example of a `transferCall` payload using the experimental warm t **Notes:** - In all warm transfer modes, the `{{transcript}}` variable contains the full transcript of the call and can be used within the `summaryPlan`.
-- For more details about transfer plans and configuration options, please refer to the [transferCall API documentation](https://docs.vapi.ai/api-reference/tools/create#request.body.transferCall.destinations.number.transferPlan) +- For more details about transfer plans and configuration options, please refer to the [transferCall API documentation](/api-reference/tools/create#request.body.transferCall.destinations.number.transferPlan) diff --git a/fern/calls/voicemail-detection.mdx b/fern/calls/voicemail-detection.mdx index a9324e14..f120e761 100644 --- a/fern/calls/voicemail-detection.mdx +++ b/fern/calls/voicemail-detection.mdx @@ -62,10 +62,10 @@ For each detection method, you can fine-tune the following parameters: | Parameter | Description | | :-------- | :---------- | -| **Initial Detection Delay** | How long to wait (in seconds) before starting voicemail detection | -| **Detection Retry Interval** | How frequently to check for voicemail (in seconds) | -| **Max Detection Retries** | Maximum number of detection attempts before stopping | -| **Max Voicemail Message Wait** | Maximum time to wait before leaving a message (even without beep detection) | +| **Initial Detection Delay** | How long to wait (in seconds) before starting voicemail detection. | +| **Detection Retry Interval** | How frequently to check for voicemail after the initial delay. | +| **Max Detection Retries** | Maximum number of detection attempts before stopping. | +| **Max Voicemail Message Wait** | Maximum time to wait before leaving a voicemail if no beep is detected. 
| These settings allow you to balance: - **Speed** (how quickly voicemail is detected) diff --git a/fern/docs.yml b/fern/docs.yml index 491b1227..471a371d 100644 --- a/fern/docs.yml +++ b/fern/docs.yml @@ -95,10 +95,13 @@ navigation: layout: - section: Get started contents: - - page: Quickstart - icon: fa-light fa-rocket - path: quickstart/dashboard.mdx - - page: Make a web call + - page: Introduction + icon: fa-light fa-info-circle + path: quickstart/introduction.mdx + - page: Phone calls + icon: fa-solid fa-phone + path: quickstart/phone.mdx + - page: Web integration icon: fa-light fa-browser path: quickstart/web.mdx - section: How Vapi works @@ -123,7 +126,7 @@ navigation: path: examples/docs-agent.mdx icon: fa-light fa-microphone - - section: Assistant customization + - section: Assistants contents: - section: Conversation behavior icon: fa-light fa-comments @@ -690,9 +693,11 @@ redirects: - source: "api-reference/calls/create-call" destination: "https://api.vapi.ai/api#/Calls/CallController_create" - source: "/getting_started" - destination: "/quickstart/dashboard" + destination: "/quickstart/phone" - source: "/dashboard" - destination: "/quickstart/dashboard" + destination: "/quickstart/phone" + - source: "/quickstart/dashboard" + destination: "/quickstart/phone" - source: "/provider_keys" destination: "/assistants/provider-keys" - source: "/provider-keys" @@ -746,7 +751,7 @@ redirects: - source: "/outbound_call_python" destination: "/examples/outbound-call-python" - source: "/voice_widget" - destination: "/voice-widget" + destination: "/examples/voice-widget" - source: "/clients" destination: "/sdks" - source: "/error_message_guide" @@ -782,9 +787,9 @@ redirects: - source: /phone-calling/voicemail-detection destination: /calls/voicemail-detection - source: /quickstart/phone/inbound - destination: /quickstart/dashboard + destination: /quickstart/phone - source: /quickstart/phone/outbound - destination: /quickstart/dashboard + destination: /quickstart/phone - 
source: /introduction destination: /quickstart - source: /welcome destination: /quickstart @@ -797,9 +802,9 @@ redirects: destination: /quickstart - source: /assistants destination: /api-reference/assistants/create - - source: /examples/voice-widget - destination: /sdk/web - source: /workflows/examples/outbound-sales destination: /workflows/examples/lead-qualification - source: /workflows destination: /workflows/quickstart + - source: /web-integration + destination: /quickstart/web diff --git a/fern/examples/docs-agent.mdx index 34685c29..26e54cb3 100644 --- a/fern/examples/docs-agent.mdx +++ b/fern/examples/docs-agent.mdx @@ -219,7 +219,7 @@ You'll learn to: - Vapi automatically analyzes every call. The assistant above includes an [`analysisPlan`](https://docs.vapi.ai/api-reference/assistants/create#request.body.analysisPlan) with summary and success evaluation configured. + Vapi automatically analyzes every call. The assistant above includes an [`analysisPlan`](/api-reference/assistants/create#request.body.analysisPlan) with summary and success evaluation configured. Configure additional analysis options in your assistant: - **Summary plan**: Custom prompts for call summaries @@ -227,14 +227,14 @@ You'll learn to: - **Success evaluation plan**: Score calls with custom rubrics - **Structured data multi plan**: Multiple extraction schemas - Retrieve analysis results using the [Get Call API](https://docs.vapi.ai/api-reference/calls/get#response.body.analysis): + Retrieve analysis results using the [Get Call API](/api-reference/calls/get#response.body.analysis): ```bash - curl https://api.vapi.ai/call/CALL_ID \ + curl https://api.vapi.ai/call/{CALL_ID} \ -H "Authorization: Bearer YOUR_VAPI_API_KEY" ``` - The response includes `call.analysis` with your configured analysis results. Learn more about [call analysis configuration](https://docs.vapi.ai/assistants/call-analysis). + The response includes `call.analysis` with your configured analysis results.
Learn more about [call analysis configuration](/assistants/call-analysis). **Iterative improvements:** - Review analysis summaries to identify common user questions diff --git a/fern/knowledge-base/integrating-with-trieve.mdx index a1262d33..2358dbea 100644 --- a/fern/knowledge-base/integrating-with-trieve.mdx +++ b/fern/knowledge-base/integrating-with-trieve.mdx @@ -268,7 +268,7 @@ Use Trieve's search playground to: 1. Create your Trieve API key from [Trieve's dashboard](https://dashboard.trieve.ai/org/keys) 2. Add your Trieve API key to Vapi [Provider Credentials](https://dashboard.vapi.ai/keys) ![Add Trieve API key in Vapi](../static/images/knowledge-base/trieve-credential.png) -3. Once your dataset is optimized in Trieve, import it to Vapi via POST request to the [create knowledge base route](https://docs.vapi.ai/api-reference/knowledge-bases/create): +3. Once your dataset is optimized in Trieve, import it to Vapi via POST request to the [create knowledge base route](/api-reference/knowledge-bases/create): ```json { diff --git a/fern/overview.mdx index 9c3297f0..34dc9da4 100644 --- a/fern/overview.mdx +++ b/fern/overview.mdx @@ -37,14 +37,14 @@ Each layer is highly customizable and we support dozens of models across STT, LL ## Quickstart Guides + title="Phone Calls" + href="/quickstart/phone"> The easiest way to start with Vapi. Build a voice agent in 5 minutes. - Quickly get started making web calls. + title="Web Integration" + href="/quickstart/web"> + Integrate voice calls into your web application. diff --git a/fern/pricing.mdx index d71695dc..fb1b5d2b 100644 --- a/fern/pricing.mdx +++ b/fern/pricing.mdx @@ -35,7 +35,7 @@ slug: pricing ### Starter Credits -Every new account is granted **$10 in free credits** to begin testing voice workflows. You can [begin using Vapi](/quickstart/dashboard) without a credit card.
+Every new account is granted **$10 in free credits** to begin testing voice workflows. You can [begin using Vapi](/quickstart/phone) without a credit card. --- diff --git a/fern/providers/observability/langfuse.mdx b/fern/providers/observability/langfuse.mdx index 2e7c6faf..82979ac6 100644 --- a/fern/providers/observability/langfuse.mdx +++ b/fern/providers/observability/langfuse.mdx @@ -92,7 +92,7 @@ You can enhance your observability in Langfuse by adding metadata and tags: **Metadata** -Use the [`assistant.observabilityPlan.metadata`](https://docs.vapi.ai/api-reference/assistants/create#request.body.observabilityPlan.metadata) field to attach custom key-value pairs. +Use the [`assistant.observabilityPlan.metadata`](/api-reference/assistants/create#request.body.observabilityPlan.metadata) field to attach custom key-value pairs. Examples: - Track experiment versions ("experiment": "v2.1") @@ -101,7 +101,7 @@ Examples: **Tags** -Use the [`assistant.observabilityPlan.tags`](https://docs.vapi.ai/api-reference/assistants/create#request.body.observabilityPlan.tags) field to add searchable labels. +Use the [`assistant.observabilityPlan.tags`](/api-reference/assistants/create#request.body.observabilityPlan.tags) field to add searchable labels. Examples: - Mark important runs ("priority") @@ -111,4 +111,4 @@ Examples: Adding metadata and tags makes it easier to filter, analyze, and monitor your assistants activity in Langfuse. ### Example -![Langfuse Metadata Example](../../static/images/providers/langfuse-example.png) \ No newline at end of file +![Langfuse Metadata Example](../../static/images/providers/langfuse-example.png) diff --git a/fern/quickstart/dashboard.mdx b/fern/quickstart/dashboard.mdx deleted file mode 100644 index d705b83d..00000000 --- a/fern/quickstart/dashboard.mdx +++ /dev/null @@ -1,141 +0,0 @@ ---- -title: Dashboard -subtitle: Learn to build a voice agent in under 5 minutes. 
-slug: quickstart/dashboard ---- - -## Overview - -Vapi makes it easy to build end-to-end voice agents, which we call ***assistants***. Assistants support live, two-way conversations. You can call an assistant or have it call you. - -**Each assistant has three components:** Speech-to-text (STT), Language model (LLM) and Text-to-speech (TTS). Vapi gives you full control over each, with dozens of providers and models to choose from. - -**In this quickstart, you'll learn to:** -- Create an assistant in the Vapi dashboard -- Pick your STT, LLM, and TTS providers -- Attach a phone number -- Make your first call over the web or by phone - -## Prerequisites - -- [A Vapi account](https://dashboard.vapi.ai) - -## Get started by creating an assistant - - - - Go to [dashboard.vapi.ai](https://dashboard.vapi.ai) and log in to your account. - - - - In the Vapi dashboard, create a new assistant using the customer support specialist template. - - - - - - - - - - Set the first message that the assistant will speak when a conversation is started with it. - - ```plaintext First message - Hi there, this is Alex from TechSolutions customer support. How can I help you today? - ``` - - - - Set the system prompt, which sets the context, role, personality and instructions that will guide your assistant. - - ```plaintext System Prompt - You are Alex, a customer service voice assistant for TechSolutions. Your primary purpose is to help customers resolve issues with their products, answer questions about services, and ensure a satisfying support experience. - - Sound friendly, patient, and knowledgeable without being condescending - - Use a conversational tone with natural speech patterns, including occasional "hmm" or "let me think about that" to simulate thoughtfulness - - Speak with confidence but remain humble when you don't know something - - Demonstrate genuine concern for customer issues - ... 
- ``` - - - - - - - - Select your preferred provider and the large language model (LLM) that will power your assistant (default gpt-4o). - - - - - - - - In the transcriber tab, choose the transcriber (speech-to-text, STT) model that will convert your callers' speech into text for your assistant to process. - - - - - - - Vapi was made for model selection to be configurable - try playing around with different models! - - - - - In the voice tab, select the voice (TTS) model that will determine how your assistant sounds to callers. - - - - - - Just like with the transcriber, you can plug in any provider! - - - - - -## Make an inbound call (call your assistant) - - - Try calling your assistant by clicking the call button in the dashboard. - - - - - - - - In the Phone Numbers tab, you can create a free new number or import an existing number from another telephony provider (Twilio, Vonage, Telnyx, etc.). - - - - - - Select your assistant in the inbound settings for your phone number. Whenever this number is called, your assistant will pick up and have a conversation with them! - - - - - - - - -## Make an outbound call (assistant calls you) - - - 1. Fill out your own phone number as the number to dial. - 2. Set the assistant that will be making the call - - - - - - - When you click on the outbound call button, your assistant will make an outbound call to the phone number. - - Your assistant won't yet be able to hang-up the phone at the end of the call. - You will learn more about configuring call end behavior in later guides. - - - diff --git a/fern/quickstart/introduction.mdx b/fern/quickstart/introduction.mdx new file mode 100644 index 00000000..94ade849 --- /dev/null +++ b/fern/quickstart/introduction.mdx @@ -0,0 +1,186 @@ +--- +title: Introduction +subtitle: Build voice AI agents that can make and receive phone calls +slug: quickstart/introduction +--- + +## What is Vapi? + +Vapi is the developer platform for building voice AI agents. 
We handle the complex infrastructure so you can focus on creating great voice experiences. + +**Voice agents** allow you to: +- Have natural conversations with users +- Make and receive phone calls +- Integrate with your existing systems and APIs +- Handle complex workflows like appointment scheduling, customer support, and more + +## How voice agents work + +Every Vapi assistant combines three core technologies: + + + + Converts user speech into text that your agent can understand + + + Processes the conversation and generates intelligent responses + + + Converts your agent's responses back into natural speech + + + +You have full control over each component, with dozens of providers and models to choose from: OpenAI, Anthropic, Google, Deepgram, ElevenLabs, and many more. + +## Two ways to build voice agents + +Vapi offers two main primitives for building voice agents, each designed for different use cases: + + + + **Best for:** Quick setup and straightforward conversations + + Assistants use a single system prompt to control behavior. Perfect for: + - Customer support chatbots + - Simple question-answering agents + - Getting started quickly with minimal setup + + *Control everything from one place with natural language instructions.* + + + **Best for:** Complex logic and multi-step processes + + Workflows use visual decision trees and conditional logic.
Perfect for: + - Appointment scheduling with availability checks + - Lead qualification with branching questions + - Complex customer service flows with escalation paths + + *Build sophisticated branching logic without code.* + + + +## Key capabilities + +- **Real-time conversations:** Sub-600ms response times with natural turn-taking +- **Phone integration:** Make and receive calls on any phone number +- **Web integration:** Embed voice calls directly in your applications +- **Tool integration:** Connect to your APIs, databases, and existing systems +- **Custom workflows:** Build complex multi-step processes with decision trees + +## Choose your path + + + + **Start here if you want to:** + - Create a voice agent that can make and receive phone calls + - Build customer support or sales automation + - Get started with no coding required + + *Build your first voice agent in 5 minutes using our dashboard.* + + + **Start here if you want to:** + - Add voice capabilities to your web application + - Integrate voice chat into your existing product + - Build with code and SDKs + + *Embed live voice conversations directly in your app.* + + + +## Popular use cases + + + +
Built with Assistants
+ + Automate inbound support calls with agents that can access your knowledge base and escalate to humans when needed. + + *View example →* +
+ +
Built with Workflows
+ + Make outbound sales calls, qualify leads, and schedule appointments with sophisticated branching logic. + + *View example →* +
+ +
Built with Workflows
+ + Handle booking requests, check availability, and confirm appointments with conditional routing. + + *View example →* +
+
+ + + +
Built with Workflows
+ + Emergency routing and appointment scheduling for healthcare. +
+ +
Built with Workflows
+ + Order tracking, returns, and customer support workflows. +
+
+ +## Ready to get started? + +Most users start with **phone calls** since it's the easiest way to see Vapi in action. You can create and test a working voice agent in under 5 minutes without writing any code. + + + Create your first voice agent and make your first call + diff --git a/fern/quickstart/phone.mdx b/fern/quickstart/phone.mdx new file mode 100644 index 00000000..8fcb203d --- /dev/null +++ b/fern/quickstart/phone.mdx @@ -0,0 +1,118 @@ +--- +title: Phone calls +subtitle: Learn to make your first phone call with a voice agent +slug: quickstart/phone +--- + +## Overview + +Vapi makes it easy to build voice agents that can make and receive phone calls. In under 5 minutes, you'll create a voice assistant and start talking to it over the phone. + +**In this quickstart, you'll learn to:** +- Create an assistant using the dashboard +- Set up a phone number +- Make your first inbound and outbound calls + +## Prerequisites + +- [A Vapi account](https://dashboard.vapi.ai) + +## Create your first voice assistant + + + + Go to [dashboard.vapi.ai](https://dashboard.vapi.ai) and log in to your account. + + + + In the dashboard, create a new assistant using the customer support specialist template. + + + + + + + + Set the first message and system prompt for your assistant: + + **First message:** + ```plaintext + Hi there, this is Alex from TechSolutions customer support. How can I help you today? + ``` + + **System prompt:** + ```plaintext + You are Alex, a customer service voice assistant for TechSolutions. Your primary purpose is to help customers resolve issues with their products, answer questions about services, and ensure a satisfying support experience. 
- Sound friendly, patient, and knowledgeable without being condescending + - Use a conversational tone with natural speech patterns + - Speak with confidence but remain humble when you don't know something + - Demonstrate genuine concern for customer issues + ``` + + + +## Set up a phone number + + + + In the Phone Numbers tab, create a free US phone number or import an existing number from another provider. + + + + + + + Free Vapi phone numbers are only available for US national use. For international calls, you'll need to import a number from Twilio or another provider. + + + + + Select your assistant in the inbound settings for your phone number. When this number is called, your assistant will automatically answer. + + + + + + + +## Make your first calls + + + + Call the phone number you just created. Your assistant will pick up and start the conversation with your configured first message. + + + + In the dashboard, go to the outbound calls section: + 1. Enter your own phone number as the target + 2. Select your assistant + 3. Click "Make Call" + + + + + + Your assistant will call you immediately. + + + + You can also test your assistant directly in the dashboard by clicking the call button—no phone number required. + + + + + + + +## Next steps + +Now that you have a working voice assistant: + +- **Customize the conversation:** Update the system prompt to match your use case +- **Add tools:** Connect your assistant to external APIs and databases +- **Configure models:** Try different speech and language models for better performance +- **Scale with APIs:** Use Vapi's REST API to create assistants programmatically + + +Ready to integrate voice into your application? Check out the [Web integration guide](/quickstart/web) to embed voice calls directly in your app.
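The "Scale with APIs" step in the list above can be sketched as a single request. This is a minimal sketch, not a verbatim reference: the endpoint and payload field names are assumptions drawn from the assistants create API reference linked elsewhere in these docs, and the API key is a placeholder. Only the first message is taken from the dashboard example above.

```javascript
// Sketch: create the same support assistant programmatically.
// Endpoint and field names are assumed from the assistants/create
// API reference; verify against the reference before relying on them.
const assistantPayload = {
  name: "Customer Support Assistant",
  firstMessage:
    "Hi there, this is Alex from TechSolutions customer support. How can I help you today?",
  model: {
    provider: "openai",
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "You are Alex, a customer service voice assistant for TechSolutions.",
      },
    ],
  },
};

async function createAssistant(apiKey) {
  const res = await fetch("https://api.vapi.ai/assistant", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(assistantPayload),
  });
  return res.json();
}
```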
+ diff --git a/fern/quickstart/web.mdx b/fern/quickstart/web.mdx index 64f4a16a..aa20ddba 100644 --- a/fern/quickstart/web.mdx +++ b/fern/quickstart/web.mdx @@ -1,20 +1,24 @@ --- -title: Web call -subtitle: Make a web call to your assistant from a browser +title: Web integration +subtitle: Integrate voice calls into your web application +slug: quickstart/web --- ## Overview -This guide shows you how to integrate live, two-way voice calls with your Vapi assistant into any web app. You can use Vapi in plain JavaScript, React, Next.js, or any other web framework. +This guide shows you how to integrate live, two-way voice conversations into any web application. Use Vapi with plain JavaScript, React, Next.js, or any other web framework to add voice capabilities directly to your app. -Get started with either the Vapi web SDK or by connecting to an assistant you created in the dashboard. +**In this guide, you'll learn to:** +- Install and configure the Vapi Web SDK +- Connect to existing assistants from your dashboard +- Handle call lifecycle events in your application -See the full next.js [demo here on v0](https://v0.dev/chat/vapi-quickstart-nextjs-z3lv02T7Dd5). To try it live and make edits, follow these steps: +See the full Next.js [demo here on v0](https://v0.dev/chat/vapi-quickstart-nextjs-z3lv02T7Dd5). To try it live and make edits, follow these steps: 1. Fork the app in v0 -2. Go to settings --> environment variables +2. Go to settings → environment variables 3. Create a new environment variable called `NEXT_PUBLIC_VAPI_API_KEY` -4. Add your [public api key from the dashboard](https://dashboard.vapi.ai/org/api-keys)! +4. Add your [public API key from the dashboard](https://dashboard.vapi.ai/org/api-keys) ## Installation @@ -26,18 +30,58 @@ Get started with either the Vapi web SDK or by connecting to an assistant you cr -## Get started +## Integration approaches - + - - Create an assistant object. 
+ + First, create and configure an assistant in the [Vapi dashboard](https://dashboard.vapi.ai). Follow the [Phone calls quickstart](/quickstart/phone) to set up your assistant. + + + Copy your assistant's ID from the dashboard: + + + Assistant ID in dashboard + + + + ```javascript + // Start a call with your pre-configured assistant + vapi.start("YOUR_ASSISTANT_ID_FROM_THE_DASHBOARD"); + ``` + + + Customize settings or pass template variables at runtime: + + ```javascript + const assistantOverrides = { + transcriber: { + provider: "deepgram", + model: "nova-2", + language: "en-US", + }, + recordingEnabled: false, + variableValues: { + customerName: "John", + accountType: "premium" + }, + }; + + vapi.start("YOUR_ASSISTANT_ID", assistantOverrides); + ``` + + + + + + + Create an assistant configuration directly in your code: ```javascript const assistantOptions = { - name: "Vapi's Pizza Front Desk", - firstMessage: "Vapi's Pizzeria speaking, how can I help you?", + name: "Customer Support Assistant", + firstMessage: "Hi! How can I help you today?", transcriber: { provider: "deepgram", model: "nova-2", @@ -53,64 +97,104 @@ Get started with either the Vapi web SDK or by connecting to an assistant you cr messages: [ { role: "system", - content: `You are a voice assistant for Vappy's Pizzeria, a pizza shop located on the Internet.\n\nYour job is to take the order of customers calling in. The menu has only 3 types of items: pizza, sides, and drinks. There are no other types of items on the menu.\n\n1) There are 3 kinds of pizza: cheese pizza, pepperoni pizza, and vegetarian pizza (often called \"veggie\" pizza).\n2) There are 3 kinds of sides: french fries, garlic bread, and chicken wings.\n3) There are 2 kinds of drinks: soda, and water. (if a customer asks for a brand name like \"coca cola\", just let them know that we only offer \"soda\")\n\nCustomers can only order 1 of each item. 
If a customer tries to order more than 1 item within each category, politely inform them that only 1 item per category may be ordered.\n\nCustomers must order 1 item from at least 1 category to have a complete order. They can order just a pizza, or just a side, or just a drink.\n\nBe sure to introduce the menu items, don't assume that the caller knows what is on the menu (most appropriate at the start of the conversation).\n\nIf the customer goes off-topic or off-track and talks about anything but the process of ordering, politely steer the conversation back to collecting their order.\n\nOnce you have all the information you need pertaining to their order, you can end the conversation. You can say something like \"Awesome, we'll have that ready for you in 10-20 minutes.\" to naturally let the customer know the order has been fully communicated.\n\nIt is important that you collect the order in an efficient manner (succinct replies & direct questions). You only have 1 task here, and it is to collect the customers order, then end the conversation.\n\n- Be sure to be kind of funny and witty!\n- Keep all your responses short and simple. Use casual language, phrases like \"Umm...\", \"Well...\", and \"I mean\" are preferred.\n- This is a voice conversation, so keep your responses short, like in a real conversation. Don't ramble for too long.`, + content: `You are a helpful customer support assistant. Keep responses brief and friendly since this is a voice conversation.`, }, ], }, }; ``` - - **Parameters:** - - `name` sets the display name for the assistant (internal use) - - `firstMessage` is the first message the assistant says - - `transcriber` selects the speech-to-text provider and model - - `voice` selects the text-to-speech provider and voice - - `model` sets the LLM provider, model, and system prompt - - Start a call using your assistant configuration. 
- + ```javascript vapi.start(assistantOptions); ``` - - - - To create an assistant in the dashboard, follow the step-by-step guide in the [Dashboard Quickstart](https://docs.vapi.ai/quickstart/dashboard#get-started). - - - Once you have your assistant's ID, you can start a call with it: - - - Assistant ID in dashboard - - - ```javascript - vapi.start("YOUR_ASSISTANT_ID_FROM_THE_DASHBOARD"); - ``` - - - To override assistant settings or set template variables, pass an `assistantOverrides` object as the second argument. - - ```javascript - const assistantOverrides = { - transcriber: { - provider: "deepgram", - model: "nova-2", - language: "en-US", - }, - recordingEnabled: false, - variableValues: { - name: "John", - }, - }; - - vapi.start("YOUR_ASSISTANT_ID_FROM_THE_DASHBOARD", assistantOverrides); - ``` - - - + +## Handle call events + +Listen to call lifecycle events to update your UI and handle user interactions: + +```javascript +// Call started +vapi.on('call-start', () => { + console.log('Call has started'); + // Update UI to show call is active +}); + +// Call ended +vapi.on('call-end', () => { + console.log('Call has ended'); + // Reset UI state +}); + +// Message received from assistant (transcripts, tool calls, etc.) +vapi.on('message', (message) => { + // Transcript messages carry the spoken text in `message.transcript` + if (message.type === 'transcript') { + console.log(`${message.role}: ${message.transcript}`); + // Display the transcript in your UI + } +}); + +// Speech recognition results +vapi.on('speech-start', () => { + console.log('User started speaking'); +}); + +vapi.on('speech-end', () => { + console.log('User stopped speaking'); +}); +``` + +## Common integration patterns + +### Voice button component +Add a simple "Talk to Assistant" button: + +```javascript +const VoiceButton = () => { + const [isCallActive, setIsCallActive] = useState(false); + + const toggleCall = () => { + if (isCallActive) { + vapi.stop(); + } else { + vapi.start("your-assistant-id"); + } + }; + + useEffect(() => { + vapi.on('call-start', () => setIsCallActive(true)); + vapi.on('call-end', () =>
setIsCallActive(false)); + }, []); + + return ( + <button onClick={toggleCall}> + {isCallActive ? 'End Call' : 'Talk to Assistant'} + </button> + ); +}; +``` + +### Context-aware conversations +Pass dynamic data to your assistant based on the current page or user state: + +```javascript +const startContextualCall = (userContext) => { + const assistantOverrides = { + variableValues: { + userName: userContext.name, + currentPage: window.location.pathname, + userPreferences: userContext.preferences + } + }; + + vapi.start("your-assistant-id", assistantOverrides); +}; +``` + +## Next steps + +- **Phone integration:** Enable phone calling with the [Phone calls guide](/quickstart/phone) +- **Custom tools:** Connect your assistant to your APIs with [Custom tools](/tools/custom-tools) +- **Advanced features:** Explore [Variables](/assistants/dynamic-variables) and [Hooks](/assistants/assistant-hooks) diff --git a/fern/sdk/web.mdx b/fern/sdk/web.mdx index dad01b9b..a5ff9b70 100644 --- a/fern/sdk/web.mdx +++ b/fern/sdk/web.mdx @@ -36,7 +36,7 @@ const call = await vapi.start(assistantId); #### Passing an Assistant ID -If you already have an assistant that you created (either via [the Dashboard](/quickstart/dashboard) or [the API](/api-reference/assistants/create-assistant)), you can start the call with the assistant's ID: +If you already have an assistant that you created (either via [the Dashboard](/quickstart/phone) or [the API](/api-reference/assistants/create-assistant)), you can start the call with the assistant's ID: ```javascript vapi.start("79f3XXXX-XXXX-XXXX-XXXX-XXXXXXXXce48"); diff --git a/fern/snippets/quickstart/dashboard/assistant-setup-inbound.mdx b/fern/snippets/quickstart/dashboard/assistant-setup-inbound.mdx index 9ba48581..5dd5cf2c 100644 --- a/fern/snippets/quickstart/dashboard/assistant-setup-inbound.mdx +++ b/fern/snippets/quickstart/dashboard/assistant-setup-inbound.mdx @@ -8,7 +8,7 @@ Sign-up for an account (or log-in to your existing account) — you will then find yourself inside the web dashboard.
It will look something like this: - + @@ -30,7 +30,7 @@ - You will then be able to name your assistant — you can name it whatever you'd like (`Vapi’s Pizza Front Desk`, for example): + You will then be able to name your assistant — you can name it whatever you'd like (`Vapi's Pizza Front Desk`, for example): This name is only for internal labeling use. It is not an identifier, nor will the assistant be @@ -43,26 +43,26 @@ Once you have named your assistant, you can hit "Create" to create it. You will then see something like this: - + - This is the assistant overview view — it gives you the ability to edit different attributes about your assistant, as well as see **cost** & **latency** projection information for each portion of it’s voice pipeline (this is very important data to have handy when building out your assistants). + This is the assistant overview view — it gives you the ability to edit different attributes about your assistant, as well as see **cost** & **latency** projection information for each portion of its voice pipeline (this is very important data to have handy when building out your assistants). - Now we’re going to set the "brains" of the assistant, the large language model. We're going to be using `GPT-4` (from [OpenAI](https://openai.com/)) for this demo (though you're free to use `GPT-3.5`, or any one of your favorite LLMs). + Now we're going to set the "brains" of the assistant, the large language model. We're going to be using `GPT-4` (from [OpenAI](https://openai.com/)) for this demo (though you're free to use `GPT-3.5`, or any one of your favorite LLMs). - Before we proceed, we can set our [provider key](https://docs.vapi.ai/customization/provider-keys) for OpenAI (this is just your OpenAI secret key). + Before we proceed, we can set our [provider key](/customization/provider-keys) for OpenAI (this is just your OpenAI secret key). You can see all of your provider keys in the "Provider Keys" dashboard tab.
You can also go directly to [dashboard.vapi.ai/keys](https://dashboard.vapi.ai/keys). - Vapi uses [provider keys](https://docs.vapi.ai/customization/provider-keys) you provide to communicate with LLM, TTS, & STT vendors on your behalf. It is most ideal that we set keys for the vendors we intend to use ahead of time. + Vapi uses [provider keys](/customization/provider-keys) you provide to communicate with LLM, TTS, & STT vendors on your behalf. It's best to set keys for the vendors you intend to use ahead of time. @@ -85,7 +85,7 @@ For our use case, we will want a first message. It would be ideal for us to have a first message like this: ```text - Vappy’s Pizzeria speaking, how can I help you? + Vappy's Pizzeria speaking, how can I help you? ``` @@ -105,7 +105,7 @@ The system prompt can be used to configure the context, role, personality, instructions and so on for the assistant. In our case, a system prompt like this will give us the behavior we want: ```text - You are a voice assistant for Vappy’s Pizzeria, + You are a voice assistant for Vappy's Pizzeria, a pizza shop located on the Internet. Your job is to take the order of customers calling in. The menu has only 3 types diff --git a/fern/snippets/quickstart/platform-specific/no-code-prerequisites.mdx b/fern/snippets/quickstart/platform-specific/no-code-prerequisites.mdx index caaa2a5d..844b19ec 100644 --- a/fern/snippets/quickstart/platform-specific/no-code-prerequisites.mdx +++ b/fern/snippets/quickstart/platform-specific/no-code-prerequisites.mdx @@ -5,24 +5,16 @@ They may be helpful to go through before following this guide: - - The easiest way to start with Vapi. Run a voice agent in minutes. + + The easiest way to start with Vapi. Build a voice agent in 5 minutes. - Quickly get started handling inbound phone calls. - - - Quickly get started sending outbound phone calls. + Integrate voice calls into your web application.
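The `assistantOverrides` / `variableValues` pattern added in the web quickstart above is easy to centralize in a small helper, so every `vapi.start()` call passes a consistent shape. A minimal sketch, assuming your assistant's prompts use `{{userName}}`-style template variables; the `buildOverrides` helper and its fields are illustrative, not part of the SDK:

```javascript
// Hypothetical helper: collects app state into the assistantOverrides
// shape shown in the quickstart. Keys under `variableValues` must match
// the template variables your assistant's prompts actually reference.
function buildOverrides(user, currentPage) {
  return {
    recordingEnabled: false,
    variableValues: {
      userName: user.name,
      accountType: user.accountType || "free",
      currentPage: currentPage,
    },
  };
}

const overrides = buildOverrides({ name: "John", accountType: "premium" }, "/pricing");
console.log(overrides.variableValues.userName); // prints: John
// vapi.start("YOUR_ASSISTANT_ID", overrides);
```

Keeping the override construction in one place means that when several components start calls, they cannot drift apart on which variables they pass.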
diff --git a/fern/workflows/overview.mdx b/fern/workflows/overview.mdx index 44385ebe..ea5dd145 100644 --- a/fern/workflows/overview.mdx +++ b/fern/workflows/overview.mdx @@ -89,7 +89,7 @@ The API Request Node allows developers to make HTTP Requests to their API, custo Transfer calls to another phone number, including human agents or specialized voice agents. -Developers can specify a phone number destination and a [transfer plan](https://docs.vapi.ai/call-forwarding#call-transfers-mode), which lets them specify a message or a summary of the call to the person or agent picking up in the destination number before actually connecting the call. +Developers can specify a phone number destination and a [transfer plan](/call-forwarding#call-transfers-mode), which lets them play a message or a summary of the call to the person or agent at the destination number before actually connecting the caller. Create workflow interface
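To make the transfer-node description above concrete, here is a hedged sketch of the destination object a developer might attach to a transfer node, with a transfer plan that announces a call summary before bridging the caller. The field names and the mode string are assumptions drawn from the call-forwarding docs linked above; verify them against the current API reference before use:

```javascript
// Assumed shape of a transfer destination with a transfer plan.
// "warm-transfer-say-summary" is one mode described in the
// call-forwarding guide; confirm the exact string in the API reference.
const destination = {
  type: "number",
  number: "+14155551234", // hypothetical agent number
  message: "Transferring you to a specialist now.",
  transferPlan: {
    mode: "warm-transfer-say-summary",
  },
};

console.log(JSON.stringify(destination, null, 2));
```

The warm-transfer modes brief the receiving agent before the caller is connected, while blind-transfer modes connect immediately; pick based on whether the destination needs context first.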