diff --git a/assistants/call-analysis.mdx b/assistants/call-analysis.mdx
new file mode 100644
index 0000000..27aaae3
--- /dev/null
+++ b/assistants/call-analysis.mdx
@@ -0,0 +1,148 @@
+---
+title: "Call Analysis"
+sidebarTitle: "Call Analysis"
+description: "At the end of the call, you can summarize and evaluate how it went."
+---
+
+The Call Analysis feature allows you to summarize and evaluate calls, providing valuable insights into their effectiveness. It uses a combination of prompts and schemas to generate structured data and success evaluations based on the call's content.
+
+You can customize each of the properties below in the assistant's `analysisPlan`.
+
+## Summary Prompt
+
+The summary prompt is used to create a concise summary of the call. This summary is stored in `call.analysis.summary`.
+
+### Default Summary Prompt
+
+The default summary prompt is:
+
+```text
+You are an expert note-taker. You will be given a transcript of a call. Summarize the call in 2-3 sentences, if applicable.
+```
+
+### Customizing the Summary Prompt
+
+You can customize the summary prompt by setting the `summaryPrompt` property in the API or SDK:
+
+```json
+{
+  "summaryPrompt": "Custom summary prompt text"
+}
+```
+
+To disable the summary prompt, set it to an empty string `""` or `"off"`:
+
+```json
+{
+  "summaryPrompt": ""
+}
+```
+
+## Structured Data Prompt
+
+The structured data prompt extracts specific pieces of data from the call. This data is stored in `call.analysis.structuredData`.
+
+### Default Structured Data Prompt
+
+The default structured data prompt is:
+
+```text
+You are an expert data extractor. You will be given a transcript of a call. Extract structured data per the JSON Schema.
+```
+
+### Customizing the Structured Data Prompt
+
+You can set a custom structured data prompt using the `structuredDataPrompt` property:
+
+```json
+{
+  "structuredDataPrompt": "Custom structured data prompt text"
+}
+```
+
+## Structured Data Schema
+
+The structured data schema enforces the format of the extracted data. It is defined using JSON Schema standards.
+
+### Customizing the Structured Data Schema
+
+You can set a custom structured data schema using the `structuredDataSchema` property:
+
+```json
+{
+  "structuredDataSchema": {
+    "type": "object",
+    "properties": {
+      "field1": { "type": "string" },
+      "field2": { "type": "number" }
+    },
+    "required": ["field1", "field2"]
+  }
+}
+```
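+
+For example, a prompt and schema pair for pulling appointment details out of a call might look like the following (a sketch; the field names are illustrative, not required by the API):
+
+```json
+{
+  "structuredDataPrompt": "Extract the caller's name and the requested appointment time from the transcript.",
+  "structuredDataSchema": {
+    "type": "object",
+    "properties": {
+      "callerName": { "type": "string" },
+      "requestedTime": { "type": "string" }
+    },
+    "required": ["callerName"]
+  }
+}
+```
+
+After the call ends, the extracted values are available in `call.analysis.structuredData` in the shape the schema defines.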
+
+## Success Evaluation Prompt
+
+The success evaluation prompt is used to determine if the call was successful. This evaluation is stored in `call.analysis.successEvaluation`.
+
+### Default Success Evaluation Prompt
+
+The default success evaluation prompt is:
+
+```text
+You are an expert call evaluator. You will be given a transcript of a call and the system prompt of the AI participant. Determine if the call was successful based on the objectives inferred from the system prompt.
+```
+
+### Customizing the Success Evaluation Prompt
+
+You can set a custom success evaluation prompt using the `successEvaluationPrompt` property:
+
+```json
+{
+  "successEvaluationPrompt": "Custom success evaluation prompt text"
+}
+```
+
+To disable the success evaluation prompt, set it to an empty string `""` or `"off"`:
+
+```json
+{
+  "successEvaluationPrompt": ""
+}
+```
+
+## Success Evaluation Rubric
+
+The success evaluation rubric defines the criteria used to evaluate the call's success. The available rubrics are:
+
+- `NumericScale`: A scale of 1 to 10.
+- `DescriptiveScale`: A scale of Excellent, Good, Fair, Poor.
+- `Checklist`: A checklist of criteria and their status.
+- `Matrix`: A grid that evaluates multiple criteria across different performance levels.
+- `PercentageScale`: A scale of 0% to 100%.
+- `LikertScale`: A scale of Strongly Agree, Agree, Neutral, Disagree, Strongly Disagree.
+- `AutomaticRubric`: Automatically breaks the evaluation down into several criteria, each with its own score.
+- `PassFail`: A simple `true` if the call passed, `false` if not.
+
+### Customizing the Success Evaluation Rubric
+
+You can set a custom success evaluation rubric using the `successEvaluationRubric` property:
+
+```json
+{
+  "successEvaluationRubric": "NumericScale"
+}
+```
+
+## Combining Prompts and Rubrics
+
+You can use prompts and rubrics in combination to create detailed instructions for the call analysis:
+
+```json
+{
+  "successEvaluationPrompt": "Evaluate the call based on these criteria:...",
+  "successEvaluationRubric": "Checklist"
+}
+```
+
+By customizing these properties, you can tailor the call analysis to meet your specific needs and gain valuable insights from your calls.
\ No newline at end of file
diff --git a/mint.json b/mint.json
index 1b8e70b..c23d3da 100644
--- a/mint.json
+++ b/mint.json
@@ -156,6 +156,7 @@
         "assistants/custom-llm",
         "assistants/persistent-assistants",
         "assistants/dynamic-variables",
+        "assistants/call-analysis",
         "assistants/background-messages"
       ]
     },
@@ -345,6 +346,14 @@ {
       "source": "/dynamic-variables",
       "destination": "/assistants/dynamic-variables"
     },
+    {
+      "source": "/call_analysis",
+      "destination": "/assistants/call-analysis"
+    },
+    {
+      "source": "/call-analysis",
+      "destination": "/assistants/call-analysis"
+    },
     {
       "source": "/text_message",
       "destination": "/assistants/background-messages"
diff --git a/sdk/web.mdx b/sdk/web.mdx
index ba8b3da..635f08d 100644
--- a/sdk/web.mdx
+++ b/sdk/web.mdx
@@ -142,6 +142,14 @@ vapi.setMuted(true);
 vapi.isMuted(); // true
 ```
 
+### `say(message: string, endCallAfterSpoken?: boolean)`
+
+The `say` method makes the assistant speak a message mid-call. Pass `true` as the optional second argument to have the call end gracefully once the message has been spoken:
+
+```javascript
+vapi.say("Our time's up, goodbye!", true);
+```
+
 ## Events
 
 You can listen on the `vapi` instance for events. These events allow you to react to changes in the state of the call or user speech.
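+
+For example, a minimal sketch of subscribing to call lifecycle events (assuming the SDK emits `call-start` and `call-end` events as named here):
+
+```javascript
+// Fires when the call connects.
+vapi.on("call-start", () => {
+  console.log("Call has started.");
+});
+
+// Fires when the call ends, e.g. after say() with endCallAfterSpoken set to true.
+vapi.on("call-end", () => {
+  console.log("Call has ended.");
+});
+```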