# CHANGELOG.md

This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]

## [0.7.5] - 2024-01-17

### Added

- Support for larger prompts by storing LLMPromptSummaryTemplate in DynamoDB rather than SSM. By default, the CF template will migrate existing SSM prompts to DynamoDB.

### Fixed

- #125 Updated the pca-aws-sf-bulk-queue-space.py function to correctly count jobs based on IN_PROGRESS as well as QUEUED.
- #224 Updated the pca-aws-sf-bulk-queue-space.py function to correctly count both Transcribe and Transcribe Call Analytics jobs (vs. just Transcribe).

## [0.7.4] - 2023-12-15

### Added

- Drag/drop upload from call list page.
- Refresh call summary from call details page.

### Fixed

- Accessibility improvements
# docs/generative_ai.md
## Generative AI Insights

When enabled, PCA can run one or more FM inferences against Amazon Bedrock or Anthropic APIs. The prompt used to generate the insights is stored in DynamoDB. The name of the table contains the string `LLMPromptConfigure`, and the table partition key is `LLMPromptTemplateId`. There are two items in the table, one with the partition key value of `LLMPromptSummaryTemplate` and the other with the partition key value of `LLMPromptQueryTemplate`.
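For illustration only, the two-item layout described above can be modeled in plain Python. The attribute names other than the partition key are simplified placeholders, and a real deployment would read the items from DynamoDB (e.g. with boto3) rather than from an in-memory list:

```python
# Illustrative sketch of the prompt-configuration table described above.
# The partition key is LLMPromptTemplateId; the table holds two items.
items = [
    {
        "LLMPromptTemplateId": "LLMPromptSummaryTemplate",
        # One attribute per insight; names follow the "order#name" pattern.
        "1#Summary": "<br><br>Human: Summarize <transcript>{transcript}</transcript><br><br>Assistant:",
    },
    {
        "LLMPromptTemplateId": "LLMPromptQueryTemplate",
        # Placeholder attribute name holding the interactive-query prompt.
        "prompt": "<br><br>Human: <question>{question}</question><transcript>{transcript}</transcript><br><br>Assistant:",
    },
]

def get_prompt_item(template_id):
    """Look up an item by its partition key, like a DynamoDB GetItem."""
    for item in items:
        if item["LLMPromptTemplateId"] == template_id:
            return item
    return None
```

The lookup mirrors how PCA distinguishes the summary prompts from the interactive-query prompt using only the partition key value.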
### Generative AI interactive queries

The item in DynamoDB with the key `LLMPromptQueryTemplate` allows you to customize the interactive query prompt seen in the call details page. You can use this to provide model-specific prompts. The default value is in [Anthropic's prompt format](https://docs.anthropic.com/claude/docs/constructing-a-prompt).
The default value is:

```
<br>
<br>Human: You are an AI chatbot. Carefully read the following transcript within <transcript></transcript>
and then provide a short answer to the question. If the answer cannot be determined from the transcript or
the context, then reply saying Sorry, I don't know. Use gender neutral pronouns. Skip the preamble; when you reply, only
respond with the answer.
<br>
<br><question>{question}</question>
<br>
<br><transcript>
<br>{transcript}
<br></transcript>
<br>
<br>Assistant:
```
The `<br>` tags are replaced with newlines, and `{transcript}` is replaced with the call transcript.
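As a minimal sketch of that substitution step (the `render_prompt` helper is illustrative, not PCA's actual code):

```python
def render_prompt(template, **values):
    """Replace <br> tags with newlines, then fill {placeholder} tokens
    such as {transcript} and {question}."""
    rendered = template.replace("<br>", "\n")
    for name, value in values.items():
        rendered = rendered.replace("{" + name + "}", value)
    return rendered

# Hypothetical template in the same style as the defaults in this document.
template = ("<br><br>Human: Answer based on <transcript>{transcript}"
            "</transcript>.<br><br>Assistant:")
prompt = render_prompt(template, transcript="Agent: Hello, how can I help?")
```

Plain `str.replace` is used instead of `str.format` so that stray braces in a real transcript cannot break the substitution.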
### Generative AI insights

The item in DynamoDB with the key `LLMPromptSummaryTemplate` contains one or more attributes. Each attribute is a single prompt that will be invoked for each call analyzed. Each attribute contains an attribute name and value. The attribute name is an integer, followed by a `#`, followed by the name of the insight. The number signifies the order of the insight. For example, `1#Summary` will show up first.

Default attributes:

| Key | Description | Prompt |
| ----- | -------- | ---------- |
|`1#Summary`| What is a summary of the transcript? |`<br><br>Human: Answer the questions below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>What is a summary of the transcript?</question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:`|
|`2#Topic`| What is the topic of the call? |`<br><br>Human: Answer the questions below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>What is the topic of the call? For example, iphone issue, billing issue, cancellation. Only reply with the topic, nothing more.</question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:`|
|`3#Product`| What product did the customer call about? |`<br><br>Human: Answer the questions below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>What product did the customer call about? For example, internet, broadband, mobile phone, mobile plans. Only reply with the product, nothing more.</question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:`|
|`4#Resolved`| Did the agent resolve the customer's questions? Only reply with yes or no. |`<br><br>Human: Answer the questions below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>Did the agent resolve the customer's questions? Only reply with yes or no, nothing more. </question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:`|
|`5#Callback`| Was this a callback? |`<br><br>Human: Answer the questions below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>Was this a callback? (yes or no) Only reply with yes or no, nothing more.</question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:`|
|`6#Politeness`| Was the agent polite and professional? |`<br><br>Human: Answer the question below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>Was the agent polite and professional? (yes or no) Only reply with yes or no, nothing more.</question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:`|
|`7#Actions`| What actions did the Agent take? |`<br><br>Human: Answer the question below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>What actions did the Agent take? </question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:`|

The `<br>` tags are replaced with newlines, and `{transcript}` is replaced with the call transcript. Some Bedrock models, such as Claude, require newlines in specific spots.

#### Customizing

You can add your own additional attributes and prompts by editing this item in DynamoDB. Make sure you include an order number and insight name in the attribute name, for example `9#NPS Score`. You can use any of the above prompts as a starting point for crafting a prompt. Do not forget to include `{transcript}` as a placeholder, otherwise your transcript will not be included in the LLM inference!

### Call list default columns

The call list main screen contains additional pre-defined columns. If the output of the inference contains the column names, the values will propagate to the main call list. The named columns are: `Summary`, `Topic`, `Product`, `Resolved`, `Callback`, `Politeness`, `Actions`. They are also in the default prompt.
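The `order#name` attribute convention described earlier determines both the display order and the header shown for each insight. A small sketch of that parsing (our own helper, not PCA's actual code):

```python
def split_insight_key(attribute_name):
    """Split an attribute name like '1#Summary' into (1, 'Summary')."""
    order, _, name = attribute_name.partition("#")
    return int(order), name

# Hypothetical subset of the LLMPromptSummaryTemplate item's attributes.
attributes = {"2#Topic": "...", "7#Actions": "...", "1#Summary": "..."}
ordered_keys = sorted(attributes, key=lambda k: split_insight_key(k)[0])
headers = [split_insight_key(k)[1] for k in ordered_keys]
# headers is now ['Summary', 'Topic', 'Actions']
```

Sorting on the numeric prefix (rather than the raw string) keeps `10#...` from sorting before `2#...`.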
0 commit comments