
Commit aa0dcc0

Merge branch 'develop' v0.7.5
2 parents bd028df + 57d54d3

18 files changed: +415 -175 lines

CHANGELOG.md

Lines changed: 13 additions & 3 deletions

@@ -6,10 +6,18 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+## [0.7.5] - 2024-01-17
+### Added
+- Support for larger prompts by storing LLMPromptSummaryTemplate in DynamoDB rather than SSM. By default, the CF template will migrate existing SSM prompts to DynamoDB.
+
+### Fixed
+- #125 Updated the pca-aws-sf-bulk-queue-space.py function to correctly count jobs based on IN_PROGRESS as well as QUEUED.
+- #224 Updated the pca-aws-sf-bulk-queue-space.py function to correctly count both Transcribe and Transcribe Call Analytics (vs just Transcribe).
+
 ## [0.7.4] - 2023-12-15
 ### Added
-- Drag/drop upload from call list page
-- Refresh call summary from call details page
+- Drag/drop upload from call list page.
+- Refresh call summary from call details page.
 
 ### Fixed
 - Accessibility improvements
@@ -152,7 +160,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 ### Added
 - Initial release
 
-[Unreleased]: https://github.com/aws-samples/amazon-transcribe-post-call-analytics/compare/v0.7.3...develop
+[Unreleased]: https://github.com/aws-samples/amazon-transcribe-post-call-analytics/compare/v0.7.5...develop
+[0.7.5]: https://github.com/aws-samples/amazon-transcribe-post-call-analytics/releases/tag/v0.7.5
+[0.7.4]: https://github.com/aws-samples/amazon-transcribe-post-call-analytics/releases/tag/v0.7.4
 [0.7.3]: https://github.com/aws-samples/amazon-transcribe-post-call-analytics/releases/tag/v0.7.3
 [0.7.2]: https://github.com/aws-samples/amazon-transcribe-post-call-analytics/releases/tag/v0.7.2
 [0.7.1]: https://github.com/aws-samples/amazon-transcribe-post-call-analytics/releases/tag/v0.7.1
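The two queue-space fixes above can be illustrated with a small sketch. This is not the actual `pca-aws-sf-bulk-queue-space.py` code; the function and variable names here are hypothetical. The logic the changelog describes: a job still occupies a Transcribe concurrency slot when it is either QUEUED or IN_PROGRESS, and both standard Transcribe and Transcribe Call Analytics jobs must be counted.

```python
# Illustrative sketch of the corrected counting logic (hypothetical names).
# In the real function the status lists would come from boto3's
# transcribe.list_transcription_jobs(Status=...) and
# transcribe.list_call_analytics_jobs(Status=...).

ACTIVE_STATUSES = {"QUEUED", "IN_PROGRESS"}

def count_active_jobs(transcribe_statuses, analytics_statuses):
    """Count jobs that still occupy Transcribe concurrency slots,
    across both standard Transcribe and Call Analytics jobs."""
    return (sum(s in ACTIVE_STATUSES for s in transcribe_statuses)
            + sum(s in ACTIVE_STATUSES for s in analytics_statuses))

print(count_active_jobs(["QUEUED", "IN_PROGRESS", "COMPLETED"],
                        ["IN_PROGRESS"]))  # → 3
```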

VERSION

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-0.7.4
+0.7.5

docs/generative_ai.md

Lines changed: 27 additions & 68 deletions

@@ -13,58 +13,22 @@ PCA also supports 'Generative AI Queries' - which simply means you can ask quest
 
 ## Generative AI Insights
 
-When enabled, PCA can run one or more FM inferences against Amazon Bedrock or Anthropic APIs. The prompt used to generate the insights is configured in a [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html). The name of the parameter is `LLMPromptSummaryTemplate`.
+When enabled, PCA can run one or more FM inferences against Amazon Bedrock or Anthropic APIs. The prompts used to generate the insights are stored in DynamoDB. The table name contains the string `LLMPromptConfigure`, and its partition key is `LLMPromptTemplateId`. The table contains two items: one with the partition key value `LLMPromptSummaryTemplate` and the other with `LLMPromptQueryTemplate`.
 
-### Multiple inferences per call
+### Generative AI interactive queries
 
-The default value for `LLMPromptSummaryTemplate` is a JSON object with key/value pairs, each pair representing the label (key) and prompt (value). During the `Summarize` step, PCA will iterate the keys and run each prompt. PCA will replace `<br>` tags with newlines, and `{transcript}` is replaced with the call transcript. The key will be used as a header for the value in the "generated insights" section in the PCA UI.
+The item in DynamoDB with the key `LLMPromptQueryTemplate` allows you to customize the interactive query prompt seen on the call details page. You can use this to provide model-specific prompts. The default value is in [Anthropic's prompt format](https://docs.anthropic.com/claude/docs/constructing-a-prompt).
 
-Below is the default value of `LLMpromptSummaryTemplate`.
-
-```
-{
-  "Summary":"<br><br>Human: Answer the questions below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>What is a summary of the transcript?</question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:",
-  "Topic":"<br><br>Human: Answer the questions below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>What is the topic of the call? For example, iphone issue, billing issue, cancellation. Only reply with the topic, nothing more.</question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:",
-  "Product":"<br><br>Human: Answer the questions below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>What product did the customer call about? For example, internet, broadband, mobile phone, mobile plans. Only reply with the product, nothing more.</question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:",
-  "Resolved":"<br><br>Human: Answer the questions below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>Did the agent resolve the customer's questions? Only reply with yes or no, nothing more. </question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:",
-  "Callback":"<br><br>Human: Answer the questions below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>Was this a callback? (yes or no) Only reply with yes or no, nothing more.</question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:",
-  "Politeness":"<br><br>Human: Answer the question below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>Was the agent polite and professional? (yes or no) Only reply with yes or no, nothing more.</question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:",
-  "Actions":"<br><br>Human: Answer the question below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>What actions did the Agent take? </question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:"
-}
-```
-
-The expected output after the summarize step is a single json object, as a string, that contains all the key/value pairs. For example:
-
-```
-{
-  "Summary": "...",
-  "Topic": "...",
-  "Product": "...",
-  "Resolved": "...",
-  "Callback": "...",
-  "Politeness": "...",
-  "Actions": "...",
-}
-```
-
-
-### Single FM Inference
-
-Some LLMs may be able to generate the JSON with one inference, rather than several. Below is an example that we've seen work, but with mixed results.
+The default value is:
 
 ```
 <br>
-<br>Human: Answer all the questions below, based on the contents of <transcript></transcript>, as a json object with key value pairs. Use the text before the colon as the key, and the answer as the value. If you cannot answer the question, reply with 'n/a'. Only return json. Use gender neutral pronouns. Skip the preamble; go straight into the json.
+<br>Human: You are an AI chatbot. Carefully read the following transcript within <transcript></transcript>
+and then provide a short answer to the question. If the answer cannot be determined from the transcript or
+the context, then reply saying Sorry, I don't know. Use gender neutral pronouns. Skip the preamble; when you reply, only
+respond with the answer.
 <br>
-<br><questions>
-<br>Summary: Summarize the transcript in no more than 5 sentences. Were the caller's needs met during the call?
-<br>Topic: Topic of the call. Choose from one of these or make one up (iphone issue, billing issue, cancellation)
-<br>Product: What product did the customer call about? (internet, broadband, mobile phone, mobile plans)
-<br>Resolved: Did the agent resolve the customer's questions? (yes or no)
-<br>Callback: Was this a callback? (yes or no)
-<br>Politeness: Was the agent polite and professional? (yes or no)
-<br>Actions: What actions did the Agent take?
-<br></questions>
+<br><question>{question}</question>
 <br>
 <br><transcript>
 <br>{transcript}
@@ -75,35 +39,30 @@ Some LLMs may be able to generate the JSON with one inference, rather than sever
 
 The `<br>` tags are replaced with newlines, and `{transcript}` is replaced with the call transcript.
 
-**Note:** This prompt generates 7 insights in a single inference - summary, topic, product, resolved, callback, agent politeness, and actions.
 
-The expected output of the inference should be a single JSON object with key-value pairs, similar to above.
+### Generative AI insights
 
-### Call list default columns
+The item in DynamoDB with the key `LLMPromptSummaryTemplate` contains one or more attributes. Each attribute is a single prompt that is invoked for each call analyzed. The attribute name is an integer, followed by a `#`, followed by the name of the insight; the number determines the order of the insight. For example, `1#Summary` shows up first.
 
-The call list main screen contains additional pre-defined columns. If the output of the inference contains JSON with the column names (or the names are keys in the multiple inferences per call), the values will propogate to the main call list. The names columns are: `Summary`, `Topic`, `Product`, `Resolved`, `Callback`, `Politeness`, `Actions`. They are also in the default prompt.
+Default attributes:
 
-## Generative AI Queries
+| Key | Description | Prompt |
+| ----- | -------- | ---------- |
+| `1#Summary` | What is a summary of the transcript? | `<br><br>Human: Answer the questions below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>What is a summary of the transcript?</question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:` |
+| `2#Topic` | What is the topic of the call? | `<br><br>Human: Answer the questions below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>What is the topic of the call? For example, iphone issue, billing issue, cancellation. Only reply with the topic, nothing more.</question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:` |
+| `3#Product` | What product did the customer call about? | `<br><br>Human: Answer the questions below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>What product did the customer call about? For example, internet, broadband, mobile phone, mobile plans. Only reply with the product, nothing more.</question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:` |
+| `4#Resolved` | Did the agent resolve the customer's questions? Only reply with yes or no. | `<br><br>Human: Answer the questions below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>Did the agent resolve the customer's questions? Only reply with yes or no, nothing more. </question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:` |
+| `5#Callback` | Was this a callback? | `<br><br>Human: Answer the questions below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>Was this a callback? (yes or no) Only reply with yes or no, nothing more.</question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:` |
+| `6#Politeness` | Was the agent polite and professional? | `<br><br>Human: Answer the question below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>Was the agent polite and professional? (yes or no) Only reply with yes or no, nothing more.</question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:` |
+| `7#Actions` | What actions did the Agent take? | `<br><br>Human: Answer the question below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.<br><br><question>What actions did the Agent take? </question><br><br><transcript><br>{transcript}<br></transcript><br><br>Assistant:` |
 
-For interactive queries from within PCA, it uses a different parameter, named `LLMPromptQueryTemplate`. This will only run a single inference per question.
+The `<br>` tags are replaced with newlines, and `{transcript}` is replaced with the call transcript. Some Bedrock models, such as Claude, require newlines in specific spots.
 
-The default value is:
+#### Customizing
 
-```
-<br>
-<br>Human: You are an AI chatbot. Carefully read the following transcript within <transcript></transcript>
-and then provide a short answer to the question. If the answer cannot be determined from the transcript or
-the context, then reply saying Sorry, I don't know. Use gender neutral pronouns. Skip the preamble; when you reply, only
-respond with the answer.
-<br>
-<br><question>{question}</question>
-<br>
-<br><transcript>
-<br>{transcript}
-<br></transcript>
-<br>
-<br>Assistant:
-```
+You can add your own additional attributes and prompts by editing this item in DynamoDB. Make sure you include an order number and insight name in the attribute name, for example `9#NPS Score`. You can use any of the above prompts as a starting point for crafting a prompt. Do not forget to include `{transcript}` as a placeholder, otherwise your transcript will not be included in the LLM inference!
 
-The `<br>` tags are replaced with newlines, and `{transcript}` is replaced with the call transcript.
+### Call list default columns
+
+The call list main screen contains additional pre-defined columns. If the output of an inference matches one of the column names, the values will propagate to the main call list. The named columns are: `Summary`, `Topic`, `Product`, `Resolved`, `Callback`, `Politeness`, `Actions`. They are also in the default prompts.
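The conventions the docs above describe (an order prefix in each attribute name like `1#Summary`, `<br>` tags expanded to newlines, `{transcript}` substituted into the prompt) can be sketched in a few lines of Python. The helper names below are illustrative, not part of PCA:

```python
# Sketch of how the prompt attributes appear to be interpreted, per the docs:
# attribute names carry a numeric order prefix before '#', and the prompt text
# uses <br> for newlines plus a {transcript} placeholder.

def ordered_insights(item: dict) -> list:
    """Sort prompt attribute names by their numeric order prefix."""
    return sorted(item, key=lambda name: int(name.split("#", 1)[0]))

def render_prompt(template: str, transcript: str) -> str:
    """Expand <br> tags to newlines and fill in the transcript placeholder."""
    return template.replace("<br>", "\n").replace("{transcript}", transcript)

item = {
    "2#Topic": "<br><br>Human: What is the topic?<br>{transcript}<br>Assistant:",
    "1#Summary": "<br><br>Human: Summarize.<br>{transcript}<br>Assistant:",
}
print(ordered_insights(item))  # → ['1#Summary', '2#Topic']
```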

pca-main-nokendra.template

Lines changed: 16 additions & 8 deletions

@@ -1,6 +1,6 @@
 AWSTemplateFormatVersion: "2010-09-09"
 
-Description: Amazon Transcribe Post Call Analytics - PCA (v0.7.4) (uksb-1sn29lk73)
+Description: Amazon Transcribe Post Call Analytics - PCA (v0.7.5) (uksb-1sn29lk73)
 
 Parameters:
 
@@ -758,11 +758,17 @@ Resources:
       ServiceToken: !GetAtt TestBedrockModelFunction.Arn
       LLMModelId: !Ref SummarizationBedrockModelId
 
+  LLMPromptConfigure:
+    Type: AWS::CloudFormation::Stack
+    Properties:
+      TemplateURL: pca-server/cfn/lib/llm.template
+
   ########################################################
   # SSM Stack
   ########################################################
   SSM:
     Type: AWS::CloudFormation::Stack
+    DependsOn: LLMPromptConfigure
     Properties:
       TemplateURL: pca-ssm/cfn/ssm.template
       Parameters:
@@ -876,6 +882,7 @@ Resources:
         - ShouldDeployBedrockBoto3Layer
         - !GetAtt BedrockBoto3Layer.Outputs.Boto3Layer
         - ''
+      LLMTableName: !GetAtt LLMPromptConfigure.Outputs.LLMTableName
 
   PCAUI:
     Type: AWS::CloudFormation::Stack
@@ -911,6 +918,7 @@ Resources:
        - ShouldDeployBedrockBoto3Layer
        - !GetAtt BedrockBoto3Layer.Outputs.Boto3Layer
        - ''
+      LLMTableName: !GetAtt LLMPromptConfigure.Outputs.LLMTableName
 
   PcaDashboards:
     Type: AWS::CloudFormation::Stack
@@ -1086,10 +1094,10 @@ Outputs:
     Description: Lambda function arn that will generate a string of the entire transcript for custom Lambda functions to use.
     Value: !GetAtt PCAServer.Outputs.FetchTranscriptArn
 
-  LLMPromptSummaryTemplateParameter:
-    Description: The LLM summary prompt template in SSM Parameter Store - open to customise call summary prompts.
-    Value: !Sub "https://${AWS::Region}.console.aws.amazon.com/systems-manager/parameters/${SSM.Outputs.LLMPromptSummaryTemplateParameter}"
-
-  LLMPromptQueryTemplateParameter:
-    Description: The LLM query prompt template in SSM Parameter Store - open to customise query prompts.
-    Value: !Sub "https://${AWS::Region}.console.aws.amazon.com/systems-manager/parameters/${SSM.Outputs.LLMPromptQueryTemplateParameter}"
+  LLMPromptSummaryTemplate:
+    Description: The LLM summary prompt template in the DynamoDB table - open to customise summary prompts.
+    Value: !Sub "https://${AWS::Region}.console.aws.amazon.com/dynamodbv2/home?region=${AWS::Region}#edit-item?itemMode=2&pk=LLMPromptSummaryTemplate&route=ROUTE_ITEM_EXPLORER&sk=&table=${LLMPromptConfigure.Outputs.LLMTableName}"
+
+  LLMPromptQueryTemplate:
+    Description: The LLM query prompt template in the DynamoDB table - open to customise query prompts.
+    Value: !Sub "https://${AWS::Region}.console.aws.amazon.com/dynamodbv2/home?region=${AWS::Region}#edit-item?itemMode=2&pk=LLMPromptQueryTemplate&route=ROUTE_ITEM_EXPLORER&sk=&table=${LLMPromptConfigure.Outputs.LLMTableName}"
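Since the prompts now live in the DynamoDB table exposed by the stack's `LLMTableName` output, a custom insight such as a hypothetical `9#NPS Score` attribute could be added programmatically. The sketch below is hedged: the helper only builds the `update_item` arguments (the attribute name, prompt text, and table name are illustrative), and the actual call via boto3's `dynamodb.update_item` is shown commented out.

```python
# Hypothetical helper: build DynamoDB UpdateItem arguments that add one custom
# prompt attribute to the LLMPromptSummaryTemplate item. The real call would be
# boto3.client("dynamodb").update_item(TableName=<LLMTableName output>, **kwargs).

def build_prompt_update(attr_name: str, prompt: str) -> dict:
    if "{transcript}" not in prompt:
        # Without the placeholder the transcript would never reach the LLM.
        raise ValueError("prompt must include the {transcript} placeholder")
    return {
        "Key": {"LLMPromptTemplateId": {"S": "LLMPromptSummaryTemplate"}},
        # '#' in attribute names requires an expression attribute name alias.
        "UpdateExpression": "SET #attr = :prompt",
        "ExpressionAttributeNames": {"#attr": attr_name},
        "ExpressionAttributeValues": {":prompt": {"S": prompt}},
    }

kwargs = build_prompt_update(
    "9#NPS Score",
    "<br><br>Human: Based on <transcript><br>{transcript}<br></transcript>, "
    "estimate an NPS score from 0-10. Only reply with the number.<br><br>Assistant:",
)
print(kwargs["UpdateExpression"])  # → SET #attr = :prompt
# ddb = boto3.client("dynamodb")
# ddb.update_item(TableName="<LLMTableName from stack outputs>", **kwargs)
```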
