
Commit 14c0779

Update bedrock v4 docs (#1554)
1 parent db7c8db

2 files changed (+17, -5 lines)

content/en/references/configuration.md

Lines changed: 2 additions & 1 deletion

@@ -94,7 +94,8 @@ This section covers configuration options that are specific to certain AWS services
 | Variable | Example Values | Description |
 | - | - | - |
-| `LOCALSTACK_ENABLE_BEDROCK` | `1` | Use the Bedrock provider |
+| `BEDROCK_PREWARM` | `0` (default) \| `1` | Pre-warm the Bedrock engine directly on LocalStack startup instead of on demand. |
+| `DEFAULT_BEDROCK_MODEL` | `qwen2.5:0.5b` (default) | The model to use to handle text model invocations in Bedrock. Any text-based model available for Ollama is usable. |

 ### BigData (EMR, Athena, Glue)

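As an aside (not part of this commit), here is a minimal sketch of how these two new variables might be supplied, assuming a plain Docker-based startup where LocalStack configuration is passed as container environment variables:

```bash
# Hypothetical startup example: pre-warm the Bedrock engine and pick a
# specific Ollama text model to back all Bedrock invocations.
docker run --rm -it \
  -p 4566:4566 \
  -e BEDROCK_PREWARM=1 \
  -e DEFAULT_BEDROCK_MODEL=qwen2.5:0.5b \
  localstack/localstack
```

With `BEDROCK_PREWARM=1`, the engine startup cost is paid once when the container starts rather than on the first `bedrock-runtime` request.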
content/en/user-guide/aws/bedrock/index.md

Lines changed: 15 additions & 4 deletions

@@ -15,17 +15,27 @@ The supported APIs are available on our [API Coverage Page](https://docs.localst
 This guide is designed for users new to AWS Bedrock and assumes basic knowledge of the AWS CLI and our `awslocal` wrapper script.

-Start your LocalStack container using your preferred method using the `LOCALSTACK_ENABLE_BEDROCK=1` configuration variable.
+Start your LocalStack container using your preferred method, with or without pre-warming the Bedrock engine.
 We will demonstrate how to use Bedrock by following these steps:

 1. Listing available foundation models
 2. Invoking a model for inference
 3. Using the conversation API

+### Pre-warming the Bedrock engine
+
+The startup of the Bedrock engine can take some time.
+By default, we only start it once you send a request to one of the `bedrock-runtime` APIs.
+However, if you want to start the engine when LocalStack starts, to avoid long wait times on your first request, you can set the `BEDROCK_PREWARM` flag.
+
 ### List available foundation models

 You can view all available foundation models using the [`ListFoundationModels`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListFoundationModels.html) API.
-This will show you which models are available for use in your local environment.
+This will show you which models are available on AWS Bedrock.
+{{< callout "note" >}}
+The actual model used for emulation will differ from the ones in this list.
+You can define which model is used with `DEFAULT_BEDROCK_MODEL`.
+{{< /callout >}}

 Run the following command:

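As an illustrative aside, `ListFoundationModels` returns its results in a `modelSummaries` array, so the model IDs alone can be pulled out with a standard AWS CLI `--query` filter:

```bash
# Hypothetical usage example: show only the model IDs that the local
# ListFoundationModels implementation reports.
awslocal bedrock list-foundation-models \
  --query "modelSummaries[*].modelId" \
  --output table
```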
@@ -36,7 +46,8 @@ $ awslocal bedrock list-foundation-models
 ### Invoke a model

 You can use the [`InvokeModel`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) API to send requests to a specific model.
-In this example, we'll use the Llama 3 model to process a simple prompt.
+In this example, we select the Llama 3 model to process a simple prompt.
+However, the model that actually handles the request is defined by the `DEFAULT_BEDROCK_MODEL` environment variable.

 Run the following command:

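The full command is truncated out of this diff view; purely as a sketch (the model ID and request body are illustrative, and `--cli-binary-format raw-in-base64-out` assumes AWS CLI v2), an `InvokeModel` call against a Llama 3 model typically looks like:

```bash
# Hypothetical InvokeModel example using the Llama-style {"prompt": ...}
# request schema; the response body is written to outfile.json.
awslocal bedrock-runtime invoke-model \
  --model-id "meta.llama3-8b-instruct-v1:0" \
  --body '{"prompt": "Say hello!", "max_gen_len": 64}' \
  --cli-binary-format raw-in-base64-out \
  outfile.json
```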
@@ -75,5 +86,5 @@ $ awslocal bedrock-runtime converse \
 ## Limitations

-* LocalStack Bedrock implementation is mock-only and does not run any LLM model locally.
+* LocalStack Bedrock currently officially supports only text-based models.
 * Currently, GPU models are not supported by the LocalStack Bedrock implementation.

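For completeness (again an illustration, not part of the commit), the `converse` call referenced in the last hunk header takes a `--messages` list of role/content blocks, roughly:

```bash
# Hypothetical Converse example with a single user message; as above, the
# model that actually answers is governed by DEFAULT_BEDROCK_MODEL.
awslocal bedrock-runtime converse \
  --model-id "meta.llama3-8b-instruct-v1:0" \
  --messages '[{"role": "user", "content": [{"text": "Say hello!"}]}]'
```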