content/en/references/configuration.md (2 additions & 1 deletion)
@@ -94,7 +94,8 @@ This section covers configuration options that are specific to certain AWS services

 | Variable | Example Values | Description |
 | - | - | - |
-| `LOCALSTACK_ENABLE_BEDROCK` | `1` | Use the Bedrock provider |
+| `BEDROCK_PREWARM` | `0` (default) \| `1` | Pre-warm the Bedrock engine directly on LocalStack startup instead of on demand. |
+| `DEFAULT_BEDROCK_MODEL` | `qwen2.5:0.5b` (default) | The model used to handle text model invocations in Bedrock. Any text-based model available for Ollama is usable. |
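For illustration, a minimal sketch of setting both variables when starting LocalStack through its CLI; this assumes the `localstack` CLI is installed, and the model tag shown is simply the documented default:

```bash
# Pre-warm the Bedrock engine at startup and pin the default text model.
# Any text model tag available for Ollama could be substituted here.
BEDROCK_PREWARM=1 \
DEFAULT_BEDROCK_MODEL=qwen2.5:0.5b \
localstack start
```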
content/en/user-guide/aws/bedrock/index.md (15 additions & 4 deletions)
@@ -15,17 +15,27 @@ The supported APIs are available on our [API Coverage Page](https://docs.localst

 This guide is designed for users new to AWS Bedrock and assumes basic knowledge of the AWS CLI and our `awslocal` wrapper script.

-Start your LocalStack container using your preferred method using the `LOCALSTACK_ENABLE_BEDROCK=1` configuration variable.
+Start your LocalStack container using your preferred method, with or without pre-warming the Bedrock engine.
 We will demonstrate how to use Bedrock by following these steps:

 1. Listing available foundation models
 2. Invoking a model for inference
 3. Using the conversation API

+### Pre-warming the Bedrock engine
+
+The startup of the Bedrock engine can take some time.
+By default, we only start it once you send a request to one of the `bedrock-runtime` APIs.
+However, if you want the engine to start when LocalStack starts, to avoid a long wait on your first request, you can set the `BEDROCK_PREWARM` flag.
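If you prefer running the container through Docker directly, a rough equivalent could look like this; the image name and edge port are LocalStack's published defaults, so adjust the flags to your own setup:

```bash
# Start LocalStack with the Bedrock engine booting immediately
# rather than on the first bedrock-runtime request.
docker run --rm -it -p 4566:4566 -e BEDROCK_PREWARM=1 localstack/localstack
```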
 ### List available foundation models

 You can view all available foundation models using the [`ListFoundationModels`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListFoundationModels.html) API.
-This will show you which models are available for use in your local environment.
+This will show you which models are available on AWS Bedrock.
+
+{{< callout "note" >}}
+The actual model used for emulation will differ from the ones defined in this list.
+You can define the model to use with `DEFAULT_BEDROCK_MODEL`.
+{{< /callout >}}
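With the container running, a quick check through the `awslocal` wrapper might look like this; the `--query` filter is optional and just trims the output:

```bash
# List the foundation models advertised by the local Bedrock endpoint,
# printing only their model IDs.
awslocal bedrock list-foundation-models --query 'modelSummaries[].modelId'
```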
[...]

 You can use the [`InvokeModel`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) API to send requests to a specific model.
-In this example, we'll use the Llama 3 model to process a simple prompt.
+In this example, we selected the Llama 3 model to process a simple prompt.
+However, the model actually used is defined by the `DEFAULT_BEDROCK_MODEL` environment variable.
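As a sketch of such an invocation; the Llama 3 model ID and the prompt body shape are assumptions here, so substitute whatever your model list shows:

```bash
# Send a simple prompt to a specific model; the response is written to outfile.json.
# --cli-binary-format raw-in-base64-out lets us pass the JSON body as plain text.
awslocal bedrock-runtime invoke-model \
  --model-id meta.llama3-8b-instruct-v1:0 \
  --cli-binary-format raw-in-base64-out \
  --body '{"prompt": "Say hello!", "max_gen_len": 50}' \
  outfile.json
cat outfile.json
```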