` (e.g. `3.0.2`)
+ * These tags can be used if you want to use a very specific release.
It will not be updated.
- - This tag can be used if you really want to avoid any changes to the image (not even minimal bug fixes).
+ * This tag can be used if you really want to avoid any changes to the image (not even minimal bug fixes).
### How can I access LocalStack from an alternative computer?
@@ -110,12 +110,12 @@ To fix this, set the following environment variables:
Set the system locale (language for non-Unicode programs) to UTF-8 to avoid Unicode errors.
Follow these steps:
-- Open the Control Panel.
-- Go to "Clock and Region" or "Region and Language."
-- Click on the "Administrative" tab.
-- Click on the "Change system locale" button.
-- Select "Beta: Use Unicode UTF-8 for worldwide language support" and click "OK."
-- Restart your computer to apply the changes.
+* Open the Control Panel.
+* Go to "Clock and Region" or "Region and Language."
+* Click on the "Administrative" tab.
+* Click on the "Change system locale" button.
+* Select "Beta: Use Unicode UTF-8 for worldwide language support" and click "OK."
+* Restart your computer to apply the changes.
If you would like to keep the system locale as it is, you can mitigate the issue by using the command `localstack --no-banner`.
@@ -304,7 +304,7 @@ $ dig api.localstack.cloud
If the result has some other status than `status: NOERROR,` your machine cannot resolve this domain.
Some corporate DNS servers might filter requests to certain domains.
-Contact your network administrator to safelist` localstack.cloud` domains.
+Contact your network administrator to safelist `localstack.cloud` domains.
### How does LocalStack Pro handle security patches and bug fixes?
@@ -326,4 +326,6 @@ For more details, please take a look at our [Enterprise offering](https://locals
### How does the LocalStack Web Application communicate with the LocalStack container?
-The LocalStack Web Application connects to your LocalStack container running on your local machine and retrieves the information directly via the `localhost` without using the internet. Features such as Resource Browsers, IAM Policy Stream, Chaos Engineering dashboard, and others communicate directly with the LocalStack container using your browser. None of the information is sent to the internet, or stored on any external servers maintained by LocalStack.
+The LocalStack Web Application connects to your LocalStack container running on your local machine and retrieves the information directly via `localhost`, without using the internet.
+Features such as Resource Browsers, IAM Policy Stream, Chaos Engineering dashboard, and others communicate directly with the LocalStack container using your browser.
+None of the information is sent over the internet or stored on any external servers maintained by LocalStack.
diff --git a/content/en/getting-started/help-and-support/index.md b/content/en/getting-started/help-and-support/index.md
index 0c76574d50..b0edd0510a 100644
--- a/content/en/getting-started/help-and-support/index.md
+++ b/content/en/getting-started/help-and-support/index.md
@@ -10,7 +10,9 @@ cascade:
## Introduction
-We strive to make it as easy as possible for you to use LocalStack, and we are very grateful for any feedback. We provide different levels of support to help you with your queries and issues. The support you receive depends on the plan you are on.
+We strive to make it as easy as possible for you to use LocalStack, and we are very grateful for any feedback.
+We provide different levels of support to help you with your queries and issues.
+The support you receive depends on the plan you are on.
| Plan | Support Level |
|------|---------------|
@@ -22,30 +24,37 @@ We strive to make it as easy as possible for you to use LocalStack, and we are v
## Community Support
-LocalStack's Community support is available to all users of the LocalStack Community Edition & Hobby Plan users. You can avail community support through the following channels:
+LocalStack's Community support is available to all users of the LocalStack Community Edition and the Hobby Plan.
+You can access community support through the following channels:
- [LocalStack Discuss](https://discuss.localstack.cloud/)
- [LocalStack Slack Community](https://localstack.cloud/slack)
- [GitHub Issue](https://github.com/localstack/docs/issues/new)
-Community support is provided on a best-effort basis and is not guaranteed. We also encourage you to help others in the community by answering questions and sharing your experiences.
+Community support is provided on a best-effort basis and is not guaranteed.
+We also encourage you to help others in the community by answering questions and sharing your experiences.
### LocalStack Discuss
-LocalStack Discuss allows our community users to ask questions, share ideas, and discuss topics related to LocalStack. To create a new topic on Discuss, follow these steps:
+LocalStack Discuss allows our community users to ask questions, share ideas, and discuss topics related to LocalStack.
+To create a new topic on Discuss, follow these steps:
- Create a new account on [LocalStack Discuss](https://discuss.localstack.cloud/) by clicking the **Sign Up** button.
- Once you have created an account, you can create a new topic by clicking the **New Topic** button.
- Choose the appropriate category for your topic and provide a title and description.
- Click the **Create Topic** button to submit your topic.
-LocalStack Discuss is public, allowing us to keep a record of these questions and answers for the larger community to use over time. However, you should avoid sharing any sensitive information on the platform (such as Auth Tokens, private configuration, etc.).
+LocalStack Discuss is public, allowing us to keep a record of these questions and answers for the larger community to use over time.
+However, you should avoid sharing any sensitive information on the platform (such as Auth Tokens, private configuration, etc.).
### LocalStack Slack Community
-LocalStack Slack Community includes LocalStack users, contributors, and maintainers. If you need help with the community version of LocalStack, please use the `#help` channel. You can sign up for the [LocalStack Slack Community](https://localstack.cloud/slack) by creating an account.
+LocalStack Slack Community includes LocalStack users, contributors, and maintainers.
+If you need help with the community version of LocalStack, please use the `#help` channel.
+You can sign up for the [LocalStack Slack Community](https://localstack.cloud/slack) by creating an account.
-However, the messages on Slack are not accessible after three months, so it is not the best place to ask questions that may be useful to others in the future. For that, we recommend using LocalStack Discuss.
+However, the messages on Slack are not accessible after three months, so it is not the best place to ask questions that may be useful to others in the future.
+For that, we recommend using LocalStack Discuss.
### GitHub Issue
@@ -54,7 +63,8 @@ You can use GitHub Issue to:
- [Request new features](https://github.com/localstack/localstack/issues/new?assignees=&labels=type%3A+feature%2Cstatus%3A+triage+needed&template=feature-request.yml&title=feature+request%3A+%3Ctitle%3E)
- [Report existing bugs](https://github.com/localstack/localstack/issues/new?assignees=&labels=type%3A+bug%2Cstatus%3A+triage+needed&template=bug-report.yml&title=bug%3A+%3Ctitle%3E)
-Make sure to follow the issue templates and provide as much information as possible. If you have encountered outdated documentation, please report it on our [documentation GitHub page](https://github.com/localstack/docs).
+Make sure to follow the issue templates and provide as much information as possible.
+If you have encountered outdated documentation, please report it on our [documentation GitHub page](https://github.com/localstack/docs).
## Dedicated support
@@ -101,7 +111,8 @@ To create a support ticket:
You can optionally choose to continue the conversation via email or via the Web Application.
{{< callout "note" >}}
-In many scenarios, we ask our customers to use Diagnosis endpoint to help us retrieve additional information. To use LocalStack's Diagnosis endpoint:
+In many scenarios, we ask our customers to use the Diagnosis endpoint to help us retrieve additional information.
+To use LocalStack's Diagnosis endpoint:
- Set the environment variable `LS_LOG=trace`
- Start LocalStack
@@ -114,7 +125,8 @@ Ensure that you avoid sending the diagnostic output to public channels or forums
## Enterprise Support
-A customer portal is a home behind a login where customers can view, open, and reply to their support tickets. Currently, the **customer portal** is only **available to Enterprise customers**.
+A customer portal is a home behind a login where customers can view, open, and reply to their support tickets.
+Currently, the **customer portal** is only **available to Enterprise customers**.
You can find the customer portal here: [https://support.localstack.cloud/portal](https://support.localstack.cloud/portal)
@@ -126,32 +138,39 @@ You can find the customer portal here: [https://support.localstack.cloud/portal]
If you are a member of an organization with an enterprise LocalStack subscription, you will receive an invitation to create an account and join the LocalStack Support Portal via email.
-Follow the instructions in the email and set up your account by clicking on the **Sign up** button. You will be asked to create a password. Once you do so, you will be able to log in and start using the customer portal to create, view, and engage with tickets.
+Follow the instructions in the email and set up your account by clicking on the **Sign up** button.
+You will be asked to create a password.
+Once you do so, you will be able to log in and start using the customer portal to create, view, and engage with tickets.
### Creating a Support Ticket
-You can open a new ticket with LocalStack support by going to the **Create a Support Ticket** link. You will be redirected to a form where you will have to provide certain information to file a new support ticket.
+You can open a new ticket with LocalStack support by going to the **Create a Support Ticket** link.
+You will be redirected to a form where you will have to provide certain information to file a new support ticket.
-
-{{< img src="file-a-support-ticket.png" alt="Filing a support ticket" class="img-fluid shadow rounded" width="800px" >}}
+
+{{< img src="file-a-support-ticket.png" alt="Filing a support ticket" class="img-fluid shadow rounded" width="800px" >}}
-The form consists of two parts. One is basic information, which is mandatory to fill out, and additional information, which adds more context to your issue but is not mandatory. Once all the mandatory fields are filled out, you can create a new support ticket by clicking on the Submit button. Once the ticket is submitted, it will be reported to LocalStack support, who will get back to you on that query as soon as possible. A ticket will show up in the ticket list as soon as it’s submitted.
+The form consists of two parts.
+The first is basic information, which is mandatory to fill out; the second is additional information, which adds more context to your issue but is not mandatory.
+Once all the mandatory fields are filled out, you can create a new support ticket by clicking on the **Submit** button.
+Once the ticket is submitted, it will be reported to LocalStack support, who will get back to you on that query as soon as possible.
+A ticket will show up in the ticket list as soon as it’s submitted.
#### Basic Information
You need to fill out the following fields, which are mandatory to open a new ticket:
-- **Type** - Choose the type of your query from the following options:
- - **Issue** - Select this when you are facing an issue using LocalStack.
- - **General inquiry** - Select this when you have a general question regarding LocalStack.
- - **Feature request** - Select this when you are looking for a feature that is not yet implemented in LocalStack.
-- **Ticket name** - Provide a descriptive name for the ticket that summarizes your inquiry.
-- **Description** - Provide a comprehensive description of your inquiry, explaining all the details that will help us understand your query.
+- **Type** - Choose the type of your query from the following options:
+ - **Issue** - Select this when you are facing an issue using LocalStack.
+ - **General inquiry** - Select this when you have a general question regarding LocalStack.
+ - **Feature request** - Select this when you are looking for a feature that is not yet implemented in LocalStack.
+- **Ticket name** - Provide a descriptive name for the ticket that summarizes your inquiry.
+- **Description** - Provide a comprehensive description of your inquiry, explaining all the details that will help us understand your query.
#### Additional Information
-- **CI Issue?** - If the query is related to a CI issue, select the one that best fits your query from the dropdown.
-- **Operating system** - From the dropdown, select the operating system you are using.
-- **Affected Services** - From the dropdown, select the AWS service that is affected in your query.
-- **File upload** - Here you can provide any additional files that you believe would be helpful for LocalStack support (e.g., screenshots, log files, etc.).
\ No newline at end of file
+- **CI Issue?** - If the query is related to a CI issue, select the one that best fits your query from the dropdown.
+- **Operating system** - From the dropdown, select the operating system you are using.
+- **Affected Services** - From the dropdown, select the AWS service that is affected in your query.
+- **File upload** - Here you can provide any additional files that you believe would be helpful for LocalStack support (e.g., screenshots, log files, etc.).
diff --git a/content/en/getting-started/installation.md b/content/en/getting-started/installation.md
index ab7cd89cd7..f399e639da 100644
--- a/content/en/getting-started/installation.md
+++ b/content/en/getting-started/installation.md
@@ -11,7 +11,8 @@ cascade:
## LocalStack CLI
-The quickest way get started with LocalStack is by using the LocalStack CLI. It allows you to start LocalStack from your command line.
+The quickest way to get started with LocalStack is by using the LocalStack CLI.
+It allows you to start LocalStack from your command line.
Please make sure that you have a working [Docker installation](https://docs.docker.com/get-docker/) on your machine before moving on.
The CLI starts and manages the LocalStack Docker container.
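+
+For example, assuming the CLI is already installed and Docker is running, a typical session with the CLI looks roughly like this:
+
+{{< command >}}
+$ localstack start -d
+$ localstack status services
+$ localstack stop
+{{< / command >}}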
@@ -357,7 +358,8 @@ $ docker run \
{{< callout "note" >}}
- This command pulls the current nightly build from the `master` branch (if you don't have the image locally) and **not** the latest supported version.
- If you want to use a specific version of LocalStack, use the appropriate tag: `docker run --rm -it -p 4566:4566 -p 4510-4559:4510-4559 localstack/localstack:`. Check-out the [LocalStack releases](https://github.com/localstack/localstack/releases) to know more about specific LocalStack versions.
+ If you want to use a specific version of LocalStack, use the appropriate tag: `docker run --rm -it -p 4566:4566 -p 4510-4559:4510-4559 localstack/localstack:<tag>`.
+ Check out the [LocalStack releases](https://github.com/localstack/localstack/releases) to know more about specific LocalStack versions.
- If you are using LocalStack with an [auth token]({{< ref "auth-token" >}}), you need to specify the image tag as `localstack/localstack-pro` in your Docker setup.
Going forward, `localstack/localstack-pro` image will contain our Pro-supported services and APIs.
@@ -371,7 +373,8 @@ $ docker run \
This could be seen as the "expert mode" of starting LocalStack.
If you are looking for a simpler method of starting LocalStack, please use the [LocalStack CLI]({{< ref "#localstack-cli" >}}).
-- To facilitate interoperability, configuration variables can be prefixed with `LOCALSTACK_` in docker. For instance, setting `LOCALSTACK_PERSISTENCE=1` is equivalent to `PERSISTENCE=1`.
+- To facilitate interoperability, configuration variables can be prefixed with `LOCALSTACK_` in Docker.
+ For instance, setting `LOCALSTACK_PERSISTENCE=1` is equivalent to `PERSISTENCE=1`.
- To configure an auth token, refer to the [auth token]({{< ref "auth-token" >}}) documentation.
{{< /callout >}}
@@ -395,7 +398,6 @@ $ helm upgrade --install localstack localstack-repo/localstack
The Helm charts are not maintained in the main repository, but in a [separate one](https://github.com/localstack/helm-charts).
-
## Updating
The LocalStack CLI allows you to easily update the different components of LocalStack.
@@ -445,7 +447,8 @@ $ DNS_ADDRESS=0 localstack start
#### How should I access the LocalStack logs on my local machine?
-You can now avail logging output and error reporting using LocalStack logs. To access the logs, run the following command:
+You can now access logging output and error reporting using LocalStack logs.
+To access the logs, run the following command:
{{< command >}}
$ localstack logs
@@ -453,9 +456,11 @@ $ localstack logs
AWS requests are now logged uniformly in the INFO log level (set by default or when `DEBUG=0`).
The format is:
+
```text
AWS <service>.<operation> => <http-status> (<error type>)
```
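+
+For instance, a successful local S3 call would be logged roughly as follows (service, operation, and status code are illustrative):
+
+```text
+AWS s3.CreateBucket => 200
+```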
+
Requests to HTTP endpoints are logged in a similar way:
```text
diff --git a/content/en/getting-started/quickstart/index.md b/content/en/getting-started/quickstart/index.md
index f2ba1878b1..07a06a22cd 100644
--- a/content/en/getting-started/quickstart/index.md
+++ b/content/en/getting-started/quickstart/index.md
@@ -10,7 +10,9 @@ cascade:
## Introduction
-In this quickstart guide, we'll walk you through the process of starting LocalStack on your local machine and deploying a [serverless image resizer application](https://github.com/localstack-samples/sample-serverless-image-resizer-s3-lambda) that utilizes several AWS services. This guide aims to help you understand how to use LocalStack for the development and testing of your AWS applications locally. It introduces you to the following key concepts:
+In this quickstart guide, we'll walk you through the process of starting LocalStack on your local machine and deploying a [serverless image resizer application](https://github.com/localstack-samples/sample-serverless-image-resizer-s3-lambda) that utilizes several AWS services.
+This guide aims to help you understand how to use LocalStack for the development and testing of your AWS applications locally.
+It introduces you to the following key concepts:
- Starting a LocalStack instance on your local machine.
- Deploying an AWS serverless application infrastructure locally.
@@ -46,7 +48,8 @@ An internal SES LocalStack testing endpoint (`/_localstack/aws/ses`) is configur
- [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) & [`awslocal` wrapper](https://docs.localstack.cloud/user-guide/integrations/aws-cli/#localstack-aws-cli-awslocal)
- `jq`, `zip` & `curl`
-You can start LocalStack using the `localstack` CLI. Start the LocalStack Pro container with your `LOCALSTACK_AUTH_TOKEN` pre-configured:
+You can start LocalStack using the `localstack` CLI.
+Start the LocalStack Pro container with your `LOCALSTACK_AUTH_TOKEN` pre-configured:
{{< tabpane >}}
{{< tab header="macOS/Linux" lang="shell" >}}
@@ -74,7 +77,9 @@ You can now follow the instructions below to start LocalStack, deploy the sample
### Setup a virtual environment
-To deploy the sample application, you need to have specific Python packages are installed. It is advisable to utilize a virtual environment for the installation process, allowing the packages to be installed in an isolated environment. Execute the following commands to create a virtual environment and install the packages in `requirements-dev.txt`:
+To deploy the sample application, you need to have specific Python packages installed.
+It is advisable to utilize a virtual environment for the installation process, allowing the packages to be installed in an isolated environment.
+Execute the following commands to create a virtual environment and install the packages in `requirements-dev.txt`:
{{< tabpane >}}
{{< tab header="macOS/Linux" lang="shell" >}}
@@ -90,7 +95,8 @@ pip install -r requirements-dev.txt
{{< /tabpane >}}
{{< callout "tip" >}}
-If you are encountering issues with the installation of the packages, such as Pillow, ensure you use the same version as the Python Lambdas (3.9) for Pillow to work. If you're using pyenv, install and activate Python 3.9 with the following commands:
+If you encounter issues installing packages such as Pillow, ensure that you use the same Python version as the Lambda functions (3.9) for Pillow to work.
+If you're using pyenv, install and activate Python 3.9 with the following commands:
{{< command >}}
$ pyenv install 3.9.0
$ pyenv global 3.9.0
@@ -99,9 +105,14 @@ $ pyenv global 3.9.0
### Setup the serverless image resizer
-This application enables serverless image resizing using [S3](https://docs.localstack.cloud/user-guide/aws/s3/), [SSM](https://docs.localstack.cloud/user-guide/aws/ssm/), [Lambda](https://docs.localstack.cloud/user-guide/aws/lambda/), [SNS](https://docs.localstack.cloud/user-guide/aws/sns/), and [SES](https://docs.localstack.cloud/user-guide/aws/ses/). A simple web interface allows users to upload and view resized images. A Lambda function generates S3 pre-signed URLs for direct uploads, while S3 bucket notifications trigger image resizing. Another Lambda function lists and provides pre-signed URLs for browser display. The application also handles Lambda failures through SNS and SES email notifications.
+This application enables serverless image resizing using [S3](https://docs.localstack.cloud/user-guide/aws/s3/), [SSM](https://docs.localstack.cloud/user-guide/aws/ssm/), [Lambda](https://docs.localstack.cloud/user-guide/aws/lambda/), [SNS](https://docs.localstack.cloud/user-guide/aws/sns/), and [SES](https://docs.localstack.cloud/user-guide/aws/ses/).
+A simple web interface allows users to upload and view resized images.
+A Lambda function generates S3 pre-signed URLs for direct uploads, while S3 bucket notifications trigger image resizing.
+Another Lambda function lists and provides pre-signed URLs for browser display.
+The application also handles Lambda failures through SNS and SES email notifications.
-The sample application uses AWS CLI and our `awslocal` wrapper to deploy the application to LocalStack. You can build and deploy the sample application on LocalStack by running the following command:
+The sample application uses AWS CLI and our `awslocal` wrapper to deploy the application to LocalStack.
+You can build and deploy the sample application on LocalStack by running the following command:
{{< command >}}
$ bin/deploy.sh
@@ -110,7 +121,8 @@ $ bin/deploy.sh
Alternatively, you can follow these instructions to deploy the sample application manually step-by-step.
{{< callout "tip" >}}
-In absence of the `awslocal` wrapper, you can use the `aws` CLI directly, by configuring an [endpoint URL](https://docs.localstack.cloud/user-guide/integrations/aws-cli/#configuring-an-endpoint-url) or a [custom profile](https://docs.localstack.cloud/user-guide/integrations/aws-cli/#configuring-a-custom-profile) like `localstack`. You can then swap `awslocal` with `aws --endpoint-url=http://localhost:4566` or `aws --profile=localstack` in the commands below.
+In the absence of the `awslocal` wrapper, you can use the `aws` CLI directly by configuring an [endpoint URL](https://docs.localstack.cloud/user-guide/integrations/aws-cli/#configuring-an-endpoint-url) or a [custom profile](https://docs.localstack.cloud/user-guide/integrations/aws-cli/#configuring-a-custom-profile) like `localstack`.
+You can then swap `awslocal` with `aws --endpoint-url=http://localhost:4566` or `aws --profile=localstack` in the commands below.
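+For example, assuming LocalStack is listening on the default port `4566`, the two forms below are equivalent (the bucket name is the one used later in this guide):
+
+{{< command >}}
+$ awslocal s3 mb s3://localstack-thumbnails-app-images
+$ aws --endpoint-url=http://localhost:4566 s3 mb s3://localstack-thumbnails-app-images
+{{< / command >}}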
{{< /callout >}}
#### Create the S3 buckets
@@ -139,7 +151,8 @@ $ awslocal ssm put-parameter \
$ awslocal sns create-topic --name failed-resize-topic
{{< / command >}}
-To receive immediate alerts in case of image resize failures, subscribe an email address to the system. You can use the following command to subscribe an email address to the SNS topic:
+To receive immediate alerts in case of image resize failures, subscribe an email address to the system.
+You can use the following command to subscribe an email address to the SNS topic:
{{< command >}}
$ awslocal sns subscribe \
@@ -213,7 +226,7 @@ mkdir package
pip install -r requirements.txt -t package
zip lambda.zip handler.py
cd package
-zip -r ../lambda.zip *;
+zip -r ../lambda.zip *;
cd ../..
{{< /tab >}}
{{< /tabpane >}}
@@ -270,17 +283,22 @@ To access the application, go to [**https://webapp.s3-website.localhost.localsta
-Paste the `presign` and `list` Lambda function URLs into the application and click **Apply**. Alternatively, click on **Load from API** to automatically load the URLs.
+Paste the `presign` and `list` Lambda function URLs into the application and click **Apply**.
+Alternatively, click on **Load from API** to automatically load the URLs.
-Upload an image, and click **Upload**. The upload form uses the `presign` Lambda to request an S3 pre-signed POST URL, forwarding the POST request to S3. Asynchronous resizing (maximum 400x400 pixels) occurs through S3 bucket notifications.
+Upload an image, and click **Upload**.
+The upload form uses the `presign` Lambda to request an S3 pre-signed POST URL, forwarding the POST request to S3.
+Asynchronous resizing (maximum 400x400 pixels) occurs through S3 bucket notifications.
-If successful, the application displays a **success!** alert. Click **Refresh** to trigger your browser to request the `list` Lambda URL, returning a JSON document of all items in the images (`localstack-thumbnails-app-images`) and resized images (`localstack-thumbnails-app-resized`) bucket.
+If successful, the application displays a **success!** alert.
+Click **Refresh** to trigger your browser to request the `list` Lambda URL, returning a JSON document of all items in the images (`localstack-thumbnails-app-images`) and resized images (`localstack-thumbnails-app-resized`) buckets.
### View the deployed resources
-You can inspect the resources deployed as part of the sample application by accessing the [**LocalStack Web Application**](https://app.localstack.cloud/). Navigate to your [**Default Instance**](https://app.localstack.cloud/inst/default/status) to view the deployed resources.
+You can inspect the resources deployed as part of the sample application by accessing the [**LocalStack Web Application**](https://app.localstack.cloud/).
+Navigate to your [**Default Instance**](https://app.localstack.cloud/inst/default/status) to view the deployed resources.
@@ -296,13 +314,15 @@ To run automated integration tests against the sample application, use the follo
$ pytest -v
{{< / command >}}
-Additionally, you can verify that when the `resize` Lambda fails, an SNS message is sent to a topic that an SES subscription listens to, triggering an email with the raw failure message. Since there's no real email server involved, you can use the LocalStack SES developer endpoint to list messages sent via SES:
+Additionally, you can verify that when the `resize` Lambda fails, an SNS message is sent to a topic that an SES subscription listens to, triggering an email with the raw failure message.
+Since there's no real email server involved, you can use the LocalStack SES developer endpoint to list messages sent via SES:
{{< command >}}
$ curl -s http://localhost.localstack.cloud:4566/_aws/ses | jq
{{< / command >}}
-An alternative option is to use a service like MailHog or `smtp4dev`. Start LocalStack with `SMTP_HOST=host.docker.internal:1025`, pointing to the mock SMTP server.
+An alternative option is to use a service like MailHog or `smtp4dev`.
+Start LocalStack with `SMTP_HOST=host.docker.internal:1025`, pointing to the mock SMTP server.
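+For example, assuming MailHog's default ports, such a setup could look roughly like this (the MailHog image and ports are not part of this sample):
+
+{{< command >}}
+$ docker run --rm -d -p 1025:1025 -p 8025:8025 mailhog/mailhog
+$ SMTP_HOST=host.docker.internal:1025 localstack start
+{{< / command >}}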
### Destroy the local infrastructure
@@ -312,13 +332,15 @@ Now that you've learned how to deploy a local AWS infrastructure for your sample
$ localstack stop
{{< / command >}}
-LocalStack is ephemeral, meaning it doesn't persist any data across restarts. It runs inside a Docker container, and once it's stopped, all locally created resources are automatically removed.
+LocalStack is ephemeral, meaning it doesn't persist any data across restarts.
+It runs inside a Docker container, and once it's stopped, all locally created resources are automatically removed.
To persist the local cloud resources across restarts, navigate to our [persistence documentation]({{< ref "user-guide/state-management/persistence" >}}) or learn about [Cloud Pods]({{< ref "user-guide/state-management/cloud-pods" >}}), our next generation state management utility.
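+
+As a minimal example, persistence can be enabled by setting the `PERSISTENCE` configuration flag when starting LocalStack:
+
+{{< command >}}
+$ PERSISTENCE=1 localstack start
+{{< / command >}}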
## Next Steps
-Congratulations on deploying an AWS application locally using LocalStack! To expand your LocalStack capabilities, explore the following based on your expertise:
+Congratulations on deploying an AWS application locally using LocalStack!
+To expand your LocalStack capabilities, explore the following based on your expertise:
- [Tutorials]({{< ref "tutorials" >}}): Check out our tutorials to learn how to use LocalStack across various AWS services and application stacks.
- [User Guide]({{< ref "user-guide" >}}): Explore LocalStack's emulated AWS services, third-party integrations, tooling, CI service providers, and more in our User Guide.
diff --git a/content/en/legal/third-party-software-tools/index.md b/content/en/legal/third-party-software-tools/index.md
index b71a0d3f8b..11c93e3d6c 100644
--- a/content/en/legal/third-party-software-tools/index.md
+++ b/content/en/legal/third-party-software-tools/index.md
@@ -27,4 +27,4 @@ requests | Apache License 2.0
subprocess32 | PSF License
**Other tools:** |
Elasticsearch | Apache License 2.0
-kinesis-mock | MIT License
\ No newline at end of file
+kinesis-mock | MIT License
diff --git a/content/en/references/api-key.md b/content/en/references/api-key.md
index b0dcdeccc5..6e5c99f5cc 100644
--- a/content/en/references/api-key.md
+++ b/content/en/references/api-key.md
@@ -10,9 +10,11 @@ aliases:
---
{{< callout "warning" >}}
-- LocalStack is transitioning from API Keys to Auth Tokens for activation. Auth Tokens streamline license management and remove the need for developers to adjust their setup when license changes occur.
-- For detailed information and guidance on migrating your LocalStack setup to Auth Tokens, please consult our [Auth Token documentation]({{< ref "auth-token" >}}).
-- API Keys will remain functional for LocalStack Pro and Enterprise users until the next major release. Following this release, LocalStack Pro and Enterprise will exclusively use Auth Tokens.
+- LocalStack is transitioning from API Keys to Auth Tokens for activation.
+ Auth Tokens streamline license management and remove the need for developers to adjust their setup when license changes occur.
+- For detailed information and guidance on migrating your LocalStack setup to Auth Tokens, please consult our [Auth Token documentation]({{< ref "auth-token" >}}).
+- API Keys will remain functional for LocalStack Pro and Enterprise users until the next major release.
+ Following this release, LocalStack Pro and Enterprise will exclusively use Auth Tokens.
{{< /callout >}}
The LocalStack API key is a unique identifier to activate your LocalStack license needed to start LocalStack Pro.
@@ -20,7 +22,8 @@ You can find your API key in the [LocalStack Web app](https://app.localstack.clo
This guide demonstrates how you can use your new LocalStack licenses and goes over some best practices regarding the usage, activation, and safety of your LocalStack API key.
{{< callout "warning" >}}
-- Avoid sharing your API key with anyone. Ensure that you do not commit it to any source code management systems (like Git repositories).
+- Avoid sharing your API key with anyone.
+ Ensure that you do not commit it to any source code management systems (like Git repositories).
- If you push an API key to a public repository, it has potentially been exposed and might remain in the history (even if you try to rewrite it).
- If you accidentally publish your API key, please [contact us](https://localstack.cloud/contact/) immediately to get your API key rotated!
- If you want to use your API key in your CI environment, check out our [CI documentation]({{< ref "user-guide/ci" >}}) to see the proper way to handle secrets in your CI environment to store your API key securely.
@@ -28,7 +31,8 @@ This guide demonstrates how you can use your new LocalStack licenses and go over
### Starting LocalStack via CLI
-LocalStack expects your API key to be present in the environment variable `LOCALSTACK_API_KEY`. You can define the `LOCALSTACK_API_KEY` environment variable before or while starting LocalStack using the `localstack` CLI.
+LocalStack expects your API key to be present in the environment variable `LOCALSTACK_API_KEY`.
+You can define the `LOCALSTACK_API_KEY` environment variable before or while starting LocalStack using the `localstack` CLI.
{{< tabpane >}}
{{< tab header="macOS/Linux" lang="shell" >}}
@@ -72,7 +76,8 @@ environment:
- LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY- }
```
-You can set the API key manually, or you can use the `export` command to set the API key in your current shell session. The API key will be passed into your LocalStack container, such that the key activation can take place.
+You can set the API key manually, or you can use the `export` command to set the API key in your current shell session.
+The API key will be passed into your LocalStack container so that the key activation can take place.
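+
+For example, assuming the Docker Compose snippet above, this could look roughly like the following (the key value is a placeholder):
+
+{{< command >}}
+$ export LOCALSTACK_API_KEY=<YOUR_API_KEY>
+$ docker compose up
+{{< / command >}}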
## Licensing-related configuration
@@ -86,7 +91,8 @@ The easiest way to check if LocalStack is activated is to query the health endpo
$ curl localhost:4566/_localstack/health | jq
{{< / command >}}
-If a Pro-only [service]({{< ref "aws" >}}) -- like [XRay]({{< ref "xray" >}}) -- is running, LocalStack has started successfully. You can also check the logs of the LocalStack container to see if the activation was successful.
+If a Pro-only [service]({{< ref "aws" >}}) -- like [XRay]({{< ref "xray" >}}) -- is running, LocalStack has started successfully.
+You can also check the logs of the LocalStack container to see if the activation was successful.
{{< command >}}
[...] Successfully activated API key
@@ -99,7 +105,7 @@ Otherwise, check our collected most [common activation issues](#common-activatio
Since LocalStack v2.0.0, the image `localstack/localstack-pro` requires a successful key activation to start.
If the key activation fails, LocalStack will quit with an error message that may look something like this:
-```
+```bash
===============================================
API key activation failed! 🔑❌
@@ -113,9 +119,9 @@ Due to this error, Localstack has quit. LocalStack pro features can only be used
```
There are several reasons a key activation can fail:
-* Missing credentials: Using `localstack/localstack-pro` requires per default to have an API key set.
-* Invalid key: there is no valid license associated with the key, for example because the license has expired.
-* License server cannot be reached: LocalStack will try to perform an offline license activation if the license server cannot be reached, but will require a re-activation every 24 hours.
+- Missing credentials: Using `localstack/localstack-pro` requires an API key to be set by default.
+- Invalid key: there is no valid license associated with the key, for example because the license has expired.
+- License server cannot be reached: LocalStack will try to perform an offline license activation if the license server cannot be reached, but will require a re-activation every 24 hours.
If you are using the `localstack/localstack-pro` image, but cannot activate your license key, we recommend falling back to the community image `localstack/localstack`.
If that is not an option, you can set `ACTIVATE_PRO=0` which will attempt to start LocalStack without pro features.
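+
+For example, with the Docker setup shown earlier, this could look roughly like the following (port mappings shortened for brevity):
+
+{{< command >}}
+$ docker run --rm -it -p 4566:4566 -e ACTIVATE_PRO=0 localstack/localstack-pro
+{{< / command >}}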
diff --git a/content/en/references/arm64-support/index.md b/content/en/references/arm64-support/index.md
index c5baeb8139..5a83d2b355 100644
--- a/content/en/references/arm64-support/index.md
+++ b/content/en/references/arm64-support/index.md
@@ -119,8 +119,8 @@ pyenv global 3.11.9
Then clone LocalStack to your machine, run `make install` and then `make start`.
-
### Raspberry Pi
+
If you want to run LocalStack on your Raspberry Pi, make sure to use a 64bit operating system.
In our experience, it works best on a Raspberry Pi 4 8GB with [Ubuntu Server 20.04 64Bit for Raspberry Pi](https://ubuntu.com/download/raspberry-pi).
diff --git a/content/en/references/configuration.md b/content/en/references/configuration.md
index 06bfdcda5c..e6b1ef43cc 100644
--- a/content/en/references/configuration.md
+++ b/content/en/references/configuration.md
@@ -20,7 +20,8 @@ For instance, setting `LOCALSTACK_PERSISTENCE=1` is equivalent to `PERSISTENCE=1
You can also use [Profiles](#profiles).
-Configurations marked as **Deprecated** will be removed in the next major version. You can find previously removed configuration variables under [Legacy](#legacy).
+Configurations marked as **Deprecated** will be removed in the next major version.
+You can find previously removed configuration variables under [Legacy](#legacy).
## Core
@@ -43,8 +44,6 @@ Options that affect the core LocalStack system.
| `ALLOW_NONSTANDARD_REGIONS` | `0` (default) | Allows the use of non-standard AWS regions. By default, LocalStack only accepts [standard AWS regions](https://docs.aws.amazon.com/general/latest/gr/rande.html). |
| `PARITY_AWS_ACCESS_KEY_ID` | `0` (default) | Enables the use of production-like access key IDs. By default, LocalStack issues keys with `LSIA...` and `LKIA...` prefix, and will reject keys that start with `ASIA...` or `AKIA...`. |
-[1]: http://docs.aws.amazon.com/cli/latest/reference/#available-services
-
## CLI
These options are applicable when using the CLI to start LocalStack.
@@ -92,9 +91,10 @@ This section covers configuration options that are specific to certain AWS servi
| `BIGDATA_DOCKER_FLAGS` | | Additional flags for the bigdata container. Same restrictions as `LAMBDA_DOCKER_FLAGS`.
### CloudFormation
+
| Variable | Example Values | Description |
| - | - | - |
-| `CFN_LEGACY_TEMPLATE_DEPLOYER` | `0` (default) \|`1` | Switch back to the old deployment engine. Note that this is only available temporarily to allow for a smoother roll-out of the new deployment order.
+| `CFN_LEGACY_TEMPLATE_DEPLOYER` | `0` (default) \|`1` | Switch back to the old deployment engine. Note that this is only available temporarily to allow for a smoother roll-out of the new deployment order.
| `CFN_PER_RESOURCE_TIMEOUT` | `300` (default) | Set the timeout to deploy each individual CloudFormation resource.
| `CFN_VERBOSE_ERRORS` | `0` (default) \|`1` | Show exceptions for CloudFormation deploy errors.
| `CFN_STRING_REPLACEMENT_DENY_LIST` | `""` (default) \|`https://api-1.execute-api.us-east-2.amazonaws.com/test-resource,https://api-2.execute-api.us-east-2.amazonaws.com/test-resource` | Comma-separated list of AWS URLs that should not be modified to point to LocalStack. For example, when deploying a CloudFormation template we might want to leave certain resources pointing to actual AWS URLs, or even leave environment variables with URLs like that untouched.
@@ -124,7 +124,7 @@ This section covers configuration options that are specific to certain AWS servi
| `DYNAMODB_CORS` | `*` | Enable CORS support for a specific allow-list; list the domains separated by `,`, or use `*` for public access (default is `*`) |
| `DYNAMODB_REMOVE_EXPIRED_ITEMS` | `0`\|`1` | Enables [Time to Live (TTL)](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html) feature |
-### ECR
+### ECR
| Variable | Example Values | Description |
| - | - | - |
@@ -165,7 +165,6 @@ This section covers configuration options that are specific to certain AWS servi
| `PROVIDER_OVERRIDE_ELASTICACHE` | `legacy` | Use the legacy ElastiCache provider. |
| `REDIS_CONTAINER_MODE` | `1`\|`0` (default) | Start ElastiCache cache nodes in separate containers instead of in the LocalStack container |
-
### Elasticsearch
{{< callout >}}
@@ -180,6 +179,7 @@ See [here](#opensearch).
| `PROVIDER_OVERRIDE_EVENTS` | `v2` | Use the new EventBridge provider. |
### IAM
+
| Variable | Example Values | Description |
| - | - | - |
| `ENFORCE_IAM` | `0` (default)\|`1` | Enable IAM policy evaluation and enforcement. If this is disabled (the default), IAM policies will have no effect on your requests. |
@@ -313,7 +313,6 @@ Please be aware that the following options may have severe security implications
| `EXTRA_CORS_EXPOSE_HEADERS` | | Comma-separated list of header names to be added to the Access-Control-Expose-Headers CORS header. |
| `ENABLE_CONFIG_UPDATES` | `0` (default) | Whether to enable dynamic configuration updates at runtime. |
-
## Emails
Please check with your SMTP email service provider for the following settings.
@@ -375,7 +374,6 @@ To learn more about these configuration options, see [Cloud Pods]({{< ref "user-
| `DEVELOP_PORT` | | Port number for debugpy server
| `WAIT_FOR_DEBUGGER` | | Forces LocalStack to wait for a debugger to start the services
-
## DNS
To learn more about these configuration options, see [DNS Server]({{< ref "dns-server" >}}).
@@ -403,7 +401,6 @@ To learn more about these configuration options, see [DNS Server]({{< ref "dns-s
| `LOCALSTACK_API_KEY` | | **Deprecated since 3.0.0** [API key]({{< ref "api-key" >}}) to activate LocalStack Pro.
**Use the `LOCALSTACK_AUTH_TOKEN` instead (except for [CI environments]({{< ref "user-guide/ci/" >}})).** |
| `LOG_LICENSE_ISSUES` | `0` \| `1` (default) | Whether to log issues with the license activation to the console. |
-
## Legacy
These configurations have already been removed and **won't have any effect** on newer versions of LocalStack.
@@ -416,7 +413,7 @@ These configurations have already been removed and **won't have any effect** on
| `<SERVICE>_BACKEND` | 3.0.0 | `http://localhost:7577` | Custom endpoint URL to use for a specific service, where `<SERVICE>` is the uppercase service name. |
| `<SERVICE>_PORT_EXTERNAL` | 3.0.0 | `4567` | Port number to expose a specific service externally. For example, `SQS_PORT_EXTERNAL` is used when returning queue URLs from the SQS service to the client. |
| `ACTIVATE_NEW_POD_CLIENT` | 3.0.0 | `0`\|`1` (default) | Whether to use the new Cloud Pods client leveraging LocalStack container's APIs. |
-| `BIGDATA_MONO_CONTAINER` | 3.0.0 |`0`\|`1` (default) | Whether to spin Big Data services inside the LocalStack main container. Glue jobs breaks when using `BIGDATA_MONO_CONTAINER=0`. |
+| `BIGDATA_MONO_CONTAINER` | 3.0.0 | `0`\|`1` (default) | Whether to spin up Big Data services inside the LocalStack main container. Glue jobs break when using `BIGDATA_MONO_CONTAINER=0`. |
| `DEFAULT_REGION` | 3.0.0 | `us-east-1` (default) | AWS region to use when talking to the API (needs to be activated via `USE_SINGLE_REGION=1`). LocalStack now has full multi-region support. |
| `EDGE_BIND_HOST` | 3.0.0 | `127.0.0.1` (default), `0.0.0.0` (docker)| Address the edge service binds to. Use `GATEWAY_LISTEN` instead. |
| `EDGE_FORWARD_URL` | 3.0.0 | `http://10.0.10.5678` | Optional target URL to forward all edge requests to (e.g., for distributed deployments) |
diff --git a/content/en/references/coverage/_index.md b/content/en/references/coverage/_index.md
index f2c30713ef..3e4f671c2d 100644
--- a/content/en/references/coverage/_index.md
+++ b/content/en/references/coverage/_index.md
@@ -18,7 +18,7 @@ function searchForServiceNameInLink() {
var input, filter, div, elements, a, i, txtValue;
input = document.getElementById('serviceNameCoverageInput');
filter = input.value.toUpperCase();
- div = document.getElementsByClassName('section-index')[0]
+ div = document.getElementsByClassName('section-index')[0];
elements = div.getElementsByClassName('entry');
// Loop through all list items, and hide those who don't match the search query
diff --git a/content/en/references/coverage/coverage_account/index.md b/content/en/references/coverage/coverage_account/index.md
index ba948edb13..e8ac970b6f 100644
--- a/content/en/references/coverage/coverage_account/index.md
+++ b/content/en/references/coverage/coverage_account/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="account" >}}
## Testing Details
+
{{< localstack_coverage_details service="account" >}}
diff --git a/content/en/references/coverage/coverage_acm-pca/index.md b/content/en/references/coverage/coverage_acm-pca/index.md
index bccaf32ee2..00ee7c6f7a 100644
--- a/content/en/references/coverage/coverage_acm-pca/index.md
+++ b/content/en/references/coverage/coverage_acm-pca/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="acm-pca" >}}
## Testing Details
+
{{< localstack_coverage_details service="acm-pca" >}}
diff --git a/content/en/references/coverage/coverage_acm/index.md b/content/en/references/coverage/coverage_acm/index.md
index 83880babfb..eaf8627e6d 100644
--- a/content/en/references/coverage/coverage_acm/index.md
+++ b/content/en/references/coverage/coverage_acm/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="acm" >}}
## Testing Details
+
{{< localstack_coverage_details service="acm" >}}
diff --git a/content/en/references/coverage/coverage_amplify/index.md b/content/en/references/coverage/coverage_amplify/index.md
index 1bc1befa04..c22d178e2e 100644
--- a/content/en/references/coverage/coverage_amplify/index.md
+++ b/content/en/references/coverage/coverage_amplify/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="amplify" >}}
## Testing Details
+
{{< localstack_coverage_details service="amplify" >}}
diff --git a/content/en/references/coverage/coverage_apigateway/index.md b/content/en/references/coverage/coverage_apigateway/index.md
index e8a759da15..e9853b58b8 100644
--- a/content/en/references/coverage/coverage_apigateway/index.md
+++ b/content/en/references/coverage/coverage_apigateway/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="apigateway" >}}
## Testing Details
+
{{< localstack_coverage_details service="apigateway" >}}
diff --git a/content/en/references/coverage/coverage_apigatewaymanagementapi/index.md b/content/en/references/coverage/coverage_apigatewaymanagementapi/index.md
index 08e63020d2..75671e74ca 100644
--- a/content/en/references/coverage/coverage_apigatewaymanagementapi/index.md
+++ b/content/en/references/coverage/coverage_apigatewaymanagementapi/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="apigatewaymanagementapi" >}}
## Testing Details
+
{{< localstack_coverage_details service="apigatewaymanagementapi" >}}
diff --git a/content/en/references/coverage/coverage_apigatewayv2/index.md b/content/en/references/coverage/coverage_apigatewayv2/index.md
index 602bdda08f..da35a49e7c 100644
--- a/content/en/references/coverage/coverage_apigatewayv2/index.md
+++ b/content/en/references/coverage/coverage_apigatewayv2/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="apigatewayv2" >}}
## Testing Details
+
{{< localstack_coverage_details service="apigatewayv2" >}}
diff --git a/content/en/references/coverage/coverage_appconfig/index.md b/content/en/references/coverage/coverage_appconfig/index.md
index 88b151cbf1..3bc202698e 100644
--- a/content/en/references/coverage/coverage_appconfig/index.md
+++ b/content/en/references/coverage/coverage_appconfig/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="appconfig" >}}
## Testing Details
+
{{< localstack_coverage_details service="appconfig" >}}
diff --git a/content/en/references/coverage/coverage_appconfigdata/index.md b/content/en/references/coverage/coverage_appconfigdata/index.md
index cbe636ea14..4b3bb2e949 100644
--- a/content/en/references/coverage/coverage_appconfigdata/index.md
+++ b/content/en/references/coverage/coverage_appconfigdata/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="appconfigdata" >}}
## Testing Details
+
{{< localstack_coverage_details service="appconfigdata" >}}
diff --git a/content/en/references/coverage/coverage_application-autoscaling/index.md b/content/en/references/coverage/coverage_application-autoscaling/index.md
index 9019113b22..097920d8db 100644
--- a/content/en/references/coverage/coverage_application-autoscaling/index.md
+++ b/content/en/references/coverage/coverage_application-autoscaling/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="application-autoscaling" >}}
## Testing Details
+
{{< localstack_coverage_details service="application-autoscaling" >}}
diff --git a/content/en/references/coverage/coverage_appsync/index.md b/content/en/references/coverage/coverage_appsync/index.md
index 44395ecde8..4d8267c3d5 100644
--- a/content/en/references/coverage/coverage_appsync/index.md
+++ b/content/en/references/coverage/coverage_appsync/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="appsync" >}}
## Testing Details
+
{{< localstack_coverage_details service="appsync" >}}
diff --git a/content/en/references/coverage/coverage_athena/index.md b/content/en/references/coverage/coverage_athena/index.md
index 879cc26e7b..f1b173a3be 100644
--- a/content/en/references/coverage/coverage_athena/index.md
+++ b/content/en/references/coverage/coverage_athena/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="athena" >}}
## Testing Details
+
{{< localstack_coverage_details service="athena" >}}
diff --git a/content/en/references/coverage/coverage_autoscaling/index.md b/content/en/references/coverage/coverage_autoscaling/index.md
index 332510daf8..ac75ba3c12 100644
--- a/content/en/references/coverage/coverage_autoscaling/index.md
+++ b/content/en/references/coverage/coverage_autoscaling/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="autoscaling" >}}
## Testing Details
+
{{< localstack_coverage_details service="autoscaling" >}}
diff --git a/content/en/references/coverage/coverage_backup/index.md b/content/en/references/coverage/coverage_backup/index.md
index a726d2a82c..3a1ff820ba 100644
--- a/content/en/references/coverage/coverage_backup/index.md
+++ b/content/en/references/coverage/coverage_backup/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="backup" >}}
## Testing Details
+
{{< localstack_coverage_details service="backup" >}}
diff --git a/content/en/references/coverage/coverage_batch/index.md b/content/en/references/coverage/coverage_batch/index.md
index 54147dfee8..6721a03e95 100644
--- a/content/en/references/coverage/coverage_batch/index.md
+++ b/content/en/references/coverage/coverage_batch/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="batch" >}}
## Testing Details
+
{{< localstack_coverage_details service="batch" >}}
diff --git a/content/en/references/coverage/coverage_ce/index.md b/content/en/references/coverage/coverage_ce/index.md
index 7903944c12..921013f34d 100644
--- a/content/en/references/coverage/coverage_ce/index.md
+++ b/content/en/references/coverage/coverage_ce/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="ce" >}}
## Testing Details
+
{{< localstack_coverage_details service="ce" >}}
diff --git a/content/en/references/coverage/coverage_cloudformation/index.md b/content/en/references/coverage/coverage_cloudformation/index.md
index 180eeab28d..ee55e97da9 100644
--- a/content/en/references/coverage/coverage_cloudformation/index.md
+++ b/content/en/references/coverage/coverage_cloudformation/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="cloudformation" >}}
## Testing Details
+
{{< localstack_coverage_details service="cloudformation" >}}
diff --git a/content/en/references/coverage/coverage_cloudfront/index.md b/content/en/references/coverage/coverage_cloudfront/index.md
index b5fb87d1b2..456f1094a1 100644
--- a/content/en/references/coverage/coverage_cloudfront/index.md
+++ b/content/en/references/coverage/coverage_cloudfront/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="cloudfront" >}}
## Testing Details
+
{{< localstack_coverage_details service="cloudfront" >}}
diff --git a/content/en/references/coverage/coverage_cloudtrail/index.md b/content/en/references/coverage/coverage_cloudtrail/index.md
index 2dc5115b01..807ad54f1d 100644
--- a/content/en/references/coverage/coverage_cloudtrail/index.md
+++ b/content/en/references/coverage/coverage_cloudtrail/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="cloudtrail" >}}
## Testing Details
+
{{< localstack_coverage_details service="cloudtrail" >}}
diff --git a/content/en/references/coverage/coverage_cloudwatch/index.md b/content/en/references/coverage/coverage_cloudwatch/index.md
index 7b8e976c40..f1245f890b 100644
--- a/content/en/references/coverage/coverage_cloudwatch/index.md
+++ b/content/en/references/coverage/coverage_cloudwatch/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="cloudwatch" >}}
## Testing Details
+
{{< localstack_coverage_details service="cloudwatch" >}}
diff --git a/content/en/references/coverage/coverage_codecommit/index.md b/content/en/references/coverage/coverage_codecommit/index.md
index 9a3c846b06..5cfe58e9b5 100644
--- a/content/en/references/coverage/coverage_codecommit/index.md
+++ b/content/en/references/coverage/coverage_codecommit/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="codecommit" >}}
## Testing Details
+
{{< localstack_coverage_details service="codecommit" >}}
diff --git a/content/en/references/coverage/coverage_cognito-identity/index.md b/content/en/references/coverage/coverage_cognito-identity/index.md
index b20a2cdc01..1f6adde57e 100644
--- a/content/en/references/coverage/coverage_cognito-identity/index.md
+++ b/content/en/references/coverage/coverage_cognito-identity/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="cognito-identity" >}}
## Testing Details
+
{{< localstack_coverage_details service="cognito-identity" >}}
diff --git a/content/en/references/coverage/coverage_cognito-idp/index.md b/content/en/references/coverage/coverage_cognito-idp/index.md
index 811f648998..8f433d967d 100644
--- a/content/en/references/coverage/coverage_cognito-idp/index.md
+++ b/content/en/references/coverage/coverage_cognito-idp/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="cognito-idp" >}}
## Testing Details
+
{{< localstack_coverage_details service="cognito-idp" >}}
diff --git a/content/en/references/coverage/coverage_config/index.md b/content/en/references/coverage/coverage_config/index.md
index 4257a2c520..6e2943f550 100644
--- a/content/en/references/coverage/coverage_config/index.md
+++ b/content/en/references/coverage/coverage_config/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="config" >}}
## Testing Details
+
{{< localstack_coverage_details service="config" >}}
diff --git a/content/en/references/coverage/coverage_dms/index.md b/content/en/references/coverage/coverage_dms/index.md
index 09fda7837b..21c66a99e2 100644
--- a/content/en/references/coverage/coverage_dms/index.md
+++ b/content/en/references/coverage/coverage_dms/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="dms" >}}
## Testing Details
+
{{< localstack_coverage_details service="dms" >}}
diff --git a/content/en/references/coverage/coverage_docdb/index.md b/content/en/references/coverage/coverage_docdb/index.md
index 0a22d0ae54..2fae2b5405 100644
--- a/content/en/references/coverage/coverage_docdb/index.md
+++ b/content/en/references/coverage/coverage_docdb/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="docdb" >}}
## Testing Details
+
{{< localstack_coverage_details service="docdb" >}}
diff --git a/content/en/references/coverage/coverage_dynamodb/index.md b/content/en/references/coverage/coverage_dynamodb/index.md
index ef7bbdd441..661e0419de 100644
--- a/content/en/references/coverage/coverage_dynamodb/index.md
+++ b/content/en/references/coverage/coverage_dynamodb/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="dynamodb" >}}
## Testing Details
+
{{< localstack_coverage_details service="dynamodb" >}}
diff --git a/content/en/references/coverage/coverage_dynamodbstreams/index.md b/content/en/references/coverage/coverage_dynamodbstreams/index.md
index 4cb2adbd9d..c0a4dc6f5c 100644
--- a/content/en/references/coverage/coverage_dynamodbstreams/index.md
+++ b/content/en/references/coverage/coverage_dynamodbstreams/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="dynamodbstreams" >}}
## Testing Details
+
{{< localstack_coverage_details service="dynamodbstreams" >}}
diff --git a/content/en/references/coverage/coverage_ec2/index.md b/content/en/references/coverage/coverage_ec2/index.md
index 8161e65db1..74cd22f8db 100644
--- a/content/en/references/coverage/coverage_ec2/index.md
+++ b/content/en/references/coverage/coverage_ec2/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="ec2" >}}
## Testing Details
+
{{< localstack_coverage_details service="ec2" >}}
diff --git a/content/en/references/coverage/coverage_ecr/index.md b/content/en/references/coverage/coverage_ecr/index.md
index c304057805..dfc028d0ae 100644
--- a/content/en/references/coverage/coverage_ecr/index.md
+++ b/content/en/references/coverage/coverage_ecr/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="ecr" >}}
## Testing Details
+
{{< localstack_coverage_details service="ecr" >}}
diff --git a/content/en/references/coverage/coverage_ecs/index.md b/content/en/references/coverage/coverage_ecs/index.md
index abbc197672..c49670afb6 100644
--- a/content/en/references/coverage/coverage_ecs/index.md
+++ b/content/en/references/coverage/coverage_ecs/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="ecs" >}}
## Testing Details
+
{{< localstack_coverage_details service="ecs" >}}
diff --git a/content/en/references/coverage/coverage_efs/index.md b/content/en/references/coverage/coverage_efs/index.md
index 36c54330dd..c6afc095f7 100644
--- a/content/en/references/coverage/coverage_efs/index.md
+++ b/content/en/references/coverage/coverage_efs/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="efs" >}}
## Testing Details
+
{{< localstack_coverage_details service="efs" >}}
diff --git a/content/en/references/coverage/coverage_eks/index.md b/content/en/references/coverage/coverage_eks/index.md
index 07d2ddad47..74bc35218b 100644
--- a/content/en/references/coverage/coverage_eks/index.md
+++ b/content/en/references/coverage/coverage_eks/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="eks" >}}
## Testing Details
+
{{< localstack_coverage_details service="eks" >}}
diff --git a/content/en/references/coverage/coverage_elasticache/index.md b/content/en/references/coverage/coverage_elasticache/index.md
index e99d6bfc84..29a8571182 100644
--- a/content/en/references/coverage/coverage_elasticache/index.md
+++ b/content/en/references/coverage/coverage_elasticache/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="elasticache" >}}
## Testing Details
+
{{< localstack_coverage_details service="elasticache" >}}
diff --git a/content/en/references/coverage/coverage_elasticbeanstalk/index.md b/content/en/references/coverage/coverage_elasticbeanstalk/index.md
index 255696e9e6..e7e72880ee 100644
--- a/content/en/references/coverage/coverage_elasticbeanstalk/index.md
+++ b/content/en/references/coverage/coverage_elasticbeanstalk/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="elasticbeanstalk" >}}
## Testing Details
+
{{< localstack_coverage_details service="elasticbeanstalk" >}}
diff --git a/content/en/references/coverage/coverage_elastictranscoder/index.md b/content/en/references/coverage/coverage_elastictranscoder/index.md
index 9340b111c6..711e6cf1b3 100644
--- a/content/en/references/coverage/coverage_elastictranscoder/index.md
+++ b/content/en/references/coverage/coverage_elastictranscoder/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="elastictranscoder" >}}
## Testing Details
+
{{< localstack_coverage_details service="elastictranscoder" >}}
diff --git a/content/en/references/coverage/coverage_elb/index.md b/content/en/references/coverage/coverage_elb/index.md
index b0fc74eb27..1af341e56c 100644
--- a/content/en/references/coverage/coverage_elb/index.md
+++ b/content/en/references/coverage/coverage_elb/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="elb" >}}
## Testing Details
+
{{< localstack_coverage_details service="elb" >}}
diff --git a/content/en/references/coverage/coverage_elbv2/index.md b/content/en/references/coverage/coverage_elbv2/index.md
index 815a45c6fa..47a6e737a5 100644
--- a/content/en/references/coverage/coverage_elbv2/index.md
+++ b/content/en/references/coverage/coverage_elbv2/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="elbv2" >}}
## Testing Details
+
{{< localstack_coverage_details service="elbv2" >}}
diff --git a/content/en/references/coverage/coverage_emr-serverless/index.md b/content/en/references/coverage/coverage_emr-serverless/index.md
index 6ddb11f4f1..3d2bc4a4cf 100644
--- a/content/en/references/coverage/coverage_emr-serverless/index.md
+++ b/content/en/references/coverage/coverage_emr-serverless/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="emr-serverless" >}}
## Testing Details
+
{{< localstack_coverage_details service="emr-serverless" >}}
diff --git a/content/en/references/coverage/coverage_emr/index.md b/content/en/references/coverage/coverage_emr/index.md
index d138d9a6dd..b0dca11d2e 100644
--- a/content/en/references/coverage/coverage_emr/index.md
+++ b/content/en/references/coverage/coverage_emr/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="emr" >}}
## Testing Details
+
{{< localstack_coverage_details service="emr" >}}
diff --git a/content/en/references/coverage/coverage_es/index.md b/content/en/references/coverage/coverage_es/index.md
index e6340721ef..0ae98c114c 100644
--- a/content/en/references/coverage/coverage_es/index.md
+++ b/content/en/references/coverage/coverage_es/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="es" >}}
## Testing Details
+
{{< localstack_coverage_details service="es" >}}
diff --git a/content/en/references/coverage/coverage_events/index.md b/content/en/references/coverage/coverage_events/index.md
index d94a12697e..9c20fd47d0 100644
--- a/content/en/references/coverage/coverage_events/index.md
+++ b/content/en/references/coverage/coverage_events/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="events" >}}
## Testing Details
+
{{< localstack_coverage_details service="events" >}}
diff --git a/content/en/references/coverage/coverage_firehose/index.md b/content/en/references/coverage/coverage_firehose/index.md
index 0e2b98a595..1932efce2b 100644
--- a/content/en/references/coverage/coverage_firehose/index.md
+++ b/content/en/references/coverage/coverage_firehose/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="firehose" >}}
## Testing Details
+
{{< localstack_coverage_details service="firehose" >}}
diff --git a/content/en/references/coverage/coverage_fis/index.md b/content/en/references/coverage/coverage_fis/index.md
index dcb5cbc753..7d5d019bf9 100644
--- a/content/en/references/coverage/coverage_fis/index.md
+++ b/content/en/references/coverage/coverage_fis/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="fis" >}}
## Testing Details
+
{{< localstack_coverage_details service="fis" >}}
diff --git a/content/en/references/coverage/coverage_glacier/index.md b/content/en/references/coverage/coverage_glacier/index.md
index 413d95c959..81b32ccf38 100644
--- a/content/en/references/coverage/coverage_glacier/index.md
+++ b/content/en/references/coverage/coverage_glacier/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="glacier" >}}
## Testing Details
+
{{< localstack_coverage_details service="glacier" >}}
diff --git a/content/en/references/coverage/coverage_glue/index.md b/content/en/references/coverage/coverage_glue/index.md
index af4663bffc..e964e1e86a 100644
--- a/content/en/references/coverage/coverage_glue/index.md
+++ b/content/en/references/coverage/coverage_glue/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="glue" >}}
## Testing Details
+
{{< localstack_coverage_details service="glue" >}}
diff --git a/content/en/references/coverage/coverage_iam/index.md b/content/en/references/coverage/coverage_iam/index.md
index d6d655bc4a..28409996a0 100644
--- a/content/en/references/coverage/coverage_iam/index.md
+++ b/content/en/references/coverage/coverage_iam/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="iam" >}}
## Testing Details
+
{{< localstack_coverage_details service="iam" >}}
diff --git a/content/en/references/coverage/coverage_identitystore/index.md b/content/en/references/coverage/coverage_identitystore/index.md
index 35555771d6..34eda20d33 100644
--- a/content/en/references/coverage/coverage_identitystore/index.md
+++ b/content/en/references/coverage/coverage_identitystore/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="identitystore" >}}
## Testing Details
+
{{< localstack_coverage_details service="identitystore" >}}
diff --git a/content/en/references/coverage/coverage_iot-data/index.md b/content/en/references/coverage/coverage_iot-data/index.md
index 60328c560d..82f9db0acf 100644
--- a/content/en/references/coverage/coverage_iot-data/index.md
+++ b/content/en/references/coverage/coverage_iot-data/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="iot-data" >}}
## Testing Details
+
{{< localstack_coverage_details service="iot-data" >}}
diff --git a/content/en/references/coverage/coverage_iot/index.md b/content/en/references/coverage/coverage_iot/index.md
index 3839604003..301fe1be4a 100644
--- a/content/en/references/coverage/coverage_iot/index.md
+++ b/content/en/references/coverage/coverage_iot/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="iot" >}}
## Testing Details
+
{{< localstack_coverage_details service="iot" >}}
diff --git a/content/en/references/coverage/coverage_iotanalytics/index.md b/content/en/references/coverage/coverage_iotanalytics/index.md
index 2f00f0a8d5..8ccd2690a3 100644
--- a/content/en/references/coverage/coverage_iotanalytics/index.md
+++ b/content/en/references/coverage/coverage_iotanalytics/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="iotanalytics" >}}
## Testing Details
+
{{< localstack_coverage_details service="iotanalytics" >}}
diff --git a/content/en/references/coverage/coverage_iotwireless/index.md b/content/en/references/coverage/coverage_iotwireless/index.md
index 19ffd8a6c8..8774fbf8e3 100644
--- a/content/en/references/coverage/coverage_iotwireless/index.md
+++ b/content/en/references/coverage/coverage_iotwireless/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="iotwireless" >}}
## Testing Details
+
{{< localstack_coverage_details service="iotwireless" >}}
diff --git a/content/en/references/coverage/coverage_kafka/index.md b/content/en/references/coverage/coverage_kafka/index.md
index 393b97e69a..bed3e6e40b 100644
--- a/content/en/references/coverage/coverage_kafka/index.md
+++ b/content/en/references/coverage/coverage_kafka/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="kafka" >}}
## Testing Details
+
{{< localstack_coverage_details service="kafka" >}}
diff --git a/content/en/references/coverage/coverage_kinesis/index.md b/content/en/references/coverage/coverage_kinesis/index.md
index a86a8c65a9..1f3ab9b445 100644
--- a/content/en/references/coverage/coverage_kinesis/index.md
+++ b/content/en/references/coverage/coverage_kinesis/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="kinesis" >}}
## Testing Details
+
{{< localstack_coverage_details service="kinesis" >}}
diff --git a/content/en/references/coverage/coverage_kinesisanalytics/index.md b/content/en/references/coverage/coverage_kinesisanalytics/index.md
index e13f888f91..70f1fd618a 100644
--- a/content/en/references/coverage/coverage_kinesisanalytics/index.md
+++ b/content/en/references/coverage/coverage_kinesisanalytics/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="kinesisanalytics" >}}
## Testing Details
+
{{< localstack_coverage_details service="kinesisanalytics" >}}
diff --git a/content/en/references/coverage/coverage_kinesisanalyticsv2/index.md b/content/en/references/coverage/coverage_kinesisanalyticsv2/index.md
index c0c07e60f0..bb6c5024dc 100644
--- a/content/en/references/coverage/coverage_kinesisanalyticsv2/index.md
+++ b/content/en/references/coverage/coverage_kinesisanalyticsv2/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="kinesisanalyticsv2" >}}
## Testing Details
+
{{< localstack_coverage_details service="kinesisanalyticsv2" >}}
diff --git a/content/en/references/coverage/coverage_kms/index.md b/content/en/references/coverage/coverage_kms/index.md
index 33842d678f..0f7f880c45 100644
--- a/content/en/references/coverage/coverage_kms/index.md
+++ b/content/en/references/coverage/coverage_kms/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="kms" >}}
## Testing Details
+
{{< localstack_coverage_details service="kms" >}}
diff --git a/content/en/references/coverage/coverage_lakeformation/index.md b/content/en/references/coverage/coverage_lakeformation/index.md
index 3311c8cdae..5e0afd4302 100644
--- a/content/en/references/coverage/coverage_lakeformation/index.md
+++ b/content/en/references/coverage/coverage_lakeformation/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="lakeformation" >}}
## Testing Details
+
{{< localstack_coverage_details service="lakeformation" >}}
diff --git a/content/en/references/coverage/coverage_lambda/index.md b/content/en/references/coverage/coverage_lambda/index.md
index d41793a91d..9aaf6dc469 100644
--- a/content/en/references/coverage/coverage_lambda/index.md
+++ b/content/en/references/coverage/coverage_lambda/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="lambda" >}}
## Testing Details
+
{{< localstack_coverage_details service="lambda" >}}
diff --git a/content/en/references/coverage/coverage_logs/index.md b/content/en/references/coverage/coverage_logs/index.md
index cc3a3041d0..db807ce18d 100644
--- a/content/en/references/coverage/coverage_logs/index.md
+++ b/content/en/references/coverage/coverage_logs/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="logs" >}}
## Testing Details
+
{{< localstack_coverage_details service="logs" >}}
diff --git a/content/en/references/coverage/coverage_managedblockchain/index.md b/content/en/references/coverage/coverage_managedblockchain/index.md
index e0ca9ae908..8580912ebd 100644
--- a/content/en/references/coverage/coverage_managedblockchain/index.md
+++ b/content/en/references/coverage/coverage_managedblockchain/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="managedblockchain" >}}
## Testing Details
+
{{< localstack_coverage_details service="managedblockchain" >}}
diff --git a/content/en/references/coverage/coverage_mediastore-data/index.md b/content/en/references/coverage/coverage_mediastore-data/index.md
index dd64f34f51..bc091b3df1 100644
--- a/content/en/references/coverage/coverage_mediastore-data/index.md
+++ b/content/en/references/coverage/coverage_mediastore-data/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="mediastore-data" >}}
## Testing Details
+
{{< localstack_coverage_details service="mediastore-data" >}}
diff --git a/content/en/references/coverage/coverage_mediastore/index.md b/content/en/references/coverage/coverage_mediastore/index.md
index 9b1d83f2cb..e68f12b4f8 100644
--- a/content/en/references/coverage/coverage_mediastore/index.md
+++ b/content/en/references/coverage/coverage_mediastore/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="mediastore" >}}
## Testing Details
+
{{< localstack_coverage_details service="mediastore" >}}
diff --git a/content/en/references/coverage/coverage_memorydb/index.md b/content/en/references/coverage/coverage_memorydb/index.md
index 5bd9a49476..b0776a3c5c 100644
--- a/content/en/references/coverage/coverage_memorydb/index.md
+++ b/content/en/references/coverage/coverage_memorydb/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="memorydb" >}}
## Testing Details
+
{{< localstack_coverage_details service="memorydb" >}}
diff --git a/content/en/references/coverage/coverage_mq/index.md b/content/en/references/coverage/coverage_mq/index.md
index 9772b21516..62ff8ecc47 100644
--- a/content/en/references/coverage/coverage_mq/index.md
+++ b/content/en/references/coverage/coverage_mq/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="mq" >}}
## Testing Details
+
{{< localstack_coverage_details service="mq" >}}
diff --git a/content/en/references/coverage/coverage_mwaa/index.md b/content/en/references/coverage/coverage_mwaa/index.md
index b5b4c4178b..4d2015f842 100644
--- a/content/en/references/coverage/coverage_mwaa/index.md
+++ b/content/en/references/coverage/coverage_mwaa/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="mwaa" >}}
## Testing Details
+
{{< localstack_coverage_details service="mwaa" >}}
diff --git a/content/en/references/coverage/coverage_neptune/index.md b/content/en/references/coverage/coverage_neptune/index.md
index 6873773635..e8d6fcea1f 100644
--- a/content/en/references/coverage/coverage_neptune/index.md
+++ b/content/en/references/coverage/coverage_neptune/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="neptune" >}}
## Testing Details
+
{{< localstack_coverage_details service="neptune" >}}
diff --git a/content/en/references/coverage/coverage_opensearch/index.md b/content/en/references/coverage/coverage_opensearch/index.md
index a53b570c38..a33238b9ec 100644
--- a/content/en/references/coverage/coverage_opensearch/index.md
+++ b/content/en/references/coverage/coverage_opensearch/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="opensearch" >}}
## Testing Details
+
{{< localstack_coverage_details service="opensearch" >}}
diff --git a/content/en/references/coverage/coverage_organizations/index.md b/content/en/references/coverage/coverage_organizations/index.md
index 9f43c0689b..5560de5706 100644
--- a/content/en/references/coverage/coverage_organizations/index.md
+++ b/content/en/references/coverage/coverage_organizations/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="organizations" >}}
## Testing Details
+
{{< localstack_coverage_details service="organizations" >}}
diff --git a/content/en/references/coverage/coverage_pinpoint/index.md b/content/en/references/coverage/coverage_pinpoint/index.md
index 8e97f60974..6dd8aa569b 100644
--- a/content/en/references/coverage/coverage_pinpoint/index.md
+++ b/content/en/references/coverage/coverage_pinpoint/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="pinpoint" >}}
## Testing Details
+
{{< localstack_coverage_details service="pinpoint" >}}
diff --git a/content/en/references/coverage/coverage_pipes/index.md b/content/en/references/coverage/coverage_pipes/index.md
index 0c283cc819..b3f56065f4 100644
--- a/content/en/references/coverage/coverage_pipes/index.md
+++ b/content/en/references/coverage/coverage_pipes/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="pipes" >}}
## Testing Details
+
{{< localstack_coverage_details service="pipes" >}}
diff --git a/content/en/references/coverage/coverage_qldb-session/index.md b/content/en/references/coverage/coverage_qldb-session/index.md
index b935879e3e..ec09cbd8b6 100644
--- a/content/en/references/coverage/coverage_qldb-session/index.md
+++ b/content/en/references/coverage/coverage_qldb-session/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="qldb-session" >}}
## Testing Details
+
{{< localstack_coverage_details service="qldb-session" >}}
diff --git a/content/en/references/coverage/coverage_qldb/index.md b/content/en/references/coverage/coverage_qldb/index.md
index 87a79395ae..6b1f76ebc0 100644
--- a/content/en/references/coverage/coverage_qldb/index.md
+++ b/content/en/references/coverage/coverage_qldb/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="qldb" >}}
## Testing Details
+
{{< localstack_coverage_details service="qldb" >}}
diff --git a/content/en/references/coverage/coverage_ram/index.md b/content/en/references/coverage/coverage_ram/index.md
index 8a56988fcf..84a7fe99fc 100644
--- a/content/en/references/coverage/coverage_ram/index.md
+++ b/content/en/references/coverage/coverage_ram/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="ram" >}}
## Testing Details
+
{{< localstack_coverage_details service="ram" >}}
diff --git a/content/en/references/coverage/coverage_rds-data/index.md b/content/en/references/coverage/coverage_rds-data/index.md
index 270559d429..daf36a7460 100644
--- a/content/en/references/coverage/coverage_rds-data/index.md
+++ b/content/en/references/coverage/coverage_rds-data/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="rds-data" >}}
## Testing Details
+
{{< localstack_coverage_details service="rds-data" >}}
diff --git a/content/en/references/coverage/coverage_rds/index.md b/content/en/references/coverage/coverage_rds/index.md
index 37739b3cd7..bdcb729965 100644
--- a/content/en/references/coverage/coverage_rds/index.md
+++ b/content/en/references/coverage/coverage_rds/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="rds" >}}
## Testing Details
+
{{< localstack_coverage_details service="rds" >}}
diff --git a/content/en/references/coverage/coverage_redshift-data/index.md b/content/en/references/coverage/coverage_redshift-data/index.md
index 4ed5733c43..bc73e337d0 100644
--- a/content/en/references/coverage/coverage_redshift-data/index.md
+++ b/content/en/references/coverage/coverage_redshift-data/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="redshift-data" >}}
## Testing Details
+
{{< localstack_coverage_details service="redshift-data" >}}
diff --git a/content/en/references/coverage/coverage_redshift/index.md b/content/en/references/coverage/coverage_redshift/index.md
index 349816be74..59522b7734 100644
--- a/content/en/references/coverage/coverage_redshift/index.md
+++ b/content/en/references/coverage/coverage_redshift/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="redshift" >}}
## Testing Details
+
{{< localstack_coverage_details service="redshift" >}}
diff --git a/content/en/references/coverage/coverage_resource-groups/index.md b/content/en/references/coverage/coverage_resource-groups/index.md
index 0b89c6ca4a..a6cabefdd3 100644
--- a/content/en/references/coverage/coverage_resource-groups/index.md
+++ b/content/en/references/coverage/coverage_resource-groups/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="resource-groups" >}}
## Testing Details
+
{{< localstack_coverage_details service="resource-groups" >}}
diff --git a/content/en/references/coverage/coverage_resourcegroupstaggingapi/index.md b/content/en/references/coverage/coverage_resourcegroupstaggingapi/index.md
index 1a6163ec04..06c9f070e8 100644
--- a/content/en/references/coverage/coverage_resourcegroupstaggingapi/index.md
+++ b/content/en/references/coverage/coverage_resourcegroupstaggingapi/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="resourcegroupstaggingapi" >}}
## Testing Details
+
{{< localstack_coverage_details service="resourcegroupstaggingapi" >}}
diff --git a/content/en/references/coverage/coverage_route53/index.md b/content/en/references/coverage/coverage_route53/index.md
index 2a0663e61d..1d4010cd60 100644
--- a/content/en/references/coverage/coverage_route53/index.md
+++ b/content/en/references/coverage/coverage_route53/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="route53" >}}
## Testing Details
+
{{< localstack_coverage_details service="route53" >}}
diff --git a/content/en/references/coverage/coverage_route53resolver/index.md b/content/en/references/coverage/coverage_route53resolver/index.md
index 709fcb4727..45b34447db 100644
--- a/content/en/references/coverage/coverage_route53resolver/index.md
+++ b/content/en/references/coverage/coverage_route53resolver/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="route53resolver" >}}
## Testing Details
+
{{< localstack_coverage_details service="route53resolver" >}}
diff --git a/content/en/references/coverage/coverage_s3/index.md b/content/en/references/coverage/coverage_s3/index.md
index 16e7457dc1..1d6c974c96 100644
--- a/content/en/references/coverage/coverage_s3/index.md
+++ b/content/en/references/coverage/coverage_s3/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="s3" >}}
## Testing Details
+
{{< localstack_coverage_details service="s3" >}}
diff --git a/content/en/references/coverage/coverage_s3control/index.md b/content/en/references/coverage/coverage_s3control/index.md
index 3e37f409f8..32ed32fe3b 100644
--- a/content/en/references/coverage/coverage_s3control/index.md
+++ b/content/en/references/coverage/coverage_s3control/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="s3control" >}}
## Testing Details
+
{{< localstack_coverage_details service="s3control" >}}
diff --git a/content/en/references/coverage/coverage_sagemaker-runtime/index.md b/content/en/references/coverage/coverage_sagemaker-runtime/index.md
index 476f902a17..6ca547c1d7 100644
--- a/content/en/references/coverage/coverage_sagemaker-runtime/index.md
+++ b/content/en/references/coverage/coverage_sagemaker-runtime/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="sagemaker-runtime" >}}
## Testing Details
+
{{< localstack_coverage_details service="sagemaker-runtime" >}}
diff --git a/content/en/references/coverage/coverage_sagemaker/index.md b/content/en/references/coverage/coverage_sagemaker/index.md
index be4674140f..05e00bce92 100644
--- a/content/en/references/coverage/coverage_sagemaker/index.md
+++ b/content/en/references/coverage/coverage_sagemaker/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="sagemaker" >}}
## Testing Details
+
{{< localstack_coverage_details service="sagemaker" >}}
diff --git a/content/en/references/coverage/coverage_scheduler/index.md b/content/en/references/coverage/coverage_scheduler/index.md
index 8c5135bfa9..f9447c96b2 100644
--- a/content/en/references/coverage/coverage_scheduler/index.md
+++ b/content/en/references/coverage/coverage_scheduler/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="scheduler" >}}
## Testing Details
+
{{< localstack_coverage_details service="scheduler" >}}
diff --git a/content/en/references/coverage/coverage_secretsmanager/index.md b/content/en/references/coverage/coverage_secretsmanager/index.md
index 3c7f9e2545..a7b080e0fc 100644
--- a/content/en/references/coverage/coverage_secretsmanager/index.md
+++ b/content/en/references/coverage/coverage_secretsmanager/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="secretsmanager" >}}
## Testing Details
+
{{< localstack_coverage_details service="secretsmanager" >}}
diff --git a/content/en/references/coverage/coverage_serverlessrepo/index.md b/content/en/references/coverage/coverage_serverlessrepo/index.md
index ca5d577332..695a29470b 100644
--- a/content/en/references/coverage/coverage_serverlessrepo/index.md
+++ b/content/en/references/coverage/coverage_serverlessrepo/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="serverlessrepo" >}}
## Testing Details
+
{{< localstack_coverage_details service="serverlessrepo" >}}
diff --git a/content/en/references/coverage/coverage_servicediscovery/index.md b/content/en/references/coverage/coverage_servicediscovery/index.md
index 414c4c326f..abda81fd4f 100644
--- a/content/en/references/coverage/coverage_servicediscovery/index.md
+++ b/content/en/references/coverage/coverage_servicediscovery/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="servicediscovery" >}}
## Testing Details
+
{{< localstack_coverage_details service="servicediscovery" >}}
diff --git a/content/en/references/coverage/coverage_ses/index.md b/content/en/references/coverage/coverage_ses/index.md
index 24d8701895..ce2d28510b 100644
--- a/content/en/references/coverage/coverage_ses/index.md
+++ b/content/en/references/coverage/coverage_ses/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="ses" >}}
## Testing Details
+
{{< localstack_coverage_details service="ses" >}}
diff --git a/content/en/references/coverage/coverage_sesv2/index.md b/content/en/references/coverage/coverage_sesv2/index.md
index af779b9e3b..717fd83e46 100644
--- a/content/en/references/coverage/coverage_sesv2/index.md
+++ b/content/en/references/coverage/coverage_sesv2/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="sesv2" >}}
## Testing Details
+
{{< localstack_coverage_details service="sesv2" >}}
diff --git a/content/en/references/coverage/coverage_sns/index.md b/content/en/references/coverage/coverage_sns/index.md
index 501c805986..59ff0ffa31 100644
--- a/content/en/references/coverage/coverage_sns/index.md
+++ b/content/en/references/coverage/coverage_sns/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="sns" >}}
## Testing Details
+
{{< localstack_coverage_details service="sns" >}}
diff --git a/content/en/references/coverage/coverage_sqs/index.md b/content/en/references/coverage/coverage_sqs/index.md
index 87618fd860..e7c12a3eaf 100644
--- a/content/en/references/coverage/coverage_sqs/index.md
+++ b/content/en/references/coverage/coverage_sqs/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="sqs" >}}
## Testing Details
+
{{< localstack_coverage_details service="sqs" >}}
diff --git a/content/en/references/coverage/coverage_ssm/index.md b/content/en/references/coverage/coverage_ssm/index.md
index cc71600c51..3213244dab 100644
--- a/content/en/references/coverage/coverage_ssm/index.md
+++ b/content/en/references/coverage/coverage_ssm/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="ssm" >}}
## Testing Details
+
{{< localstack_coverage_details service="ssm" >}}
diff --git a/content/en/references/coverage/coverage_sso-admin/index.md b/content/en/references/coverage/coverage_sso-admin/index.md
index 5eac825dec..cf1311da03 100644
--- a/content/en/references/coverage/coverage_sso-admin/index.md
+++ b/content/en/references/coverage/coverage_sso-admin/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="sso-admin" >}}
## Testing Details
+
{{< localstack_coverage_details service="sso-admin" >}}
diff --git a/content/en/references/coverage/coverage_stepfunctions/index.md b/content/en/references/coverage/coverage_stepfunctions/index.md
index a8ec76b871..1a61305e0c 100644
--- a/content/en/references/coverage/coverage_stepfunctions/index.md
+++ b/content/en/references/coverage/coverage_stepfunctions/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="stepfunctions" >}}
## Testing Details
+
{{< localstack_coverage_details service="stepfunctions" >}}
diff --git a/content/en/references/coverage/coverage_sts/index.md b/content/en/references/coverage/coverage_sts/index.md
index ae0462616e..8aaadfe15d 100644
--- a/content/en/references/coverage/coverage_sts/index.md
+++ b/content/en/references/coverage/coverage_sts/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="sts" >}}
## Testing Details
+
{{< localstack_coverage_details service="sts" >}}
diff --git a/content/en/references/coverage/coverage_support/index.md b/content/en/references/coverage/coverage_support/index.md
index 4e18f10f14..1a42cac65f 100644
--- a/content/en/references/coverage/coverage_support/index.md
+++ b/content/en/references/coverage/coverage_support/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="support" >}}
## Testing Details
+
{{< localstack_coverage_details service="support" >}}
diff --git a/content/en/references/coverage/coverage_swf/index.md b/content/en/references/coverage/coverage_swf/index.md
index 94b256c3f9..31e84cf996 100644
--- a/content/en/references/coverage/coverage_swf/index.md
+++ b/content/en/references/coverage/coverage_swf/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="swf" >}}
## Testing Details
+
{{< localstack_coverage_details service="swf" >}}
diff --git a/content/en/references/coverage/coverage_textract/index.md b/content/en/references/coverage/coverage_textract/index.md
index 7af74455b5..99296918e6 100644
--- a/content/en/references/coverage/coverage_textract/index.md
+++ b/content/en/references/coverage/coverage_textract/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="textract" >}}
## Testing Details
+
{{< localstack_coverage_details service="textract" >}}
diff --git a/content/en/references/coverage/coverage_timestream-query/index.md b/content/en/references/coverage/coverage_timestream-query/index.md
index d6a020911b..e0b451397c 100644
--- a/content/en/references/coverage/coverage_timestream-query/index.md
+++ b/content/en/references/coverage/coverage_timestream-query/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="timestream-query" >}}
## Testing Details
+
{{< localstack_coverage_details service="timestream-query" >}}
diff --git a/content/en/references/coverage/coverage_timestream-write/index.md b/content/en/references/coverage/coverage_timestream-write/index.md
index d0a8c90442..19e864ab70 100644
--- a/content/en/references/coverage/coverage_timestream-write/index.md
+++ b/content/en/references/coverage/coverage_timestream-write/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="timestream-write" >}}
## Testing Details
+
{{< localstack_coverage_details service="timestream-write" >}}
diff --git a/content/en/references/coverage/coverage_transcribe/index.md b/content/en/references/coverage/coverage_transcribe/index.md
index 13bb4afcf4..1d4c24b408 100644
--- a/content/en/references/coverage/coverage_transcribe/index.md
+++ b/content/en/references/coverage/coverage_transcribe/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="transcribe" >}}
## Testing Details
+
{{< localstack_coverage_details service="transcribe" >}}
diff --git a/content/en/references/coverage/coverage_transfer/index.md b/content/en/references/coverage/coverage_transfer/index.md
index 88ec35a7c7..be0cb21de9 100644
--- a/content/en/references/coverage/coverage_transfer/index.md
+++ b/content/en/references/coverage/coverage_transfer/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="transfer" >}}
## Testing Details
+
{{< localstack_coverage_details service="transfer" >}}
diff --git a/content/en/references/coverage/coverage_wafv2/index.md b/content/en/references/coverage/coverage_wafv2/index.md
index 3758bf7075..b356f81660 100644
--- a/content/en/references/coverage/coverage_wafv2/index.md
+++ b/content/en/references/coverage/coverage_wafv2/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="wafv2" >}}
## Testing Details
+
{{< localstack_coverage_details service="wafv2" >}}
diff --git a/content/en/references/coverage/coverage_xray/index.md b/content/en/references/coverage/coverage_xray/index.md
index 9335c19e86..0ea58574fa 100644
--- a/content/en/references/coverage/coverage_xray/index.md
+++ b/content/en/references/coverage/coverage_xray/index.md
@@ -7,7 +7,9 @@ hide_readingtime: true
---
## Coverage Overview
+
{{< localstack_coverage_table service="xray" >}}
## Testing Details
+
{{< localstack_coverage_details service="xray" >}}
diff --git a/content/en/references/cross-account-access.md b/content/en/references/cross-account-access.md
index e033aac7e9..d7457d8b5d 100644
--- a/content/en/references/cross-account-access.md
+++ b/content/en/references/cross-account-access.md
@@ -18,40 +18,35 @@ Please report any issues on our [GitHub issue tracker](https://github.com/locals
Cross-account/cross-region access happens when a client attempts to access a resource in another account or region than what it is configured with:
{{< command >}}
-#
# Create a queue in one account and region
-#
-
$ AWS_ACCESS_KEY_ID=111111111111 awslocal sqs create-queue \
--queue-name my-queue \
--region ap-south-1
+
{
"QueueUrl": "http://sqs.ap-south-1.localhost.localstack.cloud:443/111111111111/my-queue"
}
+
-#
# Set some attributes
-#
-
$ AWS_ACCESS_KEY_ID=111111111111 awslocal sqs set-queue-attributes \
--attributes VisibilityTimeout=60 \
--queue-url http://sqs.ap-south-1.localhost.localstack.cloud:443/111111111111/my-queue \
- --region ap-south-1
-
-#
-# Retrieve the queue attribute from another account and region.
-# The required information for LocalStack to locate the queue is available in the queue URL.
-#
+ --region ap-south-1
+# Retrieve the queue attribute from another account and region
+# The required information for LocalStack to locate the queue is available in the queue URL
$ AWS_ACCESS_KEY_ID=222222222222 awslocal sqs get-queue-attributes \
--attribute-names VisibilityTimeout \
--region eu-central-1 \
--queue-url http://sqs.ap-south-1.localhost.localstack.cloud:443/111111111111/my-queue
+
{
"Attributes": {
"VisibilityTimeout": "60"
}
}
+
{{< /command >}}
## Cross-Account
diff --git a/content/en/references/custom-tls-certificates.md b/content/en/references/custom-tls-certificates.md
index 9e007cd6c4..8be4d091f6 100644
--- a/content/en/references/custom-tls-certificates.md
+++ b/content/en/references/custom-tls-certificates.md
@@ -24,7 +24,7 @@ They all can be summarised as:
1. get your proxy's custom certificate into the system certificate store, and
2. configure [`requests`](https://pypi.python.org/pypi/requests) to use the custom certificate,
-3. configure [`curl`](https://curl.se/) to use the custom certificate, and
+3. configure [`curl`](https://curl.se/) to use the custom certificate, and
4. configure [`node.js`](https://nodejs.org/) to use the custom certificate.
## Creating a custom docker image
@@ -53,12 +53,13 @@ $ docker build -t .
{{< callout "tip" >}}
Certificate files must end in `.crt` to be included in the system certificate store.
-If your certificate file ends with `.pem`, you can rename it to end in `.crt`.
+If your certificate file ends with `.pem`, you can rename it to end in `.crt`.
{{< /callout >}}
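+
+For example, a rename and rebuild might look like the following (the certificate filename and image tag are illustrative):
+
+{{< command >}}
+$ cp my-proxy-ca.pem my-proxy-ca.crt
+$ docker build -t localstack-custom-ca .
+{{< / command >}}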
### Starting LocalStack with the custom image
-LocalStack now needs to be configured to use this custom image. The workflow is different depending on how you start localstack.
+LocalStack now needs to be configured to use this custom image.
+The workflow is different depending on how you start LocalStack.
{{< tabpane lang="bash">}}
{{< tab header="CLI" lang="bash" >}}
@@ -77,7 +78,8 @@ services:
## Custom TLS certificates with init hooks
-It is recommended to create a `boot` init hook. Create a directory on your local system that includes
+It is recommended to create a `boot` init hook.
+Create a directory on your local system that includes
* the certificate you wish to copy, and
* the following shell script:
@@ -103,11 +105,14 @@ and follow the instructions in the [init hooks documentation]({{< ref "init-hook
### Linux
-On linux the custom certificate should be added to your `ca-certificates` bundle. For example on Debian based systems (as root):
+On Linux, the custom certificate should be added to your `ca-certificates` bundle.
+For example, on Debian-based systems (as root):
{{< command >}}
# cp /usr/local/share/ca-certificates
+
# update-ca-certificates
+
{{< / command >}}
Then run LocalStack with the environment variables `REQUESTS_CA_BUNDLE`, `CURL_CA_BUNDLE`, and `NODE_EXTRA_CA_CERTS`:
@@ -121,7 +126,8 @@ $ NODE_EXTRA_CA_CERTS=/etc/ssl/certs/ca-certificates.crt \
### macOS
-On macOS the custom certificate should be added to your keychain. See [this Apple support article](https://support.apple.com/en-gb/guide/keychain-access/kyca2431/mac) for more information.
+On macOS the custom certificate should be added to your keychain.
+See [this Apple support article](https://support.apple.com/en-gb/guide/keychain-access/kyca2431/mac) for more information.
Then run LocalStack with the environment variables `REQUESTS_CA_BUNDLE`, `CURL_CA_BUNDLE`, and `NODE_EXTRA_CA_CERTS`:
@@ -134,4 +140,5 @@ $ NODE_EXTRA_CA_CERTS=/etc/ssl/certs/ca-certificates.crt \
### Windows
-Currently host mode does not work with Windows. If you are using WSL2 you should follow the [Linux]({{< ref "#linux" >}}) steps above.
+Currently, host mode does not work on Windows.
+If you are using WSL2, you should follow the [Linux]({{< ref "#linux" >}}) steps above.
diff --git a/content/en/references/docker-images.md b/content/en/references/docker-images.md
index cb9d6ce36f..c904e0f95d 100644
--- a/content/en/references/docker-images.md
+++ b/content/en/references/docker-images.md
@@ -7,40 +7,59 @@ description: >
Overview of LocalStack Docker images and their purpose
---
-LocalStack functions as a local “mini-cloud” operating system that runs inside a Docker container. LocalStack has multiple components, which include process management, file system abstraction, event processing, schedulers, and more. Running inside a Docker container, LocalStack exposes external network ports for integrations, SDKs, or CLI interfaces to connect to LocalStack APIs. The LocalStack & LocalStack Pro Docker images have been downloaded over 130+ million times and provide a multi-arch build compatible with AMD/x86 and ARM-based CPU architectures. This section will cover the different Docker images available for LocalStack and how to use them.
+LocalStack functions as a local “mini-cloud” operating system that runs inside a Docker container.
+LocalStack has multiple components, which include process management, file system abstraction, event processing, schedulers, and more.
+Running inside a Docker container, LocalStack exposes external network ports for integrations, SDKs, or CLI interfaces to connect to LocalStack APIs.
+The LocalStack & LocalStack Pro Docker images have been downloaded more than 130 million times and provide a multi-arch build compatible with AMD/x86 and ARM-based CPU architectures.
+This section will cover the different Docker images available for LocalStack and how to use them.
## LocalStack Community image
-The LocalStack Community image (`localstack/localstack`) contains the community and open-source version of our [core cloud emulator](https://github.com/localstack/localstack). To use the LocalStack Community image, you can pull the image from Docker Hub:
+The LocalStack Community image (`localstack/localstack`) contains the community and open-source version of our [core cloud emulator](https://github.com/localstack/localstack).
+To use the LocalStack Community image, you can pull the image from Docker Hub:
{{< command >}}
$ docker pull localstack/localstack:latest
{{< / command >}}
-To use the LocalStack Community image, you don't need to sign-up for an account on [LocalStack Web Application](https://app.localstack.cloud). The Community image is free to use and does not require any API key to run. The Community image can be used to run [local AWS services](https://docs.localstack.cloud/user-guide/aws/) with [integrations](https://docs.localstack.cloud/user-guide/integrations/) on your local machine or in your [continuous integration pipelines](https://docs.localstack.cloud/user-guide/ci/).
+To use the LocalStack Community image, you don't need to sign up for an account on the [LocalStack Web Application](https://app.localstack.cloud).
+The Community image is free to use and does not require any API key to run.
+The Community image can be used to run [local AWS services](https://docs.localstack.cloud/user-guide/aws/) with [integrations](https://docs.localstack.cloud/user-guide/integrations/) on your local machine or in your [continuous integration pipelines](https://docs.localstack.cloud/user-guide/ci/).
-The Community image also covers a limited set of [LocalStack Tools](https://docs.localstack.cloud/user-guide/tools/) to make your life as a cloud developer easier. You can use [LocalStack Desktop](https://docs.localstack.cloud/user-guide/tools/localstack-desktop/) or [LocalStack Docker Extension](https://docs.localstack.cloud/user-guide/tools/localstack-docker-extension/) to use LocalStack with a graphical user interface.
+The Community image also covers a limited set of [LocalStack Tools](https://docs.localstack.cloud/user-guide/tools/) to make your life as a cloud developer easier.
+You can use [LocalStack Desktop](https://docs.localstack.cloud/user-guide/tools/localstack-desktop/) or [LocalStack Docker Extension](https://docs.localstack.cloud/user-guide/tools/localstack-docker-extension/) to use LocalStack with a graphical user interface.
-You can use the Community image to start your LocalStack container using various [installation methods](https://docs.localstack.cloud/getting-started/installation/). While configuring to run LocalStack with Docker or Docker Compose, run the `localstack/localstack` image with the appropriate tag you have pulled (if not `latest`).
+You can use the Community image to start your LocalStack container using various [installation methods](https://docs.localstack.cloud/getting-started/installation/).
+When configuring LocalStack to run with Docker or Docker Compose, use the `localstack/localstack` image with the appropriate tag you have pulled (if not `latest`).
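+
+As a minimal sketch, running the Community image directly with Docker might look like the following (the port mappings shown are illustrative defaults):
+
+{{< command >}}
+$ docker run --rm -it -p 127.0.0.1:4566:4566 -p 127.0.0.1:4510-4559:4510-4559 localstack/localstack:latest
+{{< / command >}}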
## LocalStack Pro image
-LocalStack Pro contains various advanced extensions to the LocalStack base platform. With LocalStack Pro image, you can access all the emulated AWS cloud services running entirely on your local machine. To use the LocalStack Pro image, you can pull the image from Docker Hub:
+LocalStack Pro contains various advanced extensions to the LocalStack base platform.
+With the LocalStack Pro image, you can access all the emulated AWS cloud services running entirely on your local machine.
+To use the LocalStack Pro image, you can pull the image from Docker Hub:
{{< command >}}
$ docker pull localstack/localstack-pro:latest
{{< / command >}}
-To use the LocalStack Pro image, you must configure an environment variable named `LOCALSTACK_AUTH_TOKEN` to contain your auth token. The LocalStack Pro image will display a warning if you do not set an auth token (or if the license is invalid/expired) and will not activate the Pro features. LocalStack Pro gives you access to the complete set of LocalStack features, including the [LocalStack Web Application](https://app.localstack.cloud) and [dedicated customer support](https://docs.localstack.cloud/getting-started/help-and-support/#pro-support).
+To use the LocalStack Pro image, you must configure an environment variable named `LOCALSTACK_AUTH_TOKEN` to contain your auth token.
+The LocalStack Pro image will display a warning if you do not set an auth token (or if the license is invalid/expired) and will not activate the Pro features.
+LocalStack Pro gives you access to the complete set of LocalStack features, including the [LocalStack Web Application](https://app.localstack.cloud) and [dedicated customer support](https://docs.localstack.cloud/getting-started/help-and-support/#pro-support).
-You can use the Pro image to start your LocalStack container using various [installation methods](https://docs.localstack.cloud/getting-started/installation/). While configuring to run LocalStack with Docker or Docker Compose, run the `localstack/localstack-pro` image with the appropriate tag you have pulled (if not `latest`).
+You can use the Pro image to start your LocalStack container using various [installation methods](https://docs.localstack.cloud/getting-started/installation/).
+When configuring LocalStack to run with Docker or Docker Compose, use the `localstack/localstack-pro` image with the appropriate tag you have pulled (if not `latest`).
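+
+As a minimal sketch, passing the auth token to the Pro image might look like the following (this assumes `LOCALSTACK_AUTH_TOKEN` is already exported in your shell):
+
+{{< command >}}
+$ docker run --rm -it -p 127.0.0.1:4566:4566 -e LOCALSTACK_AUTH_TOKEN=${LOCALSTACK_AUTH_TOKEN} localstack/localstack-pro:latest
+{{< / command >}}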
{{< callout >}}
-Earlier, we maintained `localstack/localstack-light` and `localstack/localstack-full` images. They have been deprecated and are removed with the LocalStack 2.0 release. The [BigData image](https://hub.docker.com/r/localstack/bigdata/tags), which started as a `bigdata_container` container, has also been deprecated in favor of a BigData Mono container which installs dependencies directly into the LocalStack (`localstack-main`) container.
+Earlier, we maintained `localstack/localstack-light` and `localstack/localstack-full` images.
+They have been deprecated and were removed in the LocalStack 2.0 release.
+The [BigData image](https://hub.docker.com/r/localstack/bigdata/tags), which started as a `bigdata_container` container, has also been deprecated in favor of a BigData Mono container which installs dependencies directly into the LocalStack (`localstack-main`) container.
{{< /callout >}}
## Image tags
-We use tags for versions with significant features, enhancements, or bug fixes - following [semantic versioning](https://semver.org). To ensure that we move quickly and steadily, we run nightly builds, where all our updates are available on the `latest` tag of LocalStack's Docker image. We intend to announce more significant features and enhancements during major & minor releases. We occasionally create patch releases for minor bug fixes and enhancements, to ensure that we can deliver changes quickly while not breaking your existing workflows (in case you prefer not to use `latest`).
+We use tags for versions with significant features, enhancements, or bug fixes, following [semantic versioning](https://semver.org).
+To ensure that we move quickly and steadily, we run nightly builds, where all our updates are available on the `latest` tag of LocalStack's Docker image.
+We intend to announce more significant features and enhancements during major & minor releases.
+We occasionally create patch releases for minor bug fixes and enhancements, to ensure that we can deliver changes quickly while not breaking your existing workflows (in case you prefer not to use `latest`).
To check out the various tags available for LocalStack, you can visit the [LocalStack Community](https://hub.docker.com/r/localstack/localstack/tags?page=1&ordering=last_updated) & [LocalStack Pro](https://hub.docker.com/r/localstack/localstack-pro/tags?page=1&ordering=last_updated) Docker Hub pages.
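+
+For example, to pin your setup to a specific release instead of `latest`, pull the tagged image (the version shown is illustrative):
+
+{{< command >}}
+$ docker pull localstack/localstack:3.4.0
+{{< / command >}}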
diff --git a/content/en/references/external-ports.md b/content/en/references/external-ports.md
index 25d8ad3f55..3e14ca2db4 100644
--- a/content/en/references/external-ports.md
+++ b/content/en/references/external-ports.md
@@ -13,7 +13,7 @@ This documentation discusses two approaches to access these external services wi
## Proxy Functionality for External Services
LocalStack offers a proxy functionality to access external services indirectly.
-In this approach, LocalStack assigns local domains to the external services based on the individual service's configuration.
+In this approach, LocalStack assigns local domains to the external services based on the individual service's configuration.
For instance, if OpenSearch is configured to use the [`OPENSEARCH_ENDPOINT_STRATEGY=domain`]({{< ref "opensearch#endpoints" >}}) setting, a cluster can be reached using the domain name `...localhost.localstack.cloud`.
Incoming messages to these domains are relayed to servers running on ports that do not require external accessibility.
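+
+As an illustrative sketch (the domain name and region below are hypothetical), such a cluster could then be reached through the LocalStack gateway:
+
+{{< command >}}
+$ curl http://my-domain.eu-central-1.opensearch.localhost.localstack.cloud:4566
+{{< / command >}}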
diff --git a/content/en/references/filesystem.md b/content/en/references/filesystem.md
index 2265d9524c..dc1f1fce6b 100644
--- a/content/en/references/filesystem.md
+++ b/content/en/references/filesystem.md
@@ -45,8 +45,8 @@ LocalStack uses following directory layout when running within a container.
- `/var/lib/localstack/tmp`: temporary data that is not expected to survive LocalStack runs (may be cleared when LocalStack starts or stops)
- `/var/lib/localstack/cache`: temporary data that is expected to survive LocalStack runs (is not cleared when LocalStack starts or stops)
-
### Configuration
+
- `/etc/localstack`: configuration directory
- `/etc/localstack/init`: root directory for [initialization hooks]({{< ref `init-hooks` >}})
@@ -133,7 +131,6 @@ For example, you have created an OpenSearch cluster and are trying to access tha
-
}}" class="justify-content-between d-flex flex-column text-center">
diff --git a/content/en/references/network-troubleshooting/endpoint-url/_index.md b/content/en/references/network-troubleshooting/endpoint-url/_index.md
index 65efa39605..b7092008f6 100644
--- a/content/en/references/network-troubleshooting/endpoint-url/_index.md
+++ b/content/en/references/network-troubleshooting/endpoint-url/_index.md
@@ -12,7 +12,10 @@ This documentation provides step-by-step guidance on how to access LocalStack se
{{< figure src="../images/1.svg" width="400" >}}
-Suppose you have LocalStack installed on your machine and want to access it using the AWS CLI. To connect, you must expose port 4566 from your LocalStack instance and connect to `localhost` or a domain name that points to `localhost`. While the LocalStack CLI does this automatically, when running the Docker container directly or with docker compose, you must configure it manually. Check out the [getting started documentation]({{< ref "getting-started/installation" >}}) for more information.
+Suppose you have LocalStack installed on your machine and want to access it using the AWS CLI.
+To connect, you must expose port 4566 from your LocalStack instance and connect to `localhost` or a domain name that points to `localhost`.
+While the LocalStack CLI does this automatically, when running the Docker container directly or with Docker Compose, you must configure it manually.
+Check out the [getting started documentation]({{< ref "getting-started/installation" >}}) for more information.
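A minimal sketch of the manual setup, assuming the default gateway port:

```sh
# publish the gateway port on localhost when starting the container yourself
docker run --rm -it -p 127.0.0.1:4566:4566 localstack/localstack

# then point the AWS CLI at the local endpoint
aws --endpoint-url=http://localhost:4566 s3api list-buckets
```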
{{< callout "tip" >}}
If you bind a domain name to `localhost`, ensure that you are not subject to [DNS rebind protection]({{< ref "dns-server#dns-rebind-protection" >}}).
@@ -24,7 +27,7 @@ You can also use the `GATEWAY_LISTEN` [configuration variable]({{< ref "referenc
{{< figure src="../images/4.svg" width="400" >}}
-Suppose your code is running inside an ECS container that LocalStack has created.
+Suppose your code is running inside an ECS container that LocalStack has created.
The LocalStack instance is available at the domain `localhost.localstack.cloud`.
All subdomains of `localhost.localstack.cloud` also resolve to the LocalStack instance, e.g. API Gateway default URLs.
@@ -55,7 +58,7 @@ aws --endpoint-url http://localstack-main:4566 s3api list-buckets
{{}}
services:
localstack:
- # ... other configuration here
+ # other configuration here
environment:
MAIN_DOCKER_NETWORK=ls
networks:
@@ -71,7 +74,6 @@ networks:
{{}}
-
## From your container
{{< figure src="../images/7.svg" width="400" >}}
@@ -95,7 +97,7 @@ localstack wait
# get the ip address of the LocalStack container
docker inspect localstack-main | \
- jq -r '.[0].NetworkSettings.Networks | to_entries | .[].value.IPAddress'
+ jq -r '.[0].NetworkSettings.Networks | to_entries | .[].value.IPAddress'
# prints 172.27.0.2
# run your application container
@@ -108,7 +110,7 @@ docker run --rm -it --network ls --name localstack-main localstack
# get the ip address of the LocalStack container
docker inspect localstack-main | \
- jq -r '.[0].NetworkSettings.Networks | to_entries | .[].value.IPAddress'
+ jq -r '.[0].NetworkSettings.Networks | to_entries | .[].value.IPAddress'
# prints 172.27.0.2
# run your application container
@@ -123,7 +125,7 @@ services:
image: localstack/localstack
ports:
# Now only required if you need to access LocalStack from the host
- - "127.0.0.1:4566:4566"
+ - "127.0.0.1:4566:4566"
# Now only required if you need to access LocalStack from the host
- "127.0.0.1:4510-4559:4510-4559"
environment:
@@ -155,7 +157,6 @@ networks:
{{< / tab >}}
{{% / tabpane %}}
-
For LocalStack versions before 2.3.0
To facilitate access to LocalStack from within the container, it's recommended to start LocalStack in a user-defined network and set the MAIN_DOCKER_NETWORK
environment variable to the network's name.
@@ -184,11 +185,11 @@ docker run --rm it --network my-network
{{}}
services:
localstack:
- # ... other configuration here
+ # other configuration here
networks:
- ls
your_container:
- # ... other configuration here
+ # other configuration here
networks:
- ls
networks:
@@ -225,10 +226,10 @@ docker run --rm -it -p 4566:4566 localstack
{{}}
services:
localstack:
- # ... other configuration here
+ # other configuration here
ports:
- "4566:4566"
- # ... other ports
+ # other ports
{{}}
{{}}
diff --git a/content/en/references/network-troubleshooting/transparent-endpoint-injection/_index.md b/content/en/references/network-troubleshooting/transparent-endpoint-injection/_index.md
index 2899dcb813..69b4c35af0 100644
--- a/content/en/references/network-troubleshooting/transparent-endpoint-injection/_index.md
+++ b/content/en/references/network-troubleshooting/transparent-endpoint-injection/_index.md
@@ -7,7 +7,8 @@ tags:
- networking
---
-Suppose you're attempting to access LocalStack, but you're relying on transparent endpoint injection to redirect AWS (`*.amazonaws.com`) requests. In such cases, there are different approaches you can take depending on your setup.
+Suppose you're attempting to access LocalStack, but you're relying on transparent endpoint injection to redirect AWS (`*.amazonaws.com`) requests.
+In such cases, there are different approaches you can take depending on your setup.
## From your host
diff --git a/content/en/references/podman.md b/content/en/references/podman.md
index 2d3d465bfc..4ae9b1e53e 100644
--- a/content/en/references/podman.md
+++ b/content/en/references/podman.md
@@ -13,7 +13,9 @@ Podman support is still experimental, and the following docs give you an overvie
From the Podman docs:
-> Podman is a daemonless, open source, Linux native tool designed to make it easy to find, run, build, share and deploy applications using Open Containers Initiative (OCI) Containers and Container Images. Podman provides a command line interface (CLI) familiar to anyone who has used the Docker Container Engine. Most users can simply alias Docker to Podman (`alias docker=podman`) without any problems.
+> Podman is a daemonless, open source, Linux native tool designed to make it easy to find, run, build, share and deploy applications using Open Containers Initiative (OCI) Containers and Container Images.
+> Podman provides a command line interface (CLI) familiar to anyone who has used the Docker Container Engine.
+> Most users can simply alias Docker to Podman (`alias docker=podman`) without any problems.
## Options
@@ -24,7 +26,9 @@ To run `localstack`, simply aliasing `alias docker=podman` is not enough, for th
Here are several options on running LocalStack using podman:
### podman-docker
-The package `podman-docker` emulates the Docker CLI using podman. It creates the following links:
+
+The package `podman-docker` emulates the Docker CLI using podman.
+It creates the following links:
- `/usr/bin/docker -> /usr/bin/podman`
- `/var/run/docker.sock -> /run/podman/podman.sock`
@@ -34,7 +38,9 @@ This package is available for some distros:
- https://packages.debian.org/sid/podman-docker
### Rootfull Podman with podman-docker
+
The simplest option is to run `localstack` using `podman` by installing `podman-docker` and running `localstack start` as root:
+
```sh
# you have to start the podman socket first
sudo systemctl start podman
@@ -44,6 +50,7 @@ sudo sh -c 'DEBUG=1 localstack start'
```
### Rootfull Podman without podman-docker
+
```sh
# you still have to start the podman socket first
sudo systemctl start podman
@@ -53,6 +60,7 @@ sudo sh -c 'DEBUG=1 DOCKER_CMD=podman DOCKER_HOST=unix://run/podman/podman.sock
```
### Rootless Podman
+
You have to prepare your environment first:
- https://wiki.archlinux.org/title/Podman#Rootless_Podman
- https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md
@@ -67,12 +75,15 @@ DEBUG=1 DOCKER_CMD="podman" DOCKER_SOCK=$XDG_RUNTIME_DIR/podman/podman.sock DOCK
```
If you have problems with [subuid and subgid](https://wiki.archlinux.org/title/Podman#Set_subuid_and_subgid), you could try the [`overlay.ignore_chown_errors` option](https://www.redhat.com/sysadmin/controlling-access-rootless-podman-users):
+
```sh
DEBUG=1 DOCKER_CMD="podman --storage-opt overlay.ignore_chown_errors=true" DOCKER_SOCK=$XDG_RUNTIME_DIR/podman/podman.sock DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock localstack start
```
+
### Podman on Windows
-You can run Podman on Windows using [WSLv2](https://learn.microsoft.com/en-us/windows/wsl/about#what-is-wsl-2). In the guide, we use a Docker Compose setup to run LocalStack.
+You can run Podman on Windows using [WSLv2](https://learn.microsoft.com/en-us/windows/wsl/about#what-is-wsl-2).
+In this guide, we use a Docker Compose setup to run LocalStack.
Initialize and start Podman:
@@ -81,13 +92,15 @@ $ podman machine init
$ podman machine start
{{< / command >}}
-At this stage, Podman operates in rootless mode, where exposing port 443 on Windows is not possible. To enable this, switch Podman to rootful mode using the following command:
+At this stage, Podman operates in rootless mode, where exposing port 443 on Windows is not possible.
+To enable this, switch Podman to rootful mode using the following command:
{{< command >}}
podman machine set --rootful
{{< / command >}}
-For the Docker Compose setup, use the following configuration. When running in rootless mode, ensure to comment out the HTTPS gateway port, as it is unable to bind to privileged ports below 1024.
+For the Docker Compose setup, use the following configuration.
+When running in rootless mode, make sure to comment out the HTTPS gateway port, as it cannot bind to privileged ports below 1024.
```yaml
version: "3.8"
diff --git a/content/en/references/usage-tracking.md b/content/en/references/usage-tracking.md
index ad72f10ec2..e525294463 100644
--- a/content/en/references/usage-tracking.md
+++ b/content/en/references/usage-tracking.md
@@ -9,11 +9,14 @@ aliases:
## Overview
-For license activations, we track the timestamp and the licensing credentials. We need to do this to make CI credits work. It is tracked regardless of whether the user disables event tracking since we collect this in the backend, not the client.
+For license activations, we track the timestamp and the licensing credentials.
+We need to do this to make CI credits work.
+It is tracked regardless of whether the user disables event tracking since we collect this in the backend, not the client.
## LocalStack usage statistics
-For Pro users, most of the information is collected to populate the [Stack Insights](https://docs.localstack.cloud/user-guide/web-application/stack-insights) dashboard. Collecting basic anonymized usage of AWS services helps us better direct engineering efforts to services that are used the most or cause the most issues.
+For Pro users, most of the information is collected to populate the [Stack Insights](https://docs.localstack.cloud/user-guide/web-application/stack-insights) dashboard.
+Collecting basic anonymized usage of AWS services helps us better direct engineering efforts to services that are used the most or cause the most issues.
### Session information
@@ -50,7 +53,8 @@ The AWS API call metadata includes:
- The service being called (like `s3` or `lambda`)
- The operation being called (like `PutObject`, `CreateQueue`, `DeleteQueue`)
- The HTTP status code of the response
-- If it is a 400 error, we collect the error type and message. If it is a 500 error (internal LocalStack error), and `DEBUG=1` is enabled, we may also collect the stack trace to help us identify LocalStack bugs
+- If it is a 400 error, we collect the error type and message.
+ If it is a 500 error (internal LocalStack error), and `DEBUG=1` is enabled, we may also collect the stack trace to help us identify LocalStack bugs
- Whether the call originated from inside LocalStack
- The region the user made the call to
- The dummy account ID under which the user made the request
@@ -83,7 +87,8 @@ For the community image, we only track service, operation, status code, and how
### CLI invocations
-We collect an anonymized event if a CLI command was invoked, but do not collect any of the parameter values. This event is not connected to the session or the auth token.
+We collect an anonymized event if a CLI command was invoked, but do not collect any of the parameter values.
+This event is not connected to the session or the auth token.
Here is an example of a CLI invocation event:
@@ -113,7 +118,8 @@ We collect the usage of particular features in an anonymized and aggregated way.
- Specific LocalStack configuration values
- Content or file names of files being uploaded to S3
-- More generally, we don't collect any parameters of AWS API Calls. We do not track S3 bucket names, Lambda function names, EC2 configurations, or anything similar
+- More generally, we don't collect any parameters of AWS API Calls.
+ We do not track S3 bucket names, Lambda function names, EC2 configurations, or anything similar
- Any sensitive information about the request (like credentials and URL parameters)
## Configuration
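As a sketch, assuming the standard `DISABLE_EVENTS` configuration variable, event tracking can be switched off for a run like this:

```sh
# opt out of usage event tracking for this LocalStack run
DISABLE_EVENTS=1 localstack start
```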
diff --git a/content/en/tutorials/_index.md b/content/en/tutorials/_index.md
index 0cc7469c5a..272fdfa0d2 100644
--- a/content/en/tutorials/_index.md
+++ b/content/en/tutorials/_index.md
@@ -15,4 +15,4 @@ type: tutorials
---
-
\ No newline at end of file
+
diff --git a/content/en/tutorials/cloud-pods-collaborative-debugging/index.md b/content/en/tutorials/cloud-pods-collaborative-debugging/index.md
index 2904101a6a..75ea166835 100644
--- a/content/en/tutorials/cloud-pods-collaborative-debugging/index.md
+++ b/content/en/tutorials/cloud-pods-collaborative-debugging/index.md
@@ -33,8 +33,10 @@ By replicating environments, teams can share the exact conditions under which a
For developing AWS applications locally, the tool of choice is LocalStack, which can sustain a full-blown comprehensive stack.
However, when issues appear, and engineers need a second opinion from a colleague, recreating the environment from scratch can leave
-details slipping through the cracks. This is where Cloud Pods come in, to encapsulate the state of the LocalStack instance and allow for seamless
-collaboration. While databases have snapshots, similarly, LocalStack uses Cloud Pods for reproducing state and data.
+details slipping through the cracks.
+This is where Cloud Pods come in, to encapsulate the state of the LocalStack instance and allow for seamless
+collaboration.
+Just as databases have snapshots, LocalStack uses Cloud Pods to reproduce state and data.
In this tutorial, we will explore a common situation where a basic IAM misconfiguration causes unnecessary delays in finding the right solution.
We will also discuss the best practices to prevent this and review some options for configuring Cloud Pod storage.
@@ -50,27 +52,32 @@ The full sample application can be found [on GitHub](https://github.com/localsta
- Basic knowledge of AWS services (API Gateway, Lambda, DynamoDB, IAM)
- Basic understanding of Terraform for provisioning AWS resources
-In this demo scenario, a new colleague, Bob, joins the company, clones the application repository, and starts working on the Lambda code. He will add the necessary
-resources in the Terraform configuration file and some IAM policies that the functions need in order to access the database.
-He is following good practice rules, where the resource has only the necessary permissions. However, Bob encounters an error despite this.
+In this demo scenario, a new colleague, Bob, joins the company, clones the application repository, and starts working on the Lambda code.
+He will add the necessary resources in the Terraform configuration file and some IAM policies that the functions need in order to access the database.
+He is following good practice, granting each resource only the necessary permissions.
+However, Bob encounters an error despite this.
### Architecture Overview
The stack consists of an API Gateway that exposes endpoints and integrates with two Lambda functions responsible for adding and fetching
-products from a DynamoDB database. IAM policies are enforced to ensure compliance with the
+products from a DynamoDB database.
+IAM policies are enforced to ensure compliance with the
**[principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege)**, and the logs will be sent to the CloudWatch service.
### Note
-This demo application is suitable for AWS and behaves the same as on LocalStack. You can try this out by running the Terraform configuration file against the AWS platform.
+This demo application is suitable for AWS and behaves the same there as on LocalStack.
+You can try this out by running the Terraform configuration file against the AWS platform.

### Starting LocalStack
-In the root directory, there is a `docker-compose.yml` file that will spin up version 3.3.0 of LocalStack, with an
-important configuration flag, `ENFORCE_IAM=1`, which will facilitate IAM policy evaluation and enforcement. For this
-example, a `LOCALSTACK_AUTH_TOKEN` is needed, which you can find in the LocalStack web app on the
+In the root directory, there is a `docker-compose.yml` file that will spin up version 3.3.0 of LocalStack, with an important configuration flag, `ENFORCE_IAM=1`, which enables IAM policy evaluation and enforcement.
+For this example, a `LOCALSTACK_AUTH_TOKEN` is needed, which you can find in the LocalStack web app on the
[Getting Started](https://app.localstack.cloud/getting-started) page.
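If you prefer not to use Compose, an equivalent way to start the same setup is a plain `docker run`; this is only a sketch, assuming the Pro image and the flags described above:

```sh
docker run --rm -it -p 4566:4566 \
  -e LOCALSTACK_AUTH_TOKEN="$LOCALSTACK_AUTH_TOKEN" \
  -e ENFORCE_IAM=1 \
  localstack/localstack-pro:3.3.0
```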
{{< command >}}
@@ -81,7 +88,8 @@ $ docker compose up
### The Terraform Configuration File
The entire Terraform configuration file for setting up the application stack is available in the same repository at
-https://github.com/localstack-samples/cloud-pods-collaboration-demo/blob/main/terraform/main.tf. To deploy all the resources on LocalStack,
+https://github.com/localstack-samples/cloud-pods-collaboration-demo/blob/main/terraform/main.tf.
+To deploy all the resources on LocalStack,
navigate to the project's root folder and use the following commands:
{{< command >}}
@@ -91,15 +99,16 @@ $ tflocal plan
$ tflocal apply --auto-approve
{{ command >}}
-`tflocal` is a small wrapper script to run Terraform against LocalStack. The endpoints for all services are configured to point to the
+`tflocal` is a small wrapper script to run Terraform against LocalStack.
+The endpoints for all services are configured to point to the
LocalStack API, which allows you to deploy your unmodified Terraform scripts against LocalStack.
- **`init`**: This command initializes the Terraform working directory, installs any necessary plugins, and sets up the backend.
- **`plan`**: Creates an execution plan, which allows you to review the actions Terraform will take to change your infrastructure.
-- **`apply`**: Finally, the **`apply`** command applies the changes required to reach the desired state of the configuration.
+- **`apply`**: Finally, the **`apply`** command applies the changes required to reach the desired state of the configuration.
If **`--auto-approve`** is used, it bypasses the interactive approval step normally required.
-As mentioned previously, there is something missing from this configuration, and that is the **`GetItem`** operation permission
+As mentioned previously, there is something missing from this configuration, and that is the **`GetItem`** operation permission
for one of the Lambda functions:
```java
@@ -129,7 +138,8 @@ Bob has mistakenly used `dynamodb:Scan` and `dynamodb:Query`, but missed adding
### Reproducing the issue locally
-Let’s test out the current state of the application. The Terraform configuration file outputs the REST API ID of the API Gateway.
+Let’s test out the current state of the application.
+The Terraform configuration file outputs the REST API ID of the API Gateway.
We can capture that value and use it further to invoke the **`add-product`** Lambda:
{{< command >}}
@@ -150,7 +160,8 @@ $ curl --location "http://$rest_api_id.execute-api.localhost.localstack.cloud:45
--data '{
"id": "34534",
"name": "EcoFriendly Water Bottle",
- "description": "A durable, eco-friendly water bottle designed to keep your drinks cold for up to 24 hours and hot for up to 12 hours. Made from high-quality, food-grade stainless steel, it'\''s perfect for your daily hydration needs.",
+    "description": "A durable, eco-friendly water bottle designed to keep your drinks cold for up to 24 hours and hot for up to 12 hours. Made from high-quality, food-grade stainless steel, it'\''s perfect for your daily hydration needs.",
"price": "29.99"
}'
@@ -159,7 +170,8 @@ $ curl --location "http://$rest_api_id.execute-api.localhost.localstack.cloud:45
--data '{
"id": "82736",
"name": "Sustainable Hydration Flask",
- "description": "This sustainable hydration flask is engineered to maintain your beverages at the ideal temperature—cold for 24 hours and hot for 12 hours. Constructed with premium, food-grade stainless steel, it offers an environmentally friendly solution to stay hydrated throughout the day.",
+    "description": "This sustainable hydration flask is engineered to maintain your beverages at the ideal temperature—cold for 24 hours and hot for 12 hours. Constructed with premium, food-grade stainless steel, it offers an environmentally friendly solution to stay hydrated throughout the day.",
"price": "31.50"
}'
{{ command >}}
@@ -175,9 +187,10 @@ Internal server error⏎
{{ command >}}
-
-An `Internal server error⏎` does not give out too much information. Bob does not know for sure what could be
-causing this. The Lambda code and the configurations look fine to him.
+An `Internal server error⏎` does not give out too much information.
+Bob does not know for sure what could be causing this.
+The Lambda code and the configurations look fine to him.
## Using Cloud Pods for collaborative debugging
@@ -197,7 +210,7 @@ Services: sts,iam,apigateway,dynamodb,lambda,s3,cloudwatch,logs
LocalStack provides a remote storage backend that can be used to store the state of your application and share it with your team members.
-The Cloud Pods CLI is included in the LocalStack CLI installation, so there’s no need for additional plugins to begin using it.
+The Cloud Pods CLI is included in the LocalStack CLI installation, so there’s no need for additional plugins to begin using it.
The `LOCALSTACK_AUTH_TOKEN` needs to be set as an environment variable.
Additionally, there are other commands for managing Cloud Pods included in the CLI:
@@ -222,14 +235,13 @@ Commands:
{{ command >}}
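For example, Bob can share the state he is looking at by saving it under a name; this is a sketch, and the pod name simply matches the one loaded below:

```sh
# capture the current state of the running LocalStack instance as a Cloud Pod
localstack pod save cloud-pod-product-app
```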
-
### Pulling and Loading the Cloud Pod
The workflow between Alice and Bob is incredibly easy:

-Now, in a fresh LocalStack instance, Alice can immediately load the Cloud Pod, because she's part of the
+Now, in a fresh LocalStack instance, Alice can immediately load the Cloud Pod, because she's part of the
same organization:
{{< command >}}
@@ -241,35 +253,44 @@ Cloud Pod cloud-pod-product-app successfully loaded
### Debugging and Resolving the Issue
-Not only can Alice easily reproduce the bug now, but she also has access to the state and data of the services
+Not only can Alice easily reproduce the bug now, but she also has access to the state and data of the services
involved, meaning that the Lambda logs are still in the CloudWatch log groups.

-By spotting the error message, there’s an instant starting point for checking the source of the problem. The error message displayed in the logs is very specific:
+By spotting the error message, there’s an instant starting point for checking the source of the problem.
+The error message displayed in the logs is very specific:
`"Error: User: arn:aws:sts::000000000000:assumed-role/productRole/get-product is not authorized to perform: dynamodb:GetItem on resource: arn:aws:dynamodb:us-east-1:000000000000:table/Products because no identity-based policy allows the dynamodb:GetItem action (Service: DynamoDb, Status Code: 400, Request ID: d50e9dad-a01a-4860-8c21-e844a930ba7d)"`
### Identifying the Misconfiguration
-The error points to a permissions issue related to accessing DynamoDB. The action **`dynamodb:GetItem`** is
-not authorized for the role, preventing the retrieval of a product by its ID. This kind of error was not foreseen as one
-of the exceptions to be handled in the application. IAM policies are not always easy and straightforward, so it's a well known fact that
+The error points to a permissions issue related to accessing DynamoDB.
+The action **`dynamodb:GetItem`** is not authorized for the role, preventing the retrieval of a product by its ID.
+This kind of error was not foreseen as one of the exceptions to be handled in the application.
+IAM policies are not always straightforward, so it's a well-known fact that
these configurations are prone to mistakes.
-To confirm the finding, Alice now has the exact same environment to reproduces the error in. There are no machine specific configurations and
-no other manual changes. This leads to the next step in troubleshooting: **inspecting the Terraform configuration file** responsible
+To confirm the finding, Alice now has the exact same environment in which to reproduce the error.
+There are no machine-specific configurations and no other manual changes.
+This leads to the next step in troubleshooting: **inspecting the Terraform configuration file** responsible
for defining the permissions attached to the Lambda role for interacting with DynamoDB.
### Fixing the Terraform Configuration
Upon review, Alice discovers that the Terraform configuration does not include the necessary permission **`dynamodb:GetItem`** in the
-policy attached to the Lambda role. This oversight explains the error message. The Terraform configuration file acts as a
+policy attached to the Lambda role.
+This oversight explains the error message.
+The Terraform configuration file acts as a
blueprint for AWS resource permissions, and any missing action can lead to errors related to authorization.
-This scenario underscores the importance of thorough review and testing of IAM roles and policies when working with AWS resources.
-It's easy to overlook a single action in a policy, but as we've seen, such an omission can significantly impact application
-functionality. By carefully checking the Terraform configuration files and ensuring that all necessary permissions are included,
+This scenario underscores the importance of thorough review and testing of IAM roles and policies when working with AWS resources.
+It's easy to overlook a single action in a policy, but as we've seen, such an omission can significantly impact application functionality.
+By carefully checking the Terraform configuration files and ensuring that all necessary permissions are included,
developers can avoid similar issues and ensure a smoother, error-free interaction with AWS services.
The action list should now look like this:
@@ -293,19 +314,22 @@ resource "aws_iam_policy" "lambda_dynamodb_policy" {
},
]
})
-}
+}
{{ command >}}
-To double-check, Alice creates the stack on AWS, and observes that the issue is the same, related to policy
+To double-check, Alice creates the stack on AWS, and observes that the issue is the same, related to policy
misconfiguration:

### Impact on the team
-Alice has updated the infrastructure and deployed a new version of the Cloud Pod with the necessary fixes. Bob will
-access the updated infrastructure and proceed with his tasks. Meanwhile, Carol is developing integration tests for the
-CI pipeline. She will use the stable version of the infrastructure to ensure that the workflows function effectively from
+Alice has updated the infrastructure and deployed a new version of the Cloud Pod with the necessary fixes.
+Bob will access the updated infrastructure and proceed with his tasks.
+Meanwhile, Carol is developing integration tests for the CI pipeline.
+She will use the stable version of the infrastructure to ensure that the workflows function effectively from
start to finish.

@@ -320,12 +344,14 @@ The Cloud Pods command-line interface enables users to manage these remotes with
## Conclusion
-Cloud Pods play a crucial role in team collaboration, significantly speeding up development processes. The multiple and
-versatile options for remote storage can support different business requirements for companies that prefer using the
-environments they control. Cloud Pods are not just for teamwork; they also excel in other areas, such as creating
+Cloud Pods play a crucial role in team collaboration, significantly speeding up development processes.
+The multiple and versatile options for remote storage can support different business requirements for companies that prefer using the environments they control.
+Cloud Pods are not just for teamwork; they also excel in other areas, such as creating
resources in Continuous Integration (CI) for ultra-fast testing pipelines.
## Additional resources
- [Cloud Pods documentation](https://docs.localstack.cloud/user-guide/state-management/cloud-pods/)
-- [Terraform for AWS](https://developer.hashicorp.com/terraform/tutorials/aws-get-started)
\ No newline at end of file
+- [Terraform for AWS](https://developer.hashicorp.com/terraform/tutorials/aws-get-started)
diff --git a/content/en/tutorials/ecs-ecr-container-app/index.md b/content/en/tutorials/ecs-ecr-container-app/index.md
index 0db2965367..b138b23b79 100644
--- a/content/en/tutorials/ecs-ecr-container-app/index.md
+++ b/content/en/tutorials/ecs-ecr-container-app/index.md
@@ -24,22 +24,28 @@ pro: true
leadimage: "ecs-ecr-container-app-featured-image.png"
---
-[Amazon Elastic Container Service (ECS)](https://aws.amazon.com/ecs/) is a fully-managed container orchestration service that simplifies the deployment, management, and scaling of Docker containers on AWS. With support for two [launch types](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_types.html), EC2 and Fargate, ECS allows you to run containers on your cluster of EC2 instances or have AWS manage your underlying infrastructure with Fargate. The Fargate launch type provides a serverless-like experience for running containers, allowing you to focus on your applications instead of infrastructure.
+[Amazon Elastic Container Service (ECS)](https://aws.amazon.com/ecs/) is a fully-managed container orchestration service that simplifies the deployment, management, and scaling of Docker containers on AWS.
+With support for two [launch types](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_types.html), EC2 and Fargate, ECS allows you to run containers on your cluster of EC2 instances or have AWS manage your underlying infrastructure with Fargate.
+The Fargate launch type provides a serverless-like experience for running containers, allowing you to focus on your applications instead of infrastructure.
-[Amazon Elastic Container Registry (ECR)](https://aws.amazon.com/ecr/) is a fully-managed service that allows you to store, manage, and deploy Docker container images. It is tightly integrated with other AWS services such as ECS, EKS, and Lambda, enabling you to quickly deploy your container images to these services. With ECR, you can version, tag, and manage your container images’ lifecycles independently of your applications, making it easy to maintain and deploy your containers.
+[Amazon Elastic Container Registry (ECR)](https://aws.amazon.com/ecr/) is a fully-managed service that allows you to store, manage, and deploy Docker container images.
+It is tightly integrated with other AWS services such as ECS, EKS, and Lambda, enabling you to quickly deploy your container images to these services.
+With ECR, you can version, tag, and manage your container images’ lifecycles independently of your applications, making it easy to maintain and deploy your containers.
-ECS tasks can pull container images from ECR repositories and are customizable using task definitions to specify settings such as CPU and memory limits, environment variables, and networking configurations. [LocalStack Pro](https://localstack.cloud/) allows creating ECR registries, repositories, and ECS clusters and tasks on your local machine. This tutorial will showcase using LocalStack to set up an NGINX web server to serve a static website using CloudFormation templates in a local AWS environment.
+ECS tasks can pull container images from ECR repositories and are customizable using task definitions to specify settings such as CPU and memory limits, environment variables, and networking configurations.
+[LocalStack Pro](https://localstack.cloud/) allows creating ECR registries, repositories, and ECS clusters and tasks on your local machine.
+This tutorial will showcase using LocalStack to set up an NGINX web server to serve a static website using CloudFormation templates in a local AWS environment.
## Prerequisites
-- [LocalStack Pro](https://localstack.cloud/pricing/)
-- [awslocal]({{< ref "aws-cli#localstack-aws-cli-awslocal" >}})
-- [Docker](https://docker.io/)
-- [`cURL`](https://curl.se/download.html)
+- [LocalStack Pro](https://localstack.cloud/pricing/)
+- [awslocal]({{< ref "aws-cli#localstack-aws-cli-awslocal" >}})
+- [Docker](https://docker.io/)
+- [`cURL`](https://curl.se/download.html)
## Creating the Docker image
-To start setting up an NGINX web server on an ECS cluster, we need to create a Docker image that can be pushed to an ECR repository. We'll begin by creating a `Dockerfile` that defines the configuration for our NGINX web server.
+To start setting up an NGINX web server on an ECS cluster, we need to create a Docker image that can be pushed to an ECR repository.
+We'll begin by creating a `Dockerfile` that defines the configuration for our NGINX web server.
```dockerfile
FROM nginx
@@ -47,19 +53,23 @@ FROM nginx
ENV foo=bar
```
-The `Dockerfile` uses the official `nginx` image from Docker Hub, which allows us to serve the default index page. Before building our Docker image, we need to start LocalStack and create an ECR repository to push our Docker image. To start LocalStack with the `LOCALSTACK_AUTH_TOKEN` environment variable, run the following command:
+The `Dockerfile` uses the official `nginx` image from Docker Hub, which allows us to serve the default index page.
+Before building our Docker image, we need to start LocalStack and create an ECR repository to push our Docker image.
+To start LocalStack with the `LOCALSTACK_AUTH_TOKEN` environment variable, run the following command:
{{< command >}}
$ LOCALSTACK_AUTH_TOKEN= localstack start -d
{{< / command >}}
-Next, we will create an ECR repository to push our Docker image. We will use the `awslocal` CLI to create the repository.
+Next, we will create an ECR repository to push our Docker image.
+We will use the `awslocal` CLI to create the repository.
{{< command >}}
$ awslocal ecr create-repository --repository-name
{{< / command >}}
-Replace `` with your desired repository name. The output of this command will contain the `repositoryUri` value that we'll need in the next step:
+Replace `` with your desired repository name.
+The output of this command will contain the `repositoryUri` value that we'll need in the next step:
```json
{
@@ -86,19 +96,23 @@ Copy the `repositoryUri` value from the output and replace `` in
$ docker build -t .
{{< / command >}}
-This command will build the Docker image for our NGINX web server. After the build is complete, we'll push the Docker image to the ECR repository we created earlier using the following command:
+This command will build the Docker image for our NGINX web server.
+After the build is complete, we'll push the Docker image to the ECR repository we created earlier using the following command:
{{< command >}}
$ docker push
{{< / command >}}
-After a few seconds, the Docker image will be pushed to the local ECR repository. We can now create an ECS cluster and deploy our NGINX web server.
+After a few seconds, the Docker image will be pushed to the local ECR repository.
+We can now create an ECS cluster and deploy our NGINX web server.
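If you want to confirm that the push worked, listing the images in the repository is a quick check; the repository name placeholder below is whatever you chose above:

```sh
awslocal ecr list-images --repository-name <repository-name>
```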
## Creating the local ECS infrastructure
-LocalStack enables the deployment of ECS task definitions, services, and tasks, allowing us to deploy our ECR containers via the ECS Fargate launch type, which uses the local Docker engine to deploy containers locally. To create the necessary ECS infrastructure on our local machine before deploying our NGINX web server, we will use a CloudFormation template.
+LocalStack enables the deployment of ECS task definitions, services, and tasks, allowing us to deploy our ECR containers via the ECS Fargate launch type, which uses the local Docker engine to deploy containers locally.
+To create the necessary ECS infrastructure on our local machine before deploying our NGINX web server, we will use a CloudFormation template.
-You can create a new file named `ecs.infra.yml` inside a new `templates` directory, using a [publicly available CloudFormation template as a starting point](https://github.com/awslabs/aws-cloudformation-templates/blob/master/aws/services/ECS/FargateLaunchType/clusters/public-vpc.yml). To begin, we'll add the `Mappings` section and configure the subnet mask values, which define the range of internal IP addresses that can be assigned.
+You can create a new file named `ecs.infra.yml` inside a new `templates` directory, using a [publicly available CloudFormation template as a starting point](https://github.com/awslabs/aws-cloudformation-templates/blob/master/aws/services/ECS/FargateLaunchType/clusters/public-vpc.yml).
+To begin, we'll add the `Mappings` section and configure the subnet mask values, which define the range of internal IP addresses that can be assigned.
```yaml
AWSTemplateFormatVersion: '2010-09-09'
@@ -299,9 +313,11 @@ Resources:
Resource: '*'
```
-So far, we have set up the VPC where the containers will be networked and created networking resources for the public subnets. We have also added a security group for the container running in Fargate and an IAM role that authorizes ECS to manage resources in the VPC.
+So far, we have set up the VPC where the containers will be networked and created networking resources for the public subnets.
+We have also added a security group for the container running in Fargate and an IAM role that authorizes ECS to manage resources in the VPC.
-Next, we can configure the outputs generated by the CloudFormation template. These outputs are values generated during the creation of the CloudFormation stack and can be used by other resources or scripts in your application.
+Next, we can configure the outputs generated by the CloudFormation template.
+These outputs are values generated during the creation of the CloudFormation stack and can be used by other resources or scripts in your application.
To export the values as CloudFormation outputs, we can add the following to the end of our `ecs.infra.yml` file:
@@ -360,17 +376,21 @@ To deploy the CloudFormation template we created earlier, use the following comm
$ awslocal cloudformation create-stack --stack-name --template-body file://templates/ecs.infra.yml
{{< /command >}}
-Make sure to replace `` with a name of your choice. Wait until the stack status changes to `CREATE_COMPLETE` by running the following command:
+Make sure to replace `` with a name of your choice.
+Wait until the stack status changes to `CREATE_COMPLETE` by running the following command:
{{< command >}}
$ awslocal cloudformation wait stack-create-complete --stack-name
{{< /command >}}
-You can also check your deployed stack on the LocalStack Web Application by navigating to the [CloudFormation resource browser](https://app.localstack.cloud/resources/cloudformation/stacks). With the ECS infrastructure now in place, we can proceed to deploy our NGINX web server.
+You can also check your deployed stack on the LocalStack Web Application by navigating to the [CloudFormation resource browser](https://app.localstack.cloud/resources/cloudformation/stacks).
+With the ECS infrastructure now in place, we can proceed to deploy our NGINX web server.
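You can also read the exported outputs back from the CLI; the stack name here is just an example:

```sh
awslocal cloudformation describe-stacks --stack-name ecs-infra --query "Stacks[0].Outputs"
```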
## Deploying the ECS service
-To deploy the ECS service, we'll use another CloudFormation template. You can create a new file named `ecs.sample.yml` in the `templates` directory, based on the [publicly available CloudFormation template](https://github.com/awslabs/aws-cloudformation-templates/blob/master/aws/services/ECS/FargateLaunchType/services/public-service.yml). This template will deploy the ECS service on AWS Fargate and expose it via a public load balancer.
+To deploy the ECS service, we'll use another CloudFormation template.
+You can create a new file named `ecs.sample.yml` in the `templates` directory, based on the [publicly available CloudFormation template](https://github.com/awslabs/aws-cloudformation-templates/blob/master/aws/services/ECS/FargateLaunchType/services/public-service.yml).
+This template will deploy the ECS service on AWS Fargate and expose it via a public load balancer.
Before we proceed, let's declare the parameters for the CloudFormation template:
@@ -532,41 +552,50 @@ Next, let's deploy the CloudFormation template by running the following command:
$ awslocal cloudformation create-stack --stack-name --template-body file://templates/ecs.sample.yml --parameters ParameterKey=ImageUrl,ParameterValue=
{{< /command >}}
-Replace `` with a name of your choice and `` with the URI of the Docker image that you want to deploy. Wait for the stack to be created by running the following command:
+Replace `` with a name of your choice and `` with the URI of the Docker image that you want to deploy.
+Wait for the stack to be created by running the following command:
{{< command >}}
$ awslocal cloudformation wait stack-create-complete --stack-name
{{< /command >}}
-Now that the ECS service has been deployed successfully, let's access the application endpoint. First, let's list all the ECS clusters we have deployed in our local environment by running the following command to retrieve the cluster ARN:
+Now that the ECS service has been deployed successfully, let's access the application endpoint.
+First, let's list all the ECS clusters we have deployed in our local environment by running the following command to retrieve the cluster ARN:
{{< command >}}
$ awslocal ecs list-clusters | jq -r '.clusterArns[0]'
{{< /command >}}
-Save the output of the above command as `CLUSTER_ARN`, as we will use it to list the tasks running in the cluster. Next, run the following command to list the task ARN:
+Save the output of the above command as `CLUSTER_ARN`, as we will use it to list the tasks running in the cluster.
+Next, run the following command to list the task ARN:
{{< command >}}
$ awslocal ecs list-tasks --cluster | jq -r '.taskArns[0]'
{{< /command >}}
-Save the task ARN as `TASK_ARN`. Let us now list the port number on which the application is running. Run the following command:
+Save the task ARN as `TASK_ARN`.
+Let us now list the port number on which the application is running.
+Run the following command:
{{< command >}}
$ awslocal ecs describe-tasks --cluster --tasks | jq -r '.tasks[0].containers[0].networkBindings[0].hostPort'
{{< /command >}}
-Earlier, we configured the application to run on port `45139`, in our `HostPort` parameter. Let us now access the application endpoint. Run the following command to get the public IP address of the host:
+Earlier, we configured the application to run on port `45139`, in our `HostPort` parameter.
+Let us now access the application endpoint.
+Run the following command to send a request to the web server:
{{< command >}}
$ curl localhost:45139
{{< /command >}}
-Alternatively, in the address bar of your web browser, you can navigate to [`localhost:45139`](https://localhost:45139/). You should see the default index page of the NGINX web server.
+Alternatively, in the address bar of your web browser, you can navigate to [`localhost:45139`](http://localhost:45139/).
+You should see the default index page of the NGINX web server.
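For a quick scripted check, you can also look for the default NGINX page title in the response (a sketch):

```sh
curl -s localhost:45139 | grep -o "<title>.*</title>"
# <title>Welcome to nginx!</title>
```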
## Conclusion
-In this tutorial, we have demonstrated how to deploy a containerized service locally using Amazon ECS, ECR, and LocalStack. We have also shown how you can use CloudFormation templates with the awslocal CLI to deploy your local AWS infrastructure.
+In this tutorial, we have demonstrated how to deploy a containerized service locally using Amazon ECS, ECR, and LocalStack.
+We have also shown how you can use CloudFormation templates with the awslocal CLI to deploy your local AWS infrastructure.
With LocalStack, you can easily mount code from your host filesystem into the ECS container, allowing for a quicker debugging loop that doesn't require rebuilding and redeploying the task's Docker image for each change.
diff --git a/content/en/tutorials/elb-load-balancing/index.md b/content/en/tutorials/elb-load-balancing/index.md
index ea002dfd5a..f00d2357ac 100644
--- a/content/en/tutorials/elb-load-balancing/index.md
+++ b/content/en/tutorials/elb-load-balancing/index.md
@@ -24,13 +24,22 @@ pro: true
leadimage: "elb-load-balancing-featured-image.png"
---
-[Elastic Load Balancer (ELB)](https://aws.amazon.com/elasticloadbalancing/) is a service that distributes incoming application traffic across multiple targets, such as EC2 instances, containers, IP addresses, and Lambda functions. ELBs can be physical hardware or virtual software components. They accept incoming traffic and distribute it across multiple targets in one or more Availability Zones. Using ELB, you can quickly scale your load balancer to accommodate changes in traffic over time, ensuring optimal performance for your application and workloads running on the AWS infrastructure.
+[Elastic Load Balancer (ELB)](https://aws.amazon.com/elasticloadbalancing/) is a service that distributes incoming application traffic across multiple targets, such as EC2 instances, containers, IP addresses, and Lambda functions.
+ELBs can be physical hardware or virtual software components.
+They accept incoming traffic and distribute it across multiple targets in one or more Availability Zones.
+Using ELB, you can quickly scale your load balancer to accommodate changes in traffic over time, ensuring optimal performance for your application and workloads running on the AWS infrastructure.
ELB provides three types of load balancers: [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html), [Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html), and [Gateway Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/gateway/introduction.html).
-In this tutorial we focus on the Application Load Balancer (ALB), which operates at the Application layer of the OSI model and is specifically designed for load balancing HTTP and HTTPS traffic for web applications. ALB works at the request level, allowing advanced load-balancing features for HTTP and HTTPS requests. It also enables you to register Lambda functions as targets. You can configure a listener rule that forwards requests to a target group for your Lambda function, triggering its execution to process the request.
+In this tutorial we focus on the Application Load Balancer (ALB), which operates at the Application layer of the OSI model and is specifically designed for load balancing HTTP and HTTPS traffic for web applications.
+ALB works at the request level, allowing advanced load-balancing features for HTTP and HTTPS requests.
+It also enables you to register Lambda functions as targets.
+You can configure a listener rule that forwards requests to a target group for your Lambda function, triggering its execution to process the request.
-[LocalStack Pro](https://localstack.cloud) extends support for ELB Application Load Balancers and the configuration of target groups, including Lambda functions. This tutorial will guide you through setting up an ELB Application Load Balancer to configure Node.js Lambda functions as targets. We will utilize the [Serverless framework](http://serverless.com/) along with the [`serverless-localstack` plugin](https://www.serverless.com/plugins/serverless-localstack) to simplify the setup. Additionally, we will demonstrate how to set up ELB endpoints to efficiently forward requests to the target group associated with your Lambda functions.
+[LocalStack Pro](https://localstack.cloud) extends support for ELB Application Load Balancers and the configuration of target groups, including Lambda functions.
+This tutorial will guide you through setting up an ELB Application Load Balancer to configure Node.js Lambda functions as targets.
+We will utilize the [Serverless framework](http://serverless.com/) along with the [`serverless-localstack` plugin](https://www.serverless.com/plugins/serverless-localstack) to simplify the setup.
+Additionally, we will demonstrate how to set up ELB endpoints to efficiently forward requests to the target group associated with your Lambda functions.
## Prerequisites
@@ -42,13 +51,16 @@ In this tutorial we focus on the Application Load Balancer (ALB), which operates
## Setup a Serverless project
-Serverless is an open-source framework that enables you to build, package, and deploy serverless applications seamlessly across various cloud providers and platforms. With the Serverless framework, you can easily set up your serverless development environment, define your applications as functions and events, and deploy your entire infrastructure to the cloud using a single command. To start using the Serverless framework, install the Serverless framework globally by executing the following command using `npm`:
+Serverless is an open-source framework that enables you to build, package, and deploy serverless applications seamlessly across various cloud providers and platforms.
+With the Serverless framework, you can easily set up your serverless development environment, define your applications as functions and events, and deploy your entire infrastructure to the cloud using a single command.
+To start using the Serverless framework, install the Serverless framework globally by executing the following command using `npm`:
{{< command >}}
$ npm install -g serverless
{{< / command >}}
-The above command installs the Serverless framework globally on your machine. After the installation is complete, you can verify it by running the following command:
+The above command installs the Serverless framework globally on your machine.
+After the installation is complete, you can verify it by running the following command:
{{< command >}}
$ serverless --version
@@ -58,21 +70,27 @@ Plugin: 6.2.2
SDK: 4.3.2
{{< / command >}}
-This command displays the version numbers of the Serverless framework's core, plugins, and SDK you installed. Now, let's proceed with creating a new Serverless project using the `serverless` command:
+This command displays the version numbers of the Serverless framework's core, plugins, and SDK you installed.
+Now, let's proceed with creating a new Serverless project using the `serverless` command:
{{< command >}}
$ serverless create --template aws-nodejs --path serverless-elb
{{< / command >}}
-In this example, we use the `aws-nodejs` template to create our Serverless project. This template includes a simple Node.js Lambda function that returns a message when invoked. It also generates a `serverless.yml` file that contains the project's configuration.
+In this example, we use the `aws-nodejs` template to create our Serverless project.
+This template includes a simple Node.js Lambda function that returns a message when invoked.
+It also generates a `serverless.yml` file that contains the project's configuration.
-The `serverless.yml` file is where you configure your project. It includes information such as the service name, the provider (AWS in this case), the functions, and example events that trigger those functions. If you prefer to set up your project using a different template, refer to the [Serverless templates documentation](https://www.serverless.com/framework/docs/providers/aws/cli-reference/create/) for more options.
+The `serverless.yml` file is where you configure your project.
+It includes information such as the service name, the provider (AWS in this case), the functions, and example events that trigger those functions.
+If you prefer to set up your project using a different template, refer to the [Serverless templates documentation](https://www.serverless.com/framework/docs/providers/aws/cli-reference/create/) for more options.
Now that we have created our Serverless project, we can proceed to configure it to use LocalStack.
## Configure Serverless project to use LocalStack
-To configure your Serverless project to use LocalStack, you need to install the `serverless-localstack` plugin. Before that, let's initialize the project and install some dependencies:
+To configure your Serverless project to use LocalStack, you need to install the `serverless-localstack` plugin.
+Before that, let's initialize the project and install some dependencies:
{{< command >}}
$ npm init -y
@@ -81,9 +99,11 @@ $ npm install -D serverless serverless-localstack serverless-deployment-bucket
In the above commands, we use `npm init -y` to initialize a new Node.js project with default settings and then install the necessary dependencies, including `serverless`, `serverless-localstack`, and `serverless-deployment-bucket`, as dev dependencies.
-The `serverless-localstack` plugin enables your Serverless project to redirect AWS API calls to LocalStack, while the `serverless-deployment-bucket` plugin creates a deployment bucket in LocalStack. This bucket is responsible for storing the deployment artifacts and ensuring that old deployment buckets are properly cleaned up after each deployment.
+The `serverless-localstack` plugin enables your Serverless project to redirect AWS API calls to LocalStack, while the `serverless-deployment-bucket` plugin creates a deployment bucket in LocalStack.
+This bucket is responsible for storing the deployment artifacts and ensuring that old deployment buckets are properly cleaned up after each deployment.
-We have a `serverless.yml` file in the directory to define our Serverless project's configuration, which includes information such as the service name, the provider (AWS in this case), the functions, and example events that trigger those functions. To set up the plugins we installed earlier, you need to add the following properties to your `serverless.yml` file:
+We have a `serverless.yml` file in the directory to define our Serverless project's configuration, which includes information such as the service name, the provider (AWS in this case), the functions, and example events that trigger those functions.
+To set up the plugins we installed earlier, you need to add the following properties to your `serverless.yml` file:
```yaml
service: serverless-elb
@@ -111,7 +131,9 @@ custom:
To configure Serverless to use the LocalStack plugin specifically for the `local` stage and ensure that your Serverless project only deploys to LocalStack instead of the real AWS Cloud, you need to set the `--stage` flag to `local` when using the `serverless deploy` command.
-Configure a `deploy` script in your `package.json` file to simplify the deployment process. It lets you run the `serverless deploy` command directly over your local infrastructure. Update your `package.json` file to include the following:
+Configure a `deploy` script in your `package.json` file to simplify the deployment process.
+It lets you run the `serverless deploy` command directly over your local infrastructure.
+Update your `package.json` file to include the following:
```json
{
@@ -143,7 +165,8 @@ This will execute the `serverless deploy --stage local` command, deploying your
## Create Lambda functions & ELB Application Load Balancers
-Now, let's create two Lambda functions named `hello1` and `hello2` that will run on the Node.js 12.x runtime. Open the `handler.js` file and replace the existing code with the following:
+Now, let's create two Lambda functions named `hello1` and `hello2` that will run on the Node.js 12.x runtime.
+Open the `handler.js` file and replace the existing code with the following:
```js
'use strict';
@@ -175,7 +198,11 @@ module.exports.hello2 = async (event) => {
};
```
-We have defined the `hello1` and `hello2` Lambda functions in the updated code. Each function receives an event parameter and logs it to the console. The function then returns a response with a status code of 200 and a plain text body containing the respective `"Hello"` message. It's important to note that the `isBase64Encoded` property is not required for plain text responses. It is typically used when you need to include binary content in the response body and want to indicate that the content is Base64 encoded.
+We have defined the `hello1` and `hello2` Lambda functions in the updated code.
+Each function receives an event parameter and logs it to the console.
+The function then returns a response with a status code of 200 and a plain text body containing the respective `"Hello"` message.
+It's important to note that the `isBase64Encoded` property is not required for plain text responses.
+It is typically used when you need to include binary content in the response body and want to indicate that the content is Base64 encoded.
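As an illustration of when you would set `isBase64Encoded`, here is a minimal, hypothetical handler that returns binary content through an ALB target; the function name, content type, and byte values are assumptions made for this sketch and are not part of the tutorial's code:

```js
'use strict';

// Hypothetical handler returning binary data (e.g. an image) behind an ALB target.
module.exports.helloBinary = async (event) => {
  // Stand-in for real binary content; any Buffer works here.
  const bytes = Buffer.from([0x89, 0x50, 0x4e, 0x47]);
  return {
    statusCode: 200,
    statusDescription: '200 OK',
    headers: { 'Content-Type': 'application/octet-stream' },
    isBase64Encoded: true, // tells the ALB the body below is Base64 and must be decoded
    body: bytes.toString('base64'),
  };
};
```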
Let us now configure the `serverless.yml` file to create an Application Load Balancer (ALB) and attach the Lambda functions to it.
@@ -216,9 +243,13 @@ custom:
- local
```
-In the above configuration, we specify the service name (`serverless-elb` in this case) and set the provider to AWS with the Node.js 12.x runtime. We include the necessary plugins, `serverless-localstack` and `serverless-deployment-bucket`, for LocalStack support and deployment bucket management. Next, we define the `hello1` and `hello2` functions with their respective handlers and event triggers. In this example, both functions are triggered by HTTP GET requests to the `/hello1` and `/hello2` paths.
+In the above configuration, we specify the service name (`serverless-elb` in this case) and set the provider to AWS with the Node.js 12.x runtime.
+We include the necessary plugins, `serverless-localstack` and `serverless-deployment-bucket`, for LocalStack support and deployment bucket management.
+Next, we define the `hello1` and `hello2` functions with their respective handlers and event triggers.
+In this example, both functions are triggered by HTTP GET requests to the `/hello1` and `/hello2` paths.
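For readers who skipped the configuration above, an ALB-triggered function definition in `serverless.yml` generally takes a shape like the following; the listener reference, priority value, and conditions shown here are illustrative placeholders rather than the tutorial's exact configuration:

```yaml
functions:
  hello1:
    handler: handler.hello1
    events:
      - alb:
          listenerArn:
            Ref: HTTPListener   # assumed logical ID of the listener resource
          priority: 1
          conditions:
            path: /hello1
            method:
              - GET
```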
-Lastly, let's create a VPC, a subnet, an Application Load Balancer, and an HTTP listener on the load balancer that redirects traffic to the target group. To do this, add the following resources to your `serverless.yml` file:
+Lastly, let's create a VPC, a subnet, an Application Load Balancer, and an HTTP listener on the load balancer that redirects traffic to the target group.
+To do this, add the following resources to your `serverless.yml` file:
```yaml
...
@@ -257,23 +288,28 @@ resources:
CidrBlock: 12.2.1.0/24
```
-With these resource definitions, you have completed the configuration of your Serverless project. Now you can create your local AWS infrastructure on LocalStack and deploy your Application Load Balancers with the two Lambda functions as targets.
+With these resource definitions, you have completed the configuration of your Serverless project.
+Now you can create your local AWS infrastructure on LocalStack and deploy your Application Load Balancers with the two Lambda functions as targets.
## Creating the infrastructure on LocalStack
-Now that we have completed the initial setup let's run LocalStack's AWS emulation on our local machine. Start LocalStack by running the following command:
+Now that we have completed the initial setup, let's run LocalStack's AWS emulation on our local machine.
+Start LocalStack by running the following command:
{{< command >}}
$ LOCALSTACK_AUTH_TOKEN= localstack start -d
{{< / command >}}
-This command launches LocalStack in the background, enabling you to use the AWS services locally. Now, let's deploy our Serverless project and verify the resources created in LocalStack. Run the following command:
+This command launches LocalStack in the background, enabling you to use the AWS services locally.
+Now, let's deploy our Serverless project and verify the resources created in LocalStack.
+Run the following command:
{{< command >}}
$ npm run deploy
{{< / command >}}
-This command deploys your Serverless project using the "local" stage. The output will resemble the following:
+This command deploys your Serverless project using the "local" stage.
+The output will resemble the following:
```bash
> serverless-elb@1.0.0 deploy
@@ -293,7 +329,9 @@ functions:
hello2: test-elb-load-balancing-local-hello2 (157 kB)
```
-This output confirms the successful deployment of your Serverless service to the `local` stage in LocalStack. It also displays information about the deployed Lambda functions (`hello1` and `hello2`). You can run the following command to verify that the functions and the load balancers have been deployed:
+This output confirms the successful deployment of your Serverless service to the `local` stage in LocalStack.
+It also displays information about the deployed Lambda functions (`hello1` and `hello2`).
+You can run the following command to verify that the functions and the load balancers have been deployed:
{{< command >}}
$ awslocal lambda list-functions
@@ -334,13 +372,13 @@ $ awslocal elbv2 describe-load-balancers
}
{{< / command >}}
-
The ALB endpoints for the two Lambda functions, hello1 and hello2, are accessible at the following URLs:
- [`http://lb-test-1.elb.localhost.localstack.cloud:4566/hello1`](http://lb-test-1.elb.localhost.localstack.cloud:4566/hello1)
- [`http://lb-test-1.elb.localhost.localstack.cloud:4566/hello2`](http://lb-test-1.elb.localhost.localstack.cloud:4566/hello2)
-To test these endpoints, you can use the curl command along with the jq tool for better formatting. Run the following commands:
+To test these endpoints, you can use the `curl` command along with the `jq` tool for better formatting.
+Run the following commands:
{{< command >}}
$ curl http://lb-test-1.elb.localhost.localstack.cloud:4566/hello1 | jq
@@ -349,10 +387,15 @@ $ curl http://lb-test-1.elb.localhost.localstack.cloud:4566/hello2 | jq
"Hello 2"
{{< / command >}}
-Both commands send an HTTP GET request to the endpoints and uses `jq` to format the response. The expected outputs are `Hello 1` & `Hello 2`, representing the Lambda functions' response.
+Both commands send an HTTP GET request to the endpoints and use `jq` to format the response.
+The expected outputs are `Hello 1` & `Hello 2`, representing the Lambda functions' responses.
## Conclusion
-In this tutorial, we have learned how to create an Application Load Balancer (ALB) with two Lambda functions as targets using LocalStack. We have also explored creating, configuring, and deploying a Serverless project with LocalStack. This enables developers to develop and test Cloud and Serverless applications locally conveniently.
+In this tutorial, we have learned how to create an Application Load Balancer (ALB) with two Lambda functions as targets using LocalStack.
+We have also explored creating, configuring, and deploying a Serverless project with LocalStack.
+This enables developers to conveniently develop and test Cloud and Serverless applications locally.
-LocalStack offers integrations with various popular tools such as Terraform, Pulumi, Serverless Application Model (SAM), and more. For more information about LocalStack integrations, you can refer to our [Integration documentation]({{< ref "user-guide/integrations">}}). To further explore and experiment with the concepts covered in this tutorial, you can access the code and resources on our [LocalStack Pro samples over GitHub](https://github.com/localstack/localstack-pro-samples/tree/master/elb-load-balancing) along with a `Makefile` for step-by-step execution.
+LocalStack offers integrations with various popular tools such as Terraform, Pulumi, Serverless Application Model (SAM), and more.
+For more information about LocalStack integrations, you can refer to our [Integration documentation]({{< ref "user-guide/integrations">}}).
+To further explore and experiment with the concepts covered in this tutorial, you can access the code and resources in our [LocalStack Pro samples on GitHub](https://github.com/localstack/localstack-pro-samples/tree/master/elb-load-balancing) along with a `Makefile` for step-by-step execution.
diff --git a/content/en/tutorials/ephemeral-application-previews/index.md b/content/en/tutorials/ephemeral-application-previews/index.md
index 5765a410e7..f3c582d924 100644
--- a/content/en/tutorials/ephemeral-application-previews/index.md
+++ b/content/en/tutorials/ephemeral-application-previews/index.md
@@ -21,11 +21,17 @@ leadimage: "ephemeral-application-previews-banner.png"
## Introduction
-LocalStack's core cloud emulator allows you set up your cloud infrastructure on your local machine. You can access databases, queues, and other managed services without needing to connect to a remote cloud provider. This speeds up your Software Development Life Cycle (SDLC) by making development and testing more efficient. Despite this, you still need a staging environment to do final acceptance tests before deploying your application to production.
+LocalStack's core cloud emulator allows you to set up your cloud infrastructure on your local machine.
+You can access databases, queues, and other managed services without needing to connect to a remote cloud provider.
+This speeds up your Software Development Life Cycle (SDLC) by making development and testing more efficient.
+Despite this, you still need a staging environment to do final acceptance tests before deploying your application to production.
-In many cases, staging environments are costly and deploying changes to them takes a lot of time. Also, teams can only use one staging environment at a time, which makes it difficult to test changes quickly.
+In many cases, staging environments are costly and deploying changes to them takes a lot of time.
+Also, teams can only use one staging environment at a time, which makes it difficult to test changes quickly.
-With LocalStack's Ephemeral Instances, you can create short-lived, self-contained deployments of LocalStack in the cloud. These Ephemeral Instances let you deploy your application on a remote LocalStack container, creating an Application Preview. This allows you to run end-to-end tests, preview features, and collaborate within your team or across teams asynchronously.
+With LocalStack's Ephemeral Instances, you can create short-lived, self-contained deployments of LocalStack in the cloud.
+These Ephemeral Instances let you deploy your application on a remote LocalStack container, creating an Application Preview.
+This allows you to run end-to-end tests, preview features, and collaborate within your team or across teams asynchronously.
This tutorial will show you how to use LocalStack's Ephemeral Instance feature to generate an Application Preview automatically for every new Pull Request (PR) using a GitHub Action workflow.
@@ -36,19 +42,21 @@ This tutorial will show you how to use LocalStack's Ephemeral Instance feature t
## Tutorial: Setting up Application Previews for your cloud application
-This tutorial uses a [public LocalStack sample](https://github.com/localstack-samples/sample-notes-app-dynamodb-lambda-apigateway) to showcase a simple note-taking application using the modular AWS SDK for JavaScript. The example application deploys several AWS resources including DynamoDB, Lambda, API Gateway, S3, Cognito, and CloudFront, functioning as follows:
+This tutorial uses a [public LocalStack sample](https://github.com/localstack-samples/sample-notes-app-dynamodb-lambda-apigateway) to showcase a simple note-taking application using the modular AWS SDK for JavaScript.
+The example application deploys several AWS resources including DynamoDB, Lambda, API Gateway, S3, Cognito, and CloudFront, functioning as follows:
-- Five Lambda functions handle basic CRUD functionality around note entities.
-- The frontend is built with React and served via Cloudfront and an S3 bucket.
-- DynamoDB is used as a persistence layer to store the notes.
-- API Gateway exposes the Lambda functions through HTTP APIs.
-- A Cognito User Pool is used for Authentication and Authorization.
+- Five Lambda functions handle basic CRUD functionality around note entities.
+- The frontend is built with React and served via Cloudfront and an S3 bucket.
+- DynamoDB is used as a persistence layer to store the notes.
+- API Gateway exposes the Lambda functions through HTTP APIs.
+- A Cognito User Pool is used for Authentication and Authorization.
This tutorial guides you through setting up a GitHub Action workflow to create an Application Preview of the sample application by deploying it on an ephemeral instance.
### Create the GitHub Action workflow
-GitHub Actions serves as a continuous integration and continuous delivery (CI/CD) platform, automating software development workflows directly from GitHub. It allows customization of actions and automation throughout the software development lifecycle.
+GitHub Actions serves as a continuous integration and continuous delivery (CI/CD) platform, automating software development workflows directly from GitHub.
+It allows customization of actions and automation throughout the software development lifecycle.
In this tutorial, you'll implement a workflow that:
@@ -56,13 +64,15 @@ In this tutorial, you'll implement a workflow that:
- Installs necessary dependencies.
- Deploys the application on an ephemeral LocalStack Instance using a GitHub Action Runner to generate a sharable application preview.
-To begin, fork the [LocalStack sample repository](https://github.com/localstack-samples/sample-notes-app-dynamodb-lambda-apigateway) on GitHub. If you're using GitHub's `gh` CLI, fork and clone the repository with this command:
+To begin, fork the [LocalStack sample repository](https://github.com/localstack-samples/sample-notes-app-dynamodb-lambda-apigateway) on GitHub.
+If you're using GitHub's `gh` CLI, fork and clone the repository with this command:
-```bash
+```bash
gh repo fork https://github.com/localstack-samples/sample-notes-app-dynamodb-lambda-apigateway
```
-After forking and cloning, navigate to the `.github/workflows` directory in your forked repository and open the `preview.yml` file. This file will contain the GitHub Action workflow configuration.
+After forking and cloning, navigate to the `.github/workflows` directory in your forked repository and open the `preview.yml` file.
+This file will contain the GitHub Action workflow configuration.
Now you're set to create your GitHub Action workflow, which will deploy your cloud application on an ephemeral instance using LocalStack.
@@ -70,13 +80,13 @@ Now you're set to create your GitHub Action workflow, which will deploy your clo
To achieve the goal, you can utilize a few prebuilt Actions:
-- [`actions/checkout`](https://github.com/actions/checkout): Checkout the application code with Git.
-- [`setup-localstack/ephemeral/startup`](https://github.com/localstack/setup-localstack): Configure the workflow to generate the application preview.
-- [`LocalStack/setup-localstack/finish`](https://github.com/localstack/setup-localstack): Add a comment to the PR, which includes a URL to the application preview.
+- [`actions/checkout`](https://github.com/actions/checkout): Checkout the application code with Git.
+- [`setup-localstack/ephemeral/startup`](https://github.com/localstack/setup-localstack): Configure the workflow to generate the application preview.
+- [`LocalStack/setup-localstack/finish`](https://github.com/localstack/setup-localstack): Add a comment to the PR, which includes a URL to the application preview.
You will find the following content in the `preview.yml` file that you opened earlier:
-```yaml
+```yaml
name: Create PR Preview
on:
@@ -88,7 +98,7 @@ This configuration ensures that every time a pull request is raised, the action
A new job named `preview` specifies the GitHub-hosted runner to execute our workflow steps, while checking out the code:
-```yaml
+```yaml
jobs:
preview:
permissions: write-all
@@ -103,13 +113,13 @@ jobs:
To deploy the application preview, you can utilize the `LocalStack/setup-localstack/ephemeral/startup` action, which requires the following parameters:
-- `github-token`: Automatically configured on the GitHub Action runner.
-- `localstack-api-key`: Configuration of a LocalStack CI key (`LOCALSTACK_API_KEY`) to activate licensed features in LocalStack.
-- `preview-cmd`: The set of commands necessary to deploy the application, including its infrastructure, on LocalStack.
+- `github-token`: Automatically configured on the GitHub Action runner.
+- `localstack-api-key`: Configuration of a LocalStack CI key (`LOCALSTACK_API_KEY`) to activate licensed features in LocalStack.
+- `preview-cmd`: The set of commands necessary to deploy the application, including its infrastructure, on LocalStack.
The following step sets up the dependencies and deploys the application preview on an ephemeral LocalStack instance:
-```yaml
+```yaml
- name: Deploy Preview
uses: LocalStack/setup-localstack/ephemeral/startup@v0.2.2
with:
@@ -131,14 +141,15 @@ The following step sets up the dependencies and deploys the application preview
In the provided workflow:
-- Dependencies such as `awslocal`, AWS CDK library, and the `cdklocal` wrapper are installed.
-- `Makefile` targets are employed to build the application, bootstrap the CDK stack, and deploy it.
-- Additionally, the frontend application is built and deployed on an S3 bucket served via a CloudFront distribution.
-- The application preview URL is provided by querying the CloudFront distribution ID using `awslocal`.
+- Dependencies such as `awslocal`, AWS CDK library, and the `cdklocal` wrapper are installed.
+- `Makefile` targets are employed to build the application, bootstrap the CDK stack, and deploy it.
+- Additionally, the frontend application is built and deployed on an S3 bucket served via a CloudFront distribution.
+- The application preview URL is provided by querying the CloudFront distribution ID using `awslocal`, as sketched below.
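One possible shape of that lookup (an assumption for illustration, not the exact commands in the sample's `Makefile`) is:

```bash
# Hypothetical lookup of the preview URL; assumes the stack creates a single CloudFront distribution.
DISTRIBUTION_ID=$(awslocal cloudfront list-distributions \
  --query 'DistributionList.Items[0].Id' --output text)
DOMAIN=$(awslocal cloudfront get-distribution --id "$DISTRIBUTION_ID" \
  --query 'Distribution.DomainName' --output text)
echo "Application preview available at: https://$DOMAIN"
```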
-To complete the process, the last step attaches the application preview URL to the Pull Request (PR) as a comment. This allows for quick access to the deployed URL for validating features or enhancements pushed to your application.
+To complete the process, the last step attaches the application preview URL to the Pull Request (PR) as a comment.
+This allows for quick access to the deployed URL for validating features or enhancements pushed to your application.
-```yaml
+```yaml
- name: Finalize PR comment
uses: LocalStack/setup-localstack/finish@v0.2.2
with:
@@ -149,24 +160,27 @@ To complete the process, the last step attaches the application preview URL to t
### Configure a CI key for GitHub Actions
-Before triggering your workflow, set up a continuous integration (CI) key for LocalStack. LocalStack requires a CI Key for usage in CI or similar automated environments to activate licensed features.
+Before triggering your workflow, set up a continuous integration (CI) key for LocalStack.
+LocalStack requires a CI Key for usage in CI or similar automated environments to activate licensed features.
Follow these steps to add your LocalStack CI key to your forked GitHub repository:
-- Navigate to the [LocalStack Web Application](https://app.localstack.cloud/) and access the [CI Keys](https://app.localstack.cloud/workspace/ci-keys) page.
-- Scroll down to the **Generate CI Key** card, where you can provide a name, and click **Generate CI Key** to receive a new key.
-- In your [GitHub repository secrets](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions), set the **Name** as `LOCALSTACK_API_KEY` and the **Secret** as the CI Key.
+- Navigate to the [LocalStack Web Application](https://app.localstack.cloud/) and access the [CI Keys](https://app.localstack.cloud/workspace/ci-keys) page.
+- Scroll down to the **Generate CI Key** card, where you can provide a name, and click **Generate CI Key** to receive a new key.
+- In your [GitHub repository secrets](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions), set the **Name** as `LOCALSTACK_API_KEY` and the **Secret** as the CI Key.
Now, you can commit and push your workflow to your forked GitHub repository.
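For example, assuming your fork's default branch is `main`, committing the workflow could look like this:

```bash
git add .github/workflows/preview.yml
git commit -m "Add PR preview workflow"
git push origin main
```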
### Run the GitHub Action workflow
-Now that the GitHub Action Workflow is set up, each pull request in your cloud application will undergo building, deployment, and packaging as an application preview running within an ephemeral instance. The workflow will automatically update the application preview whenever new commits are pushed to the pull request.
+Now that the GitHub Action Workflow is set up, each pull request in your cloud application will undergo building, deployment, and packaging as an application preview running within an ephemeral instance.
+The workflow will automatically update the application preview whenever new commits are pushed to the pull request.
-In case your deployment encounters issues and fails on LocalStack, you can troubleshoot by incorporating additional steps to generate a diagnostics report. After downloading, you can visualize logs and environment variables using a tool like [`diapretty`](https://github.com/silv-io/diapretty):
+In case your deployment encounters issues and fails on LocalStack, you can troubleshoot by incorporating additional steps to generate a diagnostics report.
+After downloading, you can visualize logs and environment variables using a tool like [`diapretty`](https://github.com/silv-io/diapretty):
```yaml
- name: Generate a Diagnostic Report
@@ -183,7 +197,8 @@ In case your deployment encounters issues and fails on LocalStack, you can troub
## Conclusion
-In this tutorial, you've learned how to utilize LocalStack's Ephemeral Instances to generate application previews for your cloud applications. You can explore additional use cases with Ephemeral Instances, including:
+In this tutorial, you've learned how to utilize LocalStack's Ephemeral Instances to generate application previews for your cloud applications.
+You can explore additional use cases with Ephemeral Instances, including:
- Injecting a pre-defined Cloud Pod into an ephemeral instance to rapidly spin up infrastructure.
- Running your automated end-to-end (E2E) test suite to conduct thorough testing before deploying to production.
diff --git a/content/en/tutorials/fault-injection-service-experiments/index.md b/content/en/tutorials/fault-injection-service-experiments/index.md
index 19b2541e7b..e53c89b207 100644
--- a/content/en/tutorials/fault-injection-service-experiments/index.md
+++ b/content/en/tutorials/fault-injection-service-experiments/index.md
@@ -28,14 +28,20 @@ leadimage: "fis-experiments.png"
## Introduction
-Fault Injection Simulator (FIS) is a service designed for conducting controlled chaos engineering tests on AWS infrastructure. Its purpose is to uncover vulnerabilities and improve system robustness. FIS offers a means to deliberately introduce failures and observe their impacts, helping developers to better equip their systems against actual outages. To read about the FIS service, refer to the dedicated [FIS documentation](https://docs.localstack.cloud/user-guide/aws/fis/).
+Fault Injection Simulator (FIS) is a service designed for conducting controlled chaos engineering tests on AWS infrastructure.
+Its purpose is to uncover vulnerabilities and improve system robustness.
+FIS offers a means to deliberately introduce failures and observe their impacts, helping developers to better equip their systems against actual outages.
+To read about the FIS service, refer to the dedicated [FIS documentation](https://docs.localstack.cloud/user-guide/aws/fis/).
## Getting started
This tutorial is designed for users new to the Fault Injection Simulator and assumes basic knowledge of the AWS CLI and our
-[`awslocal`](https://github.com/localstack/awscli-local) wrapper script. In this example, we will use the FIS to create controlled outages in a DynamoDB database. The aim is to test the software's behavior and error handling capabilities.
+[`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+In this example, we will use the FIS to create controlled outages in a DynamoDB database.
+The aim is to test the software's behavior and error handling capabilities.
-For this particular example, we'll be using a [sample application repository](https://github.com/localstack-samples/samples-chaos-engineering/tree/main/FIS-experiments). Clone the repository, and follow the instructions below to get started.
+For this particular example, we'll be using a [sample application repository](https://github.com/localstack-samples/samples-chaos-engineering/tree/main/FIS-experiments).
+Clone the repository, and follow the instructions below to get started.
### Prerequisites
@@ -45,7 +51,9 @@ The general prerequisites for this guide are:
- [AWS CLI](https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-install.html) with the [`awslocal` wrapper](https://github.com/localstack/awscli-local)
- [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/)
-Start LocalStack by using the `docker-compose.yml` file from the repository. Ensure to set your Auth Token as an environment variable during this process. The cloud resources will be automatically created upon the LocalStack start.
+Start LocalStack by using the `docker-compose.yml` file from the repository.
+Make sure to set your Auth Token as an environment variable during this process.
+The cloud resources will be created automatically when LocalStack starts.
{{< command >}}
$ LOCALSTACK_AUTH_TOKEN=
@@ -60,7 +68,9 @@ The following diagram shows the architecture that this application builds and de
### Creating an experiment template
-Before starting any FIS experiments, it's important to verify that our application is functioning correctly. Start by creating an entity and saving it. To do this, use `cURL` to call the API Gateway endpoint for the POST method:
+Before starting any FIS experiments, it's important to verify that our application is functioning correctly.
+Start by creating an entity and saving it.
+To do this, use `cURL` to call the API Gateway endpoint for the POST method:
{{< command >}}
$ curl --location 'http://12345.execute-api.localhost.localstack.cloud:4566/dev/productApi' \
@@ -69,14 +79,16 @@ $ curl --location 'http://12345.execute-api.localhost.localstack.cloud:4566/dev/
"id": "prod-2004",
"name": "Ultimate Gadget",
"price": "49.99",
- "description": "The Ultimate Gadget is the perfect tool for tech enthusiasts looking for the next level in gadgetry. Compact, powerful, and loaded with features."
+ "description": "The Ultimate Gadget is the perfect tool for tech enthusiasts looking for the next level in gadgetry.
+Compact, powerful, and loaded with features."
}'
Product added/updated successfully.
{{< /command >}}
-You can use the file named `experiment-ddb.json` that contains the FIS experiment configuration. This file will be used in the upcoming call to the [`CreateExperimentTemplate`](https://docs.aws.amazon.com/fis/latest/APIReference/API_CreateExperimentTemplate.html) API within the FIS resource.
+You can use the file named `experiment-ddb.json` that contains the FIS experiment configuration.
+This file will be used in the upcoming call to the [`CreateExperimentTemplate`](https://docs.aws.amazon.com/fis/latest/APIReference/API_CreateExperimentTemplate.html) API within the FIS resource.
```bash
$ cat experiment-ddb.json
@@ -101,7 +113,8 @@ $ cat experiment-ddb.json
}
```
-This template is designed to target all APIs of the DynamoDB resource. While it's possible to specify particular operations like `PutItem` or `GetItem`, the objective here is to entirely disconnect the database.
+This template is designed to target all APIs of the DynamoDB resource.
+While it's possible to specify particular operations like `PutItem` or `GetItem`, the objective here is to entirely disconnect the database.
As a result, this configuration will cause all API calls to fail with a 100% failure rate, each resulting in an HTTP 500 status code and a `DynamoDbException`.
@@ -132,12 +145,13 @@ $ awslocal fis create-experiment-template --cli-input-json file://experiment-ddb
"creationTime": 1699308754.415716,
"lastUpdateTime": 1699308754.415716,
"roleArn": "arn:aws:iam:000000000000:role/ExperimentRole"
- }
+ }
}
{{< /command >}}
-Take note of the `id` field in the response. This is the ID of the experiment template that will be used in the next step.
+Take note of the `id` field in the response.
+This is the ID of the experiment template that will be used in the next step.
### Starting the experiment
@@ -182,7 +196,9 @@ Replace the `` placeholder with the ID of the experiment
### Simulating an outage
-Once the experiment starts, the database becomes inaccessible. This means users cannot retrieve or add new products, resulting in the API Gateway returning an Internal Server Error. Downtime and data loss are critical issues to avoid in enterprise applications.
+Once the experiment starts, the database becomes inaccessible.
+This means users cannot retrieve or add new products, resulting in the API Gateway returning an Internal Server Error.
+Downtime and data loss are critical issues to avoid in enterprise applications.
Fortunately, encountering this issue early in the development phase allows developers to implement effective error handling and develop mechanisms to prevent data loss during a database outage.
@@ -192,7 +208,9 @@ It's important to note that this approach is not limited to DynamoDB; outages ca
{{< figure src="fis-experiment-2.png" width="800">}}
-A possible solution involves setting up an SNS topic, an SQS queue, and a Lambda function. The Lambda function will be responsible for retrieving queued items and attempting to re-execute the `PutItem` operation on the database. If DynamoDB remains unavailable, the item will be placed back in the queue for a later retry.
+A possible solution involves setting up an SNS topic, an SQS queue, and a Lambda function.
+The Lambda function will be responsible for retrieving queued items and attempting to re-execute the `PutItem` operation on the database.
+If DynamoDB remains unavailable, the item will be placed back in the queue for a later retry.
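As a rough illustration of that retry flow, such a Lambda could look like the following Node.js sketch; the table name, environment variable, and item fields are assumptions made for this example and are not the code used in the sample repository:

```js
// Hypothetical retry handler: consumes queued products and retries the PutItem call.
const { DynamoDBClient, PutItemCommand } = require('@aws-sdk/client-dynamodb');
const { SQSClient, SendMessageCommand } = require('@aws-sdk/client-sqs');

const dynamodb = new DynamoDBClient({});
const sqs = new SQSClient({});

exports.handler = async (event) => {
  // An SQS trigger delivers a batch of records; each body holds one product.
  for (const record of event.Records) {
    const product = JSON.parse(record.body);
    try {
      await dynamodb.send(new PutItemCommand({
        TableName: 'Products', // assumed table name
        Item: {
          id: { S: product.id },
          name: { S: product.name },
          price: { S: product.price },
          description: { S: product.description },
        },
      }));
    } catch (err) {
      // DynamoDB is still unavailable: put the item back on the queue for a later retry.
      await sqs.send(new SendMessageCommand({
        QueueUrl: process.env.RETRY_QUEUE_URL, // assumed environment variable
        MessageBody: record.body,
        DelaySeconds: 30,
      }));
    }
  }
};
```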
{{< command >}}
$ curl --location 'http://12345.execute-api.localhost.localstack.cloud:4566/dev/productApi' \
@@ -201,10 +219,12 @@ $ curl --location 'http://12345.execute-api.localhost.localstack.cloud:4566/dev/
"id": "prod-1003",
"name": "Super Widget",
"price": "29.99",
- "description": "A versatile widget that can be used for a variety of purposes. Durable, reliable, and affordable."
+ "description": "A versatile widget that can be used for a variety of purposes.
+Durable, reliable, and affordable."
}'
-
-A DynamoDB error occurred. Message sent to queue.
+
+A DynamoDB error occurred. Message sent to queue.
{{< /command >}}
@@ -263,7 +283,8 @@ $ awslocal fis stop-experiment --id
Replace the `` placeholder with the ID of the experiment that was created in the previous step.
-The experiment has been terminated, allowing the Product that initially failed to reach the database to finally be stored successfully. This can be confirmed by scanning the database.
+The experiment has been terminated, allowing the Product that initially failed to reach the database to finally be stored successfully.
+This can be confirmed by scanning the database.
{{< command >}}
$ awslocal dynamodb scan --table-name Products
@@ -275,7 +296,8 @@ $ awslocal dynamodb scan --table-name Products
"S": "Super Widget"
},
"description": {
- "S": "A versatile widget that can be used for a variety of purposes. Durable, reliable, and affordable."
+ "S": "A versatile widget that can be used for a variety of purposes.
+Durable, reliable, and affordable."
},
"id": {
"S": "prod-1003"
@@ -289,7 +311,8 @@ $ awslocal dynamodb scan --table-name Products
"S": "Ultimate Gadget"
},
"description": {
- "S": "The Ultimate Gadget is the perfect tool for tech enthusiasts looking for the next level in gadgetry. Compact, powerful, and loaded with features."
+ "S": "The Ultimate Gadget is the perfect tool for tech enthusiasts looking for the next level in gadgetry.
+Compact, powerful, and loaded with features."
},
"id": {
"S": "prod-2004"
@@ -329,6 +352,7 @@ The LocalStack FIS service can also introduce latency using the following experi
"roleArn": "arn:aws:iam:000000000000:role/ExperimentRole"
}
```
+
Save this template as `latency-experiment.json` and use it to create an experiment definition through the FIS service:
{{< command >}}
@@ -371,10 +395,10 @@ $ curl --location 'http://12345.execute-api.localhost.localstack.cloud:4566/dev/
"id": "prod-1088",
"name": "Super Widget",
"price": "29.99",
- "description": "A versatile widget that can be used for a variety of purposes. Durable, reliable, and affordable."
+ "description": "A versatile widget that can be used for a variety of purposes.
+Durable, reliable, and affordable."
}'
An error occurred (InternalError) when calling the GetResources operation (reached max retries: 4): Failing as per Fault Injection Simulator configuration
{{< /command >}}
-
diff --git a/content/en/tutorials/gitlab_ci_testcontainers/index.md b/content/en/tutorials/gitlab_ci_testcontainers/index.md
index a6c3cea14f..aeb69274bd 100644
--- a/content/en/tutorials/gitlab_ci_testcontainers/index.md
+++ b/content/en/tutorials/gitlab_ci_testcontainers/index.md
@@ -30,25 +30,29 @@ leadimage: "ls-gitlab-testcontainers.png"
Testcontainers is an open-source framework that provides lightweight APIs for bootstrapping local development and test dependencies
with real services wrapped in Docker containers.
Running tests with Testcontainers and LocalStack is crucial for AWS-powered applications because it ensures each test runs in a clean,
-isolated environment, providing consistency across all development and CI machines. LocalStack avoids AWS costs by emulating
+isolated environment, providing consistency across all development and CI machines.
+LocalStack avoids AWS costs by emulating
services locally, preventing exceeding AWS free tier limits, and eliminates reliance on potentially unstable external AWS services.
This allows for the simulation of difficult-to-reproduce scenarios, edge cases, and enables testing of the
-entire application stack in an integrated manner. Testing with LocalStack and Testcontainers also integrates
+entire application stack in an integrated manner.
+Testing with LocalStack and Testcontainers also integrates
seamlessly with CI/CD pipelines like GitLab CI or GitHub Actions, allowing developers to run automated tests without requiring AWS credentials or services.
## Prerequisites
For this tutorial, you will need:
-- [LocalStack Pro](https://docs.localstack.cloud/getting-started/auth-token/) to emulate the AWS services. If you don't have a subscription yet, you can just get a trial license for free.
+- [LocalStack Pro](https://docs.localstack.cloud/getting-started/auth-token/) to emulate the AWS services.
+ If you don't have a subscription yet, you can just get a trial license for free.
- [Docker](https://docker.io/)
- [A GitLab account](https://gitlab.com/)
## GitLab overview
GitLab is striving to be a complete tool for DevOps practices, offering not just source code management and continuous integration, but also features for
-monitoring, security, planning, deploying and more. By having your code and CI on the same platform, workflows are simplified and collaboration is enhanced.
-While Jenkins is still a very prominent CI/CD tool in the industry, it is up to the user to figure out where to host it and focuses
+monitoring, security, planning, deploying and more.
+By having your code and CI on the same platform, workflows are simplified and collaboration is enhanced.
+While Jenkins is still a very prominent CI/CD tool in the industry, it is up to the user to figure out where to host it, and it focuses
solely on CI/CD features.
## GitLab architecture
@@ -57,11 +61,13 @@ solely on CI/CD features.
]
-As users, we only interact directly with a GitLab instance which is responsible for hosting the application code and all the needed configurations, including the
-ones for pipelines. The instance is then in charge of running the pipelines and assigning runners to execute the defined jobs.
+As users, we only interact directly with a GitLab instance which is responsible for hosting the application code and all the needed configurations, including the ones for pipelines.
+The instance is then in charge of running the pipelines and assigning runners to execute the defined jobs.
-When running CI pipelines, you can choose to use [**GitLab-hosted runners**](https://docs.gitlab.com/ee/ci/runners/index.html), or provision and register
-[**self-managed runners**](https://docs.gitlab.com/runner/install/docker.html). This tutorial will cover both.
+When running CI pipelines, you can choose to use [**GitLab-hosted runners**](https://docs.gitlab.com/ee/ci/runners/index.html), or provision and register [**self-managed runners**](https://docs.gitlab.com/runner/install/docker.html).
+This tutorial will cover both.
### Runners hosted by GitLab
@@ -69,31 +75,34 @@ The GitLab documentation highlights some key aspects about the provided runners:
- They can run on Linux, Windows (beta) and MacOS (beta).
- They are enabled by default for all projects, with no configuration required.
-- Each job is executed by a newly provisioned VM.
+- Each job is executed by a newly provisioned VM.
- Job runs have `sudo` access without a password.
-- VMs are isolated between job executions.
+- VMs are isolated between job executions.
- Their storage is shared by the operating system, the image with pre-installed software,
and a copy of your cloned repository, meaning that the remaining disk space for jobs will be reduced.
-- The runners are configured to run in privileged mode to support Docker in Docker to build images natively or
+- The runners are configured to run in privileged mode to support Docker in Docker to build images natively or
run multiple containers within each job.
### Self-hosted runners
-Essentially, the architecture does not change, except the runners will be executing the jobs on a local machine. For developing locally,
+Essentially, the architecture does not change, except the runners will be executing the jobs on a local machine.
+For developing locally,
this approach is very convenient and there are several benefits:
-- **Customization**: you can configure the runners to suit your specific needs and environment.
-- **Performance**: improved performance and faster builds by leveraging your own hardware.
-- **Security**: enhanced control over your data and build environment, reducing exposure to external threats.
+- **Customization**: you can configure the runners to suit your specific needs and environment.
+- **Performance**: improved performance and faster builds by leveraging your own hardware.
+- **Security**: enhanced control over your data and build environment, reducing exposure to external threats.
- **Resource Management**: better management and allocation of resources to meet your project's demands.
- **Cost Efficiency**: depending on your alternatives, you can avoid usage fees associated with cloud-hosted runners.
-
## Application Overview
-Our sample backend application stores information about different types of coffee in files, with descriptions stored in an S3 bucket. It utilizes two
-Lambda functions to create/update and retrieve these descriptions, all accessible through an API Gateway. While we won't delve
-into the details of creating these AWS resources, we'll use AWS CLI to initialize them during container startup using init hooks. You can
+Our sample backend application stores information about different types of coffee in files, with descriptions stored in an S3 bucket.
+It utilizes two Lambda functions to create/update and retrieve these descriptions, all accessible through an API Gateway.
+While we won't delve into the details of creating these AWS resources, we'll use AWS CLI to initialize them during container startup using init hooks.
+You can
find the whole setup in the [init-resources.sh](https://gitlab.com/tinyg210/coffee-backend-localstack/-/blob/main/src/test/resources/init-resources.sh?ref_type=heads) file.
The following diagram visually explains the simple workflows that we want to check in our automated test in CI, using Testcontainers.
We'll need to make sure that the files are correctly created and named, that the validations and exceptions happen as expected.
@@ -107,7 +116,7 @@ We'll need to make sure that the files are correctly created and named, that the
To follow along, make changes to the code or run your own pipelines, you may fork the repository from the [coffee-backend-localstack sample](https://gitlab.com/tinyg210/coffee-backend-localstack).
-The application is developed, built and tested locally, the next step is to establish a quality gate in the pipeline, to make sure nothing breaks.
+The application is developed, built and tested locally; the next step is to establish a quality gate in the pipeline, to make sure nothing breaks.
The basis for the container used for testing looks like this:
@@ -140,7 +149,8 @@ Here's a breakdown of what's important:
- The image used for the test LocalStack instance is set to the latest Pro version (at the time of writing).
- In order to use the Pro image, a `LOCALSTACK_AUTH_TOKEN` variable needs to be set and read from the environment.
- There are two files copied to the container before startup: the JAR file for the Lambda functions and the script for provisioning
-all the necessary AWS resources. Both files are copied with read/write/execute permissions.
+all the necessary AWS resources.
+ Both files are copied with read/write/execute permissions.
- `DEBUG=1` enables a more verbose logging of LocalStack.
- `LAMBDA_DOCKER_FLAGS` sets specific Testcontainers labels to the Lambda containers, as a solution to be correctly managed by Ryuk.
Since the compute containers are created by LocalStack and not the Testcontainers framework, they do not receive the necessary tags.
@@ -149,11 +159,13 @@ Since the compute containers are created by LocalStack and not the Testcontainer
{{< alert title="Sidenote" >}}
-Ryuk is a component of Testcontainers that helps manage and clean up Docker resources created during testing. Specifically, Ryuk
+Ryuk is a component of Testcontainers that helps manage and clean up Docker resources created during testing.
+Specifically, Ryuk
ensures that any Docker containers, networks, volumes, and other resources are properly removed when they are no longer needed.
This prevents resource leaks and ensures that the testing environment remains clean and consistent between test runs.
-When Testcontainers starts, it typically launches a Ryuk container in the background. This container continuously monitors
+When Testcontainers starts, it typically launches a Ryuk container in the background.
+This container continuously monitors
the Docker resources created by Testcontainers and removes them once the test execution is complete or if they are no longer in use.
{{< /alert >}}
@@ -163,10 +175,14 @@ For this tutorial you don't really need to dive into the specifics of the tests,
### Setting up the pipeline configuration
The `.gitlab-ci.yml` file is a configuration file for defining GitLab CI/CD pipelines, which automate the process of building, testing,
-and deploying applications. It specifies stages (such as build, test, and deploy) and the jobs within each stage, detailing the commands
-to be executed. Jobs can define dependencies, artifacts, and environment variables. Pipelines are triggered by events like code pushes,
+and deploying applications.
+It specifies stages (such as build, test, and deploy) and the jobs within each stage, detailing the commands to be executed.
+Jobs can define dependencies, artifacts, and environment variables.
+Pipelines are triggered by events like code pushes,
merge requests, or schedules, and they are executed by runners.
-This file enables automated, consistent, and repeatable workflows for software development and deployment. In this example we will focus on
+This file enables automated, consistent, and repeatable workflows for software development and deployment.
+In this example we will focus on
just the building and testing parts.
Let's break down the `.gitlab-ci.yml` for this project:
@@ -223,17 +239,20 @@ test_job:
```
- `image: ubuntu:latest` - This specifies the base Docker image used for all jobs in the pipeline. `ubuntu:latest` is a popular and
-easy choice because it's a well-known, stable, and widely-supported Linux distribution. It ensures a consistent environment across
-all pipeline stages. Each job can define its own image (for example `maven` or `docker` images), but in this case a generic image with the
+easy choice because it's a well-known, stable, and widely-supported Linux distribution.
+ It ensures a consistent environment across all pipeline stages.
+ Each job can define its own image (for example `maven` or `docker` images), but in this case a generic image with the
necessary dependencies (curl, Java, maven, docker) installed covers the needs for both stages.
- `before_script` - these commands are run before any job script in the pipeline, on top of the Ubuntu image.
- The two stages are defined at the top: `build` and `test`.
-- `cache` - caches the Maven dependencies to speed up subsequent pipeline runs.
+- `cache` - caches the Maven dependencies to speed up subsequent pipeline runs.
- `.m2/repository` - this is the default location where Maven stores its local repository of dependencies.
- The `script` section - specifies the scripts that run for each job.
- `artifacts` - specifies the build artifacts (e.g., JAR files) to be preserved and passed to the next stages (the `target` folder).
- The build job runs only on the `main` branch.
-- `docker:26.1.2-dind` - specifies the service necessary to use Docker-in-Docker to run Docker commands inside the pipeline job. This is
+- `docker:26.1.2-dind` - specifies the service necessary to use Docker-in-Docker to run Docker commands inside the pipeline job.
+ This is
useful for integration testing with Docker containers.
- Variables:
- `DOCKER_HOST: tcp://docker:2375` - sets the Docker host to communicate with the Docker daemon inside the dind service.
@@ -243,26 +262,34 @@ useful for integration testing with Docker containers.
### Executors
-We mentioned in the beginning that each job runs in a newly provisioned VM. You can also notice that the pipeline configuration mentions
-a docker image, which is a template that contains instructions for creating a container. This might look confusing, but a runner is responsible
-for the execution of one job. This runner is installed on a machine and implements
-a certain [executor](https://docs.gitlab.com/runner/executors/). The executor determines the environment in which the job runs. By
-default, the GitLab-managed runners use a Docker Machine executor. Some other available executor options are: SSH, Shell, Parallels,
+We mentioned in the beginning that each job runs in a newly provisioned VM.
+You can also notice that the pipeline configuration mentions a docker image, which is a template that contains instructions for creating a container.
+This might look confusing, but a runner is responsible for the execution of one job.
+This runner is installed on a machine and implements a certain [executor](https://docs.gitlab.com/runner/executors/).
+The executor determines the environment in which the job runs.
+By default, the GitLab-managed runners use a Docker Machine executor.
+Some other available executor options are: SSH, Shell, Parallels,
VirtualBox, Docker, Docker Autoscaler, Kubernetes.
Sometimes visualizing the components of a pipeline can be tricky, so let's simplify this into a diagram:
{{< figure src="gitlab-ci-diagram.png" width="80%" height="auto">}}
-Basically, the `service` is an additional container that starts at the same time as the one running the `test_job`. The job container has
+Basically, the `service` is an additional container that starts at the same time as the one running the `test_job`.
+The job container has
a Docker client, and it communicates with the Docker daemon, running in the service container, in order to spin up more containers, in this
case for the Lambda functions.
-Don't forget to add your `LOCALSTACK_AUTH_TOKEN` as a masked variable in your CI/CD settings.
+Don't forget to add your `LOCALSTACK_AUTH_TOKEN` as a masked variable in your CI/CD settings.
```vue
Settings -> CI/CD -> Expand the Variables section -> Add variable
```
+
{{< figure src="ci-variable.png" width="80%" height="auto">}}
In the web interface, under the Jobs section, you can see the jobs that ran, and you can also filter them based on their status.
@@ -271,11 +298,13 @@ In the web interface, under the Jobs section, you can see the jobs that ran, and
## CI Pipeline Using Self-hosted Runners
-There are some cases when you want to run your pipelines locally and GitLab can provide that functionality.
-If you're new to the GitLab ecosystem, you need to be careful in configuring this setup, because it's easy to overlook an important field which
+There are some cases when you want to run your pipelines locally and GitLab can provide that functionality.
+If you're new to the GitLab ecosystem, you need to be careful in configuring this setup, because it's easy to overlook an important field which
can hinder your job runs.
-Let's get started by using the web interface. In your GitLab project, in the left-hand side panel, follow the path:
+Let's get started by using the web interface.
+In your GitLab project, in the left-hand side panel, follow the path:
+
```vue
Settings -> CI/CD -> Expand the Runners section -> Project runners -> New project runner
```
@@ -292,15 +321,22 @@ This dashboard may suffer changes and improvements over time, but the attributes
{{< figure src="create-runner-2.png" width="80%" height="auto">}}
-After selecting the Linux machine you're done with defining the runner. Now you need a place to execute this runner, which will be your local
-computer. Notice the token in the first step command and save it for later. Runner authentication tokens have the prefix `glrt-`.
+After selecting the Linux machine, you're done with defining the runner.
+Now you need a place to execute this runner, which will be your local computer.
+Notice the token in the first step command and save it for later.
+Runner authentication tokens have the prefix `glrt-`.
-For simplicity, we'll use a GitLab Runner Docker image. The GitLab Runner Docker images are designed as wrappers around the standard
-`gitlab-runner` command, like if GitLab Runner was installed directly on the host. You can read more about it in the [GitLab documentation](https://docs.gitlab.com/runner/install/docker.html).
+For simplicity, we'll use a GitLab Runner Docker image.
+The GitLab Runner Docker images are designed as wrappers around the standard `gitlab-runner` command, as if GitLab Runner were installed directly on the host.
+You can read more about it in the [GitLab documentation](https://docs.gitlab.com/runner/install/docker.html).
-Make sure you have Docker installed. To verify your setup you can run the `docker info` command.
+Make sure you have Docker installed.
+To verify your setup you can run the `docker info` command.
-Now, you need to create a volume on the disk that holds the configuration for the runner. You can have different volumes that can be
+Now, you need to create a volume on the disk that holds the configuration for the runner.
+You can have different volumes that can be
used for different runners.
{{< command >}}
@@ -352,14 +388,16 @@ Configuration loaded builds=0 max_builds=1
```
Let's look at the `config.toml` file and make the final adjustment before successfully running the pipeline.
-For running a job that does not require any additional containers to be created, you can stop here. However, since
-we need to run Docker commands in our CI/CD jobs, we must configure GitLab Runner to support those commands.
+For running a job that does not require any additional containers to be created, you can stop here.
+However, since we need to run Docker commands in our CI/CD jobs, we must configure GitLab Runner to support those commands.
This method requires `privileged` mode.
-Let's use the current running container to do that. Run the following:
+Let's use the current running container to do that.
+Run the following:
```commandline
-$ docker exec -it gitlab-runner bin/bash
+docker exec -it gitlab-runner bin/bash
```
Inside the container, let's run:
{{< command >}}
$ apt update && apt install nano
$ nano config.toml
{{< /command >}}
-The `privileged` field needs to be changed to `true`. Now the configurations should look like this:
+The `privileged` field needs to be changed to `true`.
+Now the configuration should look like this:
```toml
connection_max_age = "15m0s"
@@ -419,13 +458,16 @@ shutdown_timeout = 0
network_mtu = 0
```
-`[CTRL] + [X]` to save and exit the file. The runner is ready to use. You can now run your pipeline by pushing changes to your project
+Press `[CTRL] + [X]`, then `Y` and `Enter`, to save and exit the file.
+The runner is ready to use.
+You can now run your pipeline by pushing changes to your project
or from the dashboard, by going to `Build -> Pipelines` and using the `Run pipeline` button.
## Conclusion
In this tutorial, we've covered setting up a CI pipeline with GitLab runners and configuring a local Docker container to run the pipeline
-using a self-configured GitLab runner. Overall, the GitLab platform is an intricate system that can be used for highly complex projects to serve
-a multitude of purposes. With the steps learnt in this article, you can efficiently run end-to-end tests for your application using Testcontainers
+using a self-configured GitLab runner.
+Overall, the GitLab platform is an intricate system that can be used for highly complex projects to serve a multitude of purposes.
+With the steps learnt in this article, you can efficiently run end-to-end tests for your application using Testcontainers
and LocalStack.
-
diff --git a/content/en/tutorials/iam-policy-stream/index.md b/content/en/tutorials/iam-policy-stream/index.md
index 0c6c97dc83..6a7bba1f4c 100644
--- a/content/en/tutorials/iam-policy-stream/index.md
+++ b/content/en/tutorials/iam-policy-stream/index.md
@@ -20,32 +20,45 @@ platform:
## Introduction
-When you're developing cloud and serverless applications, you need to grant access to various AWS resources like S3 buckets and RDS databases. To handle this, you create IAM roles and assign permissions through policies. However, configuring these policies can be challenging, especially if you want to ensure minimal access of all principals to your resources.
+When you're developing cloud and serverless applications, you need to grant access to various AWS resources like S3 buckets and RDS databases.
+To handle this, you create IAM roles and assign permissions through policies.
+However, configuring these policies can be challenging, especially if you want to ensure minimal access of all principals to your resources.
-[LocalStack IAM Policy Stream](https://app.localstack.cloud/policy-stream) automates the generation of IAM policies for your AWS API requests on your local machine. This stream helps you identify the necessary permissions for your cloud application and allows you to detect logical errors, such as unexpected actions in your policies.
+[LocalStack IAM Policy Stream](https://app.localstack.cloud/policy-stream) automates the generation of IAM policies for your AWS API requests on your local machine.
+This stream helps you identify the necessary permissions for your cloud application and allows you to detect logical errors, such as unexpected actions in your policies.
-This tutorial will guide you through setting up IAM Policy Stream for a locally running AWS application. We'll use a basic example involving an S3 bucket, an SQS queue, and a bucket notification configuration. You'll generate the policy for the bucket notification configuration and insert it into the SQS queue.
+This tutorial will guide you through setting up IAM Policy Stream for a locally running AWS application.
+We'll use a basic example involving an S3 bucket, an SQS queue, and a bucket notification configuration.
+You'll generate the policy for the bucket notification configuration and insert it into the SQS queue.
## Why use IAM Policy Stream?
-LocalStack enables you to create and enforce local IAM roles and policies using the [`ENFORCE_IAM` feature](https://docs.localstack.cloud/user-guide/security-testing/iam-enforcement/). However, users often struggle to figure out the necessary permissions for different actions. It's important to find a balance, avoiding giving too many permissions while making sure the right ones are granted.
+LocalStack enables you to create and enforce local IAM roles and policies using the [`ENFORCE_IAM` feature](https://docs.localstack.cloud/user-guide/security-testing/iam-enforcement/).
+However, users often struggle to figure out the necessary permissions for different actions.
+It's important to find a balance, avoiding giving too many permissions while making sure the right ones are granted.
-This challenge becomes more complex when dealing with AWS services that make requests not directly visible to users. For instance, if an SNS topic sends a message to an SQS queue and the underlying call fails, there might be no clear error message, causing confusion, especially for those less familiar with the services.
+This challenge becomes more complex when dealing with AWS services that make requests not directly visible to users.
+For instance, if an SNS topic sends a message to an SQS queue and the underlying call fails, there might be no clear error message, causing confusion, especially for those less familiar with the services.
-IAM Policy Stream simplifies this by automatically generating the needed policies and showing them to users. This makes it easier to integrate with resources, roles, and users, streamlining the development process. Additionally, it serves as a useful learning tool, helping users understand the permissions linked to various AWS calls and improving the onboarding experience for newcomers to AWS.
+IAM Policy Stream simplifies this by automatically generating the needed policies and showing them to users.
+This makes it easier to integrate with resources, roles, and users, streamlining the development process.
+Additionally, it serves as a useful learning tool, helping users understand the permissions linked to various AWS calls and improving the onboarding experience for newcomers to AWS.
## Prerequisites
-- [LocalStack CLI](https://docs.localstack.cloud/getting-started/installation/#localstack-cli) with [`LOCALSTACK_AUTH_TOKEN`](https://docs.localstack.cloud/getting-started/auth-token/)
-- [Docker](https://docs.docker.com/get-docker/)
-- [Terraform](https://developer.hashicorp.com/terraform/install) & [`tflocal` wrapper](https://github.com/localstack/terraform-local)
-- [AWS](https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-install.html) CLI with [`awslocal` wrapper](https://github.com/localstack/awscli-local)
-- [LocalStack Web Application account](https://app.localstack.cloud/sign-up)
-- [`jq`](https://jqlang.github.io/jq/download/)
+- [LocalStack CLI](https://docs.localstack.cloud/getting-started/installation/#localstack-cli) with [`LOCALSTACK_AUTH_TOKEN`](https://docs.localstack.cloud/getting-started/auth-token/)
+- [Docker](https://docs.docker.com/get-docker/)
+- [Terraform](https://developer.hashicorp.com/terraform/install) & [`tflocal` wrapper](https://github.com/localstack/terraform-local)
+- [AWS](https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-install.html) CLI with [`awslocal` wrapper](https://github.com/localstack/awscli-local)
+- [LocalStack Web Application account](https://app.localstack.cloud/sign-up)
+- [`jq`](https://jqlang.github.io/jq/download/)
## Tutorial: Configure an S3 bucket for event notifications using SQS
-In this tutorial, you will configure a LocalStack S3 bucket to send event notifications to an SQS queue. You will then use IAM Policy Stream to generate the necessary IAM policy for the SQS queue. You will use Terraform to create the resources and the AWS CLI to interact with them. With LocalStack's IAM enforcement enabled, you can thoroughly test your policy and ensure that the development setup mirrors the production environment.
+In this tutorial, you will configure a LocalStack S3 bucket to send event notifications to an SQS queue.
+You will then use IAM Policy Stream to generate the necessary IAM policy for the SQS queue.
+You will use Terraform to create the resources and the AWS CLI to interact with them.
+With LocalStack's IAM enforcement enabled, you can thoroughly test your policy and ensure that the development setup mirrors the production environment.
### Start your LocalStack container
@@ -57,12 +70,13 @@ $ DEBUG=1 IAM_SOFT_MODE=1 localstack start
In the above command:
-- `DEBUG=1` turns on detailed logging to check API calls and IAM violations.
-- `IAM_SOFT_MODE=1` lets you test IAM enforcement by logging violations without stopping the API calls.
+- `DEBUG=1` turns on detailed logging to check API calls and IAM violations.
+- `IAM_SOFT_MODE=1` lets you test IAM enforcement by logging violations without stopping the API calls.
### Create the Terraform configuration
-Create a new file called `main.tf` for the Terraform setup of an S3 bucket and an SQS queue. Start by using the `aws_sqs_queue` resource to create an SQS queue named `s3-event-notification-queue`.
+Create a new file called `main.tf` for the Terraform setup of an S3 bucket and an SQS queue.
+Start by using the `aws_sqs_queue` resource to create an SQS queue named `s3-event-notification-queue`.
```hcl
resource "aws_sqs_queue" "queue" {
@@ -93,14 +107,18 @@ resource "aws_s3_bucket_notification" "bucket_notification" {
### Deploy the Terraform configuration
-You can use `tflocal` to deploy your Terraform configuration within the LocalStack environment. Run the following commands to initialize and apply the Terraform configuration:
+You can use `tflocal` to deploy your Terraform configuration within the LocalStack environment.
+Run the following commands to initialize and apply the Terraform configuration:
{{< command >}}
$ tflocal init
$ tflocal apply
{{< /command >}}
-You will be prompted to confirm the changes. Type `yes` to continue. Since LocalStack is used, no real AWS resources are created. LocalStack will emulate ephemeral development resources that will be removed automatically once you stop the LocalStack container.
+You will be prompted to confirm the changes.
+Type `yes` to continue.
+Since LocalStack is used, no real AWS resources are created.
+LocalStack will emulate ephemeral development resources that will be removed automatically once you stop the LocalStack container.
After applying the Terraform configuration, the output will appear similar to this:
@@ -118,12 +136,14 @@ Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
### Start the IAM Policy Stream
-Access the [LocalStack Web Application](https://app.localstack.cloud/) and go to the [IAM Policy Stream dashboard](https://app.localstack.cloud/policy-stream). This feature enables you to directly examine the generated policies, displaying the precise permissions required for each API call.
+Access the [LocalStack Web Application](https://app.localstack.cloud/) and go to the [IAM Policy Stream dashboard](https://app.localstack.cloud/policy-stream).
+This feature enables you to directly examine the generated policies, displaying the precise permissions required for each API call.
-You'll observe the Stream active status icon, indicating that making any local AWS API request will trigger the generation of an IAM Policy. Now, let's proceed to upload a file to the S3 bucket to trigger the event notification and generate the IAM policy.
+You'll observe the Stream active status icon, indicating that making any local AWS API request will trigger the generation of an IAM Policy.
+Now, let's proceed to upload a file to the S3 bucket to trigger the event notification and generate the IAM policy.
### Trigger the event notification
@@ -134,7 +154,8 @@ $ echo "Hello, LocalStack" > some-log-file.log
$ awslocal s3 cp some-log-file.log s3://s3-event-notification-bucket/
{{< /command >}}
-Uploading a file will activate an event notification, sending a message to the SQS queue. However, since the SQS queue lacks the necessary permissions, an IAM violation will appear in the [IAM Policy Stream dashboard](https://app.localstack.cloud/policy-stream).
+Uploading a file will activate an event notification, sending a message to the SQS queue.
+However, since the SQS queue lacks the necessary permissions, an IAM violation will appear in the [IAM Policy Stream dashboard](https://app.localstack.cloud/policy-stream).
@@ -151,12 +172,15 @@ You can also navigate to the LocalStack logs and observe the IAM violation messa
### Generate the IAM policy
-Go to the IAM Policy Stream dashboard and review the API calls such as `PutObject`, `SendMessage`, and `ReceiveMessage`. Notice that the `SendMessage` call was denied due to an IAM violation. Click on the **SQS.SendMessage** action to see the suggested IAM policy.
+Go to the IAM Policy Stream dashboard and review the API calls such as `PutObject`, `SendMessage`, and `ReceiveMessage`.
+Notice that the `SendMessage` call was denied due to an IAM violation.
+Click on the **SQS.SendMessage** action to see the suggested IAM policy.
-LocalStack automatically recommends a resource-based policy for the SQS queue `arn:aws:sqs:us-east-1:000000000000:s3-event-notification-queue`. Copy this policy and incorporate it into your Terraform configuration under the `aws_sqs_queue` resource by adding the `policy` attribute:
+LocalStack automatically recommends a resource-based policy for the SQS queue `arn:aws:sqs:us-east-1:000000000000:s3-event-notification-queue`.
+Copy this policy and incorporate it into your Terraform configuration under the `aws_sqs_queue` resource by adding the `policy` attribute:
```hcl
resource "aws_sqs_queue" "queue" {
@@ -194,7 +218,8 @@ Now, re-apply the Terraform configuration to update the SQS queue with the new p
$ tflocal apply
{{< /command >}}
-Next, trigger the event notification again by uploading a file to the S3 bucket. You can confirm that the S3 bucket is correctly set up for event notifications through the SQS queue by checking if the message is received in the SQS queue:
+Next, trigger the event notification again by uploading a file to the S3 bucket.
+You can confirm that the S3 bucket is correctly set up for event notifications through the SQS queue by checking if the message is received in the SQS queue:
{{< command >}}
$ awslocal sqs receive-message \
@@ -223,20 +248,29 @@ You can now check the IAM Policy Stream dashboard to confirm that there are no v
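+If you prefer to double-check from the command line, you can optionally inspect the policy that is now attached to the queue (the queue URL is looked up first, so this works regardless of your endpoint configuration):
+
+{{< command >}}
+$ awslocal sqs get-queue-attributes \
+    --queue-url $(awslocal sqs get-queue-url --queue-name s3-event-notification-queue --query QueueUrl --output text) \
+    --attribute-names Policy
+{{< /command >}}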
### Generate a comprehensive policy
-In scenarios where there are many AWS services, and every AWS API request generates a policy it might be cumbersome to analyze every policy. In such cases, you can generate one comprehensive policy for all your AWS resources together.
+In scenarios where many AWS services are involved and every AWS API request generates a policy, it can be cumbersome to analyze each policy individually.
+In such cases, you can generate one comprehensive policy for all your AWS resources together.
-You can navigate to the **Summary Policy** tab on the IAM Policy Stream dashboard. This concatenates the policy per principle which the policy should be attached to. For the example above, you would be able to see the **Identity Policy** for the root user which has all the actions and resources inside one single policy file for the operations we performed.
+You can navigate to the **Summary Policy** tab on the IAM Policy Stream dashboard.
+This concatenates the policies per principal to which they should be attached.
+For the example above, you can see the **Identity Policy** for the root user, which combines all the actions and resources for the operations we performed into a single policy document.
-On the other hand, you have the **Resource Policy** for the SQS queue, where you can see the permission necessary for the subscription. For larger AWS applications, you would be able to find multiple roles and multiple resource-based policies depending on your scenario.
+On the other hand, you have the **Resource Policy** for the SQS queue, where you can see the permission necessary for the subscription.
+For larger AWS applications, you would be able to find multiple roles and multiple resource-based policies depending on your scenario.
## Conclusion
-IAM Policy Stream streamlines your development process by minimizing the manual creation of policies and confirming the necessity of granted permissions. However, it is advisable to manually confirm that your policy aligns with your intended actions. Your code may unintentionally make requests, and LocalStack considers all requests made during policy generation as valid.
+IAM Policy Stream streamlines your development process by minimizing the manual creation of policies and confirming the necessity of granted permissions.
+However, it is advisable to manually confirm that your policy aligns with your intended actions.
+Your code may unintentionally make requests, and LocalStack considers all requests made during policy generation as valid.
-A practical scenario is automating tests, such as integration or end-to-end testing, against your application using LocalStack. This setup allows LocalStack to automatically generate policies with the required permissions. However, it's important to note that these generated policies may not cover all possible requests, as only the requests made during testing are included. You can then review and customize the policies to meet your needs, ensuring that overly permissive policies don't find their way into production environments.
+A practical scenario is automating tests, such as integration or end-to-end testing, against your application using LocalStack.
+This setup allows LocalStack to automatically generate policies with the required permissions.
+However, it's important to note that these generated policies may not cover all possible requests, as only the requests made during testing are included.
+You can then review and customize the policies to meet your needs, ensuring that overly permissive policies don't find their way into production environments.
diff --git a/content/en/tutorials/java-notification-app/index.md b/content/en/tutorials/java-notification-app/index.md
index aadace22ee..b8f0f854eb 100644
--- a/content/en/tutorials/java-notification-app/index.md
+++ b/content/en/tutorials/java-notification-app/index.md
@@ -29,11 +29,18 @@ leadimage: "java-notification-app-featured-image.png"
---
Java is a popular platform for cloud applications that use Amazon Web Services.
-With the AWS Java SDK, Java developers can build applications that work with various AWS services, like Simple Email Service (SES), Simple Queue Service (SQS), Simple Notification Service (SNS), and more. Simple Email Service (SES) is a cloud-based email-sending service that enables developers to integrate email functionality into their applications running on AWS. SES allows developers to work without an on-prem Simple Mail Transfer Protocol (SMTP) system and send bulk emails to many recipients.
+With the AWS Java SDK, Java developers can build applications that work with various AWS services, like Simple Email Service (SES), Simple Queue Service (SQS), Simple Notification Service (SNS), and more.
+Simple Email Service (SES) is a cloud-based email-sending service that enables developers to integrate email functionality into their applications running on AWS.
+SES allows developers to work without an on-prem Simple Mail Transfer Protocol (SMTP) system and send bulk emails to many recipients.
-[LocalStack Pro](https://app.localstack.cloud/) supports SES along with a simple user interface to inspect email accounts and sent messages. LocalStack also supports sending SES messages through an actual SMTP email server. We will use SQS and SNS to process the emails. We would further employ a CloudFormation stack to configure the infrastructure and configure SNS & SQS subscriptions. AWS Java SDK would be employed to receive these SQS messages and to send these messages through SES further.
+[LocalStack Pro](https://app.localstack.cloud/) supports SES along with a simple user interface to inspect email accounts and sent messages.
+LocalStack also supports sending SES messages through an actual SMTP email server.
+We will use SQS and SNS to process the emails.
+We will further employ a CloudFormation stack to provision the infrastructure and configure the SNS & SQS subscriptions.
+The AWS Java SDK will be used to receive these SQS messages and send them onward through SES.
-In this tutorial, we will build a Java Spring Boot application that uses locally emulated AWS infrastructure on LocalStack provisioned by CloudFormation, and that uses the Java AWS SDK to send SES, SQS, and SNS messages. We will further use [MailHog](https://github.com/mailhog/MailHog), a local SMTP server, to inspect the emails sent through SES via an intuitive user interface.
+In this tutorial, we will build a Java Spring Boot application that uses locally emulated AWS infrastructure on LocalStack provisioned by CloudFormation, and that uses the Java AWS SDK to send SES, SQS, and SNS messages.
+We will further use [MailHog](https://github.com/mailhog/MailHog), a local SMTP server, to inspect the emails sent through SES via an intuitive user interface.
## Prerequisites
@@ -48,7 +55,10 @@ For this tutorial, you will need:
## Project setup
-To get started, we will set up our Spring Boot project by implementing a single module named `example` that will house our application code. The module will contain the code required to set up our AWS configuration, notification service, and message application. We will have another directory called `resources` that will house our CloudFormation stack required to set up an SNS topic and an SQS queue. The project directory would look like this:
+To get started, we will set up our Spring Boot project by implementing a single module named `example` that will house our application code.
+The module will contain the code required to set up our AWS configuration, notification service, and message application.
+We will have another directory called `resources` that will house our CloudFormation stack required to set up an SNS topic and an SQS queue.
+The project directory would look like this:
```bash
├── pom.xml
@@ -145,11 +155,15 @@ In our root POM configuration, we will add the following dependencies:
```
-In the above POM file, we have added the AWS Java SDK dependencies for SES, SNS, SQS, and CloudFormation. We have also added the Spring Boot dependencies for our application. We can move on to the next step with the initial setup complete.
+In the above POM file, we have added the AWS Java SDK dependencies for SES, SNS, SQS, and CloudFormation.
+We have also added the Spring Boot dependencies for our application.
+We can move on to the next step with the initial setup complete.
## Setting up AWS configuration
-To get started, we will setup the AWS configuration, to be defined in `AwsConfiguration.java`, required for our Spring Boot application. We will create a configuration class to use the Spring Bean annotation to create two beans: `SesClient` and a `SqsClient`, to connect to the SES and SQS clients respectively. We will then create a bean to retrieve the `queueUrl` for the `email-notification-queue`:
+To get started, we will set up the AWS configuration, to be defined in `AwsConfiguration.java`, required for our Spring Boot application.
+We will create a configuration class that uses the Spring Bean annotation to create two beans, `SesClient` and `SqsClient`, to connect to the SES and SQS services respectively.
+We will then create a bean to retrieve the `queueUrl` for the `email-notification-queue`:
```java
package com.example;
@@ -203,7 +217,8 @@ public class AwsConfiguration {
}
```
-In the above code, we have used the `@Autowired` annotation to autowrire the dependencies that are required for the application (`SqsClient` `SesClient`, and `notificationQueueUrl` in this case). Now that we have got the URL of the queue created in the previous step, we can move on to the next step.
+In the above code, we have used the `@Autowired` annotation to autowire the dependencies required by the application (`SqsClient`, `SesClient`, and `notificationQueueUrl` in this case).
+Now that we have got the URL of the queue created in the previous step, we can move on to the next step.
{{< callout "note" >}}
You can also use the pre-defined clients from the [localstack-utils](https://mvnrepository.com/artifact/cloud.localstack/localstack-utils) Maven project, as an alternative to creating the AWS SDK clients with endpoint overrides manually.
@@ -211,7 +226,8 @@ You can also use the pre-defined clients from the [localstack-utils](https://mvn
## Creating a Notification Service
-To get started with creating a Notification Service, we would need to create a `Notification` class to define the structure of the notification that we would be sending to the SQS queue. We will create a `Notification` class in the `Notification.java` file:
+To get started with creating a Notification Service, we would need to create a `Notification` class to define the structure of the notification that we would be sending to the SQS queue.
+We will create a `Notification` class in the `Notification.java` file:
```java
package com.example;
@@ -247,7 +263,9 @@ public class Notification {
}
```
-In the above code, we have defined three instance variables: `address`, `subject`, and `body`. We have also defined the getters and setters for the instance variables. Let's now create a `@Component` class to listen to a queue, receive and transform the notifications into emails, and send the emails transactionally:
+In the above code, we have defined three instance variables: `address`, `subject`, and `body`.
+We have also defined the getters and setters for the instance variables.
+Let's now create a `@Component` class to listen to a queue, receive and transform the notifications into emails, and send the emails transactionally:
```java
package com.example;
@@ -480,7 +498,9 @@ You can now build the application using the following command:
$ mvn clean install
{{< / command >}}
-If the build is successful, you will notice a `BUILD SUCCESS` message. Now that we have the application ready, let us setup the infrastructure using CloudFormation. Create a new file in ``src/main/resources` called `email-infra.yml` and add the following content:
+If the build is successful, you will notice a `BUILD SUCCESS` message.
+Now that we have the application ready, let us set up the infrastructure using CloudFormation.
+Create a new file in `src/main/resources` called `email-infra.yml` and add the following content:
```yaml
AWSTemplateFormatVersion: 2010-09-09
@@ -502,11 +522,13 @@ Resources:
TopicArn: !GetAtt EmailTopic.TopicArn
```
-In the above code, we have created a queue called `email-notification-queue` and a topic called `email-notifications`. We have also created a subscription between the queue and the topic, allowing any message published to the topic to be sent to the queue.
+In the above code, we have created a queue called `email-notification-queue` and a topic called `email-notifications`.
+We have also created a subscription between the queue and the topic, allowing any message published to the topic to be sent to the queue.
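+As an optional sanity check before deploying, you can validate the template against LocalStack's CloudFormation API (assuming the file path used above):
+
+{{< command >}}
+$ awslocal cloudformation validate-template \
+    --template-body file://src/main/resources/email-infra.yml
+{{< /command >}}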
## Creating the infrastructure
-Now that the initial coding is done, we can give it a try. Let's start LocalStack using a custom `docker-compose` setup, which includes MailHog to capture the emails sent by SES:
+Now that the initial coding is done, we can give it a try.
+Let's start LocalStack using a custom `docker-compose` setup, which includes MailHog to capture the emails sent by SES:
```yaml
version: "3.8"
@@ -534,7 +556,8 @@ services:
- "8025:8025"
```
-The above `docker-compose` file will start LocalStack and pull the MailHog image to start the SMTP server (if it doesn't exist yet!) on port `8025`. You can start LocalStack using the following command:
+The above `docker-compose` file will start LocalStack and pull the MailHog image to start the SMTP server (if it doesn't exist yet!) on port `8025`.
+You can start LocalStack using the following command:
{{< command >}}
$ LOCALSTACK_AUTH_TOKEN= docker-compose up -d
@@ -548,7 +571,8 @@ $ awslocal cloudformation deploy \
--stack-name email-infra
{{< / command >}}
-With our infrastructure ready, we can now start the Spring Boot application. We will set dummy AWS access credentials as environment variables in the command:
+With our infrastructure ready, we can now start the Spring Boot application.
+We will set dummy AWS access credentials as environment variables in the command:
{{< command >}}
$ AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test mvn spring-boot:run
@@ -570,7 +594,8 @@ $ awslocal sns publish \
--message '{"subject":"hello", "address": "alice@example.com", "body": "hello world"}'
{{< / command >}}
-In the above command, we have published a message to the topic `email-notifications` with a generic message body. The output of the command should look like this:
+In the above command, we have published a message to the topic `email-notifications` with a generic message body.
+The output of the command should look like this:
```json
{
@@ -648,4 +673,5 @@ In this tutorial, we have demonstrated, how you can:
- Use CloudFormation to provision infrastructure for SNS & SQS subscriptions on LocalStack
- Use the AWS Java SDK and Spring Boot to build an application that sends SQS and SES messages.
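+As a quick check from the terminal, you can also query MailHog's HTTP API to list the captured messages (assuming the `docker-compose` setup above, which exposes MailHog on port `8025`):
+
+{{< command >}}
+$ curl -s http://localhost:8025/api/v2/messages
+{{< /command >}}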
-Using [LocalStack Pro](https://app.localstack.cloud), you can use our Web user interface to view the email messages sent by SES. The code for this tutorial can be found in our [LocalStack Pro samples over GitHub](https://github.com/localstack/localstack-pro-samples/tree/master/java-notification-app).
+Using [LocalStack Pro](https://app.localstack.cloud), you can use our Web user interface to view the email messages sent by SES.
+The code for this tutorial can be found in our [LocalStack Pro samples over GitHub](https://github.com/localstack/localstack-pro-samples/tree/master/java-notification-app).
diff --git a/content/en/tutorials/lambda-ecr-container-images/index.md b/content/en/tutorials/lambda-ecr-container-images/index.md
index ebb5e1c863..66a1f2da59 100644
--- a/content/en/tutorials/lambda-ecr-container-images/index.md
+++ b/content/en/tutorials/lambda-ecr-container-images/index.md
@@ -23,11 +23,19 @@ pro: true
leadimage: "lambda-ecr-container-images-featured-image.png"
---
-[Lambda](https://aws.amazon.com/lambda/) is a powerful serverless compute system that enables you to break down your application into smaller, independent functions. These functions can be deployed as individual units within the AWS ecosystem. Lambda offers seamless integration with various AWS services and supports multiple programming languages for different runtime environments. To deploy Lambda functions programmatically, you have two options: [uploading a ZIP file containing your code and dependencies](https://docs.aws.amazon.com/lambda/latest/dg/configuration-function-zip.html) or [packaging your code in a container image](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-images.html) and deploying it through Elastic Container Registry (ECR).
+[Lambda](https://aws.amazon.com/lambda/) is a powerful serverless compute system that enables you to break down your application into smaller, independent functions.
+These functions can be deployed as individual units within the AWS ecosystem.
+Lambda offers seamless integration with various AWS services and supports multiple programming languages for different runtime environments.
+To deploy Lambda functions programmatically, you have two options: [uploading a ZIP file containing your code and dependencies](https://docs.aws.amazon.com/lambda/latest/dg/configuration-function-zip.html) or [packaging your code in a container image](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-images.html) and deploying it through Elastic Container Registry (ECR).
-[ECR](https://aws.amazon.com/ecr/) is an AWS-managed registry that facilitates the storage and distribution of containerized software. With ECR, you can effectively manage your image lifecycles, versioning, and tagging, separate from your application. It seamlessly integrates with other AWS services like ECS, EKS, and Lambda, enabling you to deploy your container images effortlessly. Creating container images for your Lambda functions involves using Docker and implementing the Lambda Runtime API according to the Open Container Initiative (OCI) specifications.
+[ECR](https://aws.amazon.com/ecr/) is an AWS-managed registry that facilitates the storage and distribution of containerized software.
+With ECR, you can effectively manage your image lifecycles, versioning, and tagging, separate from your application.
+It seamlessly integrates with other AWS services like ECS, EKS, and Lambda, enabling you to deploy your container images effortlessly.
+Creating container images for your Lambda functions involves using Docker and implementing the Lambda Runtime API according to the Open Container Initiative (OCI) specifications.
-[LocalStack Pro](https://localstack.cloud) extends support for Lambda functions using container images through ECR. It enables you to deploy your Lambda functions locally using LocalStack. In this tutorial, we will explore creating a Lambda function using a container image and deploying it locally with the help of LocalStack.
+[LocalStack Pro](https://localstack.cloud) extends support for Lambda functions using container images through ECR.
+It enables you to deploy your Lambda functions locally using LocalStack.
+In this tutorial, we will explore creating a Lambda function using a container image and deploying it locally with the help of LocalStack.
## Prerequisites
@@ -41,14 +49,16 @@ Before diving into this tutorial, make sure you have the following prerequisites
## Creating a Lambda function
-To package and deploy a Lambda function as a container image, we'll create a Lambda function containing our code and a Dockerfile. Create a new directory for your lambda function and navigate to it:
+To package and deploy a Lambda function as a container image, we'll create a Lambda function containing our code and a Dockerfile.
+Create a new directory for your lambda function and navigate to it:
{{< command >}}
$ mkdir -p lambda-container-image
$ cd lambda-container-image
{{< / command >}}
-Initialize the directory by creating two files: `handler.py` and `Dockerfile`. Use the following commands to create the files:
+Initialize the directory by creating two files: `handler.py` and `Dockerfile`.
+Use the following commands to create the files:
{{< command >}}
$ touch handler.py Dockerfile
@@ -61,13 +71,18 @@ def handler(event, context):
print('Hello from LocalStack Lambda container image!')
```
-In the code above, the `handler` function is executed by the Lambda service whenever a trigger event occurs. It serves as the entry point for the Lambda function within the runtime environment and accepts `event` and `context` as parameters, providing information about the event and invocation properties, respectively.
+In the code above, the `handler` function is executed by the Lambda service whenever a trigger event occurs.
+It serves as the entry point for the Lambda function within the runtime environment and accepts `event` and `context` as parameters, providing information about the event and invocation properties, respectively.
-Following these steps, you have created the foundation for your Lambda function and defined its behaviour using Python code. In the following sections, we will package this code and its dependencies into a container image using the `Dockerfile`.
+Following these steps, you have created the foundation for your Lambda function and defined its behaviour using Python code.
+In the following sections, we will package this code and its dependencies into a container image using the `Dockerfile`.
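+Before containerizing the function, you can optionally give the handler a quick local smoke test (assuming a local Python 3 installation), which simply invokes it with an empty event:
+
+{{< command >}}
+$ python3 -c 'import handler; handler.handler({}, None)'
+Hello from LocalStack Lambda container image!
+{{< /command >}}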
## Building the image
-To package our Lambda function as a container image, we must create a Dockerfile containing the necessary instructions for building the image. Open the Dockerfile and add the following content. This Dockerfile uses the `python:3.8` base image provided by AWS for Lambda and copies the `handler.py` file into the image. It also specifies the function handler as `handler.handler` to ensure the Lambda runtime can locate it where the Lambda handler is available.
+To package our Lambda function as a container image, we must create a Dockerfile containing the necessary instructions for building the image.
+Open the Dockerfile and add the following content.
+This Dockerfile uses the `python:3.8` base image provided by AWS for Lambda and copies the `handler.py` file into the image.
+It also specifies the function handler as `handler.handler` so that the Lambda runtime can locate the entry point.
```Dockerfile
FROM public.ecr.aws/lambda/python:3.8
@@ -78,7 +93,9 @@ CMD [ "handler.handler" ]
```
{{< callout "note">}}
-If your Lambda function has additional dependencies, create a file named `requirements.txt` in the same directory as the Dockerfile. List the required libraries in this file. You can install these dependencies in the `Dockerfile` under the `${LAMBDA_TASK_ROOT}` directory.
+If your Lambda function has additional dependencies, create a file named `requirements.txt` in the same directory as the Dockerfile.
+List the required libraries in this file.
+You can install these dependencies in the `Dockerfile` under the `${LAMBDA_TASK_ROOT}` directory.
{{< /callout >}}
With the Dockerfile prepared, you can now build the container image using the following command, to check if everything works as intended:
@@ -87,17 +104,22 @@ With the Dockerfile prepared, you can now build the container image using the fo
$ docker build .
{{< / command >}}
-By executing these steps, you have defined the Dockerfile that instructs Docker on how to build the container image for your Lambda function. The resulting image will contain your function code and any specified dependencies.
+By executing these steps, you have defined the Dockerfile that instructs Docker on how to build the container image for your Lambda function.
+The resulting image will contain your function code and any specified dependencies.
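+Optionally, because the AWS-provided base image ships with the Lambda Runtime Interface Emulator, you can also run the image locally and invoke it over HTTP before pushing it anywhere; the image tag below is only an example:
+
+{{< command >}}
+$ docker build -t lambda-container-image-test .
+$ docker run --rm -d -p 9000:8080 lambda-container-image-test
+$ curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
+{{< /command >}}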
## Publishing the image to ECR
-Now that the initial setup is complete let's explore how to leverage LocalStack's AWS emulation by pushing our image to ECR and deploying the Lambda container image. Start LocalStack by executing the following command. Make sure to replace `` with your actual auth token:
+Now that the initial setup is complete, let's explore how to leverage LocalStack's AWS emulation by pushing our image to ECR and deploying the Lambda container image.
+Start LocalStack by executing the following command.
+Make sure to replace `` with your actual auth token:
{{< command >}}
$ LOCALSTACK_AUTH_TOKEN= DEBUG=1 localstack start -d
{{< / command >}}
-Once the LocalStack container is running, we can create a new ECR repository to store our container image. Use the `awslocal` CLI to achieve this. Run the following command to create the repository, replacing `localstack-lambda-container-image` with the desired name for your repository:
+Once the LocalStack container is running, we can create a new ECR repository to store our container image.
+Use the `awslocal` CLI to achieve this.
+Run the following command to create the repository, replacing `localstack-lambda-container-image` with the desired name for your repository:
{{< command >}}
$ awslocal ecr create-repository --repository-name localstack-lambda-container-image
@@ -120,17 +142,20 @@ $ awslocal ecr create-repository --repository-name localstack-lambda-container-i
{{< / command >}}
{{< callout "note">}}
-To further customize the ECR repository, you can pass additional flags to the `create-repository` command. For more details on the available options, refer to the [AWS CLI documentation](https://docs.aws.amazon.com/cli/latest/reference/ecr/create-repository.html).
+To further customize the ECR repository, you can pass additional flags to the `create-repository` command.
+For more details on the available options, refer to the [AWS CLI documentation](https://docs.aws.amazon.com/cli/latest/reference/ecr/create-repository.html).
{{< /callout >}}
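+For example, a hypothetical repository with immutable tags and scan-on-push enabled could be created like this:
+
+{{< command >}}
+$ awslocal ecr create-repository \
+    --repository-name my-scanned-repository \
+    --image-tag-mutability IMMUTABLE \
+    --image-scanning-configuration scanOnPush=true
+{{< /command >}}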
-Next, build the image and push it to the ECR repository. Execute the following commands:
+Next, build the image and push it to the ECR repository.
+Execute the following commands:
{{< command >}}
$ docker build -t localhost:4510/localstack-lambda-container-image .
$ docker push localhost:4510/localstack-lambda-container-image
{{< / command >}}
-In the above commands, we specify the `repositoryUri` as the image name to push the image to the ECR repository. After executing these commands, you can verify that the image is successfully pushed to the repository by using the `describe-images` command:
+In the above commands, we specify the `repositoryUri` as the image name to push the image to the ECR repository.
+After executing these commands, you can verify that the image is successfully pushed to the repository by using the `describe-images` command:
{{< command >}}
$ awslocal ecr describe-images --repository-name localstack-lambda-container-image
@@ -152,14 +177,17 @@ $ awslocal ecr describe-images --repository-name localstack-lambda-container-ima
}
{{< / command >}}
-By running this command, you can confirm that the image is now in the ECR repository. It ensures it is ready for deployment as a Lambda function using LocalStack's AWS emulation capabilities.
+By running this command, you can confirm that the image is now in the ECR repository.
+This confirms it is ready for deployment as a Lambda function using LocalStack's AWS emulation capabilities.
## Deploying the Lambda function
-To deploy the container image as a Lambda function, we will create a new Lambda function using the `create-function` command. Run the following command to create the function:
+To deploy the container image as a Lambda function, we will create a new Lambda function using the `create-function` command.
+Run the following command to create the function:
{{< callout "note">}}
-Before creating the lambda function, please double check under which architecture you have built your image. If your image is built as arm64, you need to specify the lambda architecture when deploying or set `LAMBDA_IGNORE_ARCHTIECTURE=1` when starting LocalStack.
+Before creating the lambda function, please double check under which architecture you have built your image.
+If your image is built as arm64, you need to specify the Lambda architecture when deploying or set `LAMBDA_IGNORE_ARCHITECTURE=1` when starting LocalStack.
More information can be found [in our documentation regarding ARM support.]({{< ref "arm64-support" >}})
{{< /callout >}}
@@ -203,13 +231,18 @@ $ awslocal lambda create-function \
}
{{< / command >}}
-The command provided includes several flags to create the Lambda function. Here's an explanation of each flag:
+The command provided includes several flags to create the Lambda function.
+Here's an explanation of each flag:
-- `ImageUri`: Specifies the image URI of the container image you pushed to the ECR repository (`localhost.localstack.cloud:4510/localstack-lambda-container-image` in this case. Use the return `repositoryUri` from the create-repository command).
+- `ImageUri`: Specifies the image URI of the container image you pushed to the ECR repository (`localhost.localstack.cloud:4510/localstack-lambda-container-image` in this case.
+  Use the returned `repositoryUri` from the `create-repository` command).
- `package-type`: Sets the package type to Image to indicate that the Lambda function will be created using a container image.
- `function-name`: Specifies the name of the Lambda function you want to create.
-- `runtime`: Defines the runtime environment for the Lambda function. In this case, it's specified as provided, indicating that the container image will provide the runtime.
-- `role`: Sets the IAM role ARN that the Lambda function should assume. In the example, a mock role ARN is used. For an actual role, please refer to the [IAM documentation]({{< ref "user-guide/aws/iam" >}}).
+- `runtime`: Defines the runtime environment for the Lambda function.
+  In this case, it's specified as `provided`, indicating that the container image will provide the runtime.
+- `role`: Sets the IAM role ARN that the Lambda function should assume.
+ In the example, a mock role ARN is used.
+ For an actual role, please refer to the [IAM documentation]({{< ref "user-guide/aws/iam" >}}).
To invoke the Lambda function, you can use the `invoke` command:
@@ -221,7 +254,9 @@ $ awslocal lambda invoke --function-name localstack-lambda-container-image /tmp/
}
{{< / command >}}
-The command above will execute the Lambda function locally within the LocalStack environment. The response will include the StatusCode and ExecutedVersion. You can find the logs of the Lambda invocation in the Lambda container output:
+The command above will execute the Lambda function locally within the LocalStack environment.
+The response will include the `StatusCode` and `ExecutedVersion`.
+You can find the logs of the Lambda invocation in the Lambda container output:
{{< command >}}
Hello from LocalStack Lambda container image!
@@ -229,6 +264,9 @@ Hello from LocalStack Lambda container image!
## Conclusion
-In conclusion, the Lambda container image support enables you to use Docker to package your custom code and dependencies for Lambda functions. With the help of LocalStack, you can seamlessly package, deploy, and invoke Lambda functions locally. It empowers you to develop, debug, and test your Lambda functions with a wide range of AWS services. For more advanced usage patterns, you can explore features like [Lambda Hot Reloading]({{< ref "hot-reloading" >}}) and [Lambda Debugging]({{< ref "debugging" >}}).
+In conclusion, the Lambda container image support enables you to use Docker to package your custom code and dependencies for Lambda functions.
+With the help of LocalStack, you can seamlessly package, deploy, and invoke Lambda functions locally.
+It empowers you to develop, debug, and test your Lambda functions with a wide range of AWS services.
+For more advanced usage patterns, you can explore features like [Lambda Hot Reloading]({{< ref "hot-reloading" >}}) and [Lambda Debugging]({{< ref "debugging" >}}).
To further explore and experiment with the concepts covered in this tutorial, you can access the code and accompanying `Makefile` on our [LocalStack Pro samples over GitHub](https://github.com/localstack/localstack-pro-samples/tree/master/lambda-container-image).
diff --git a/content/en/tutorials/replicate-aws-resources-localstack-extension/index.md b/content/en/tutorials/replicate-aws-resources-localstack-extension/index.md
index eb94c593a3..aedbae5efe 100644
--- a/content/en/tutorials/replicate-aws-resources-localstack-extension/index.md
+++ b/content/en/tutorials/replicate-aws-resources-localstack-extension/index.md
@@ -34,20 +34,20 @@ In this tutorial, you will learn how to install the AWS Replicator extension and
## Prerequisites
-- [LocalStack CLI](https://docs.localstack.cloud/getting-started/installation/#localstack-cli) with [`LOCALSTACK_AUTH_TOKEN`](https://docs.localstack.cloud/getting-started/auth-token/)
-- [Docker](https://docs.localstack.cloud/getting-started/auth-token/)
-- [AWS CLI](https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-install.html) with [`awslocal` wrapper](https://github.com/localstack/awscli-local)
-- [LocalStack Web Application account](https://app.localstack.cloud/sign-up)
-- [AWS Account](https://aws.amazon.com/) with an [`AWS_ACCESS_KEY_ID` & `AWS_SECRET_ACCESS_KEY`](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey)
+- [LocalStack CLI](https://docs.localstack.cloud/getting-started/installation/#localstack-cli) with [`LOCALSTACK_AUTH_TOKEN`](https://docs.localstack.cloud/getting-started/auth-token/)
+- [Docker](https://docs.localstack.cloud/getting-started/auth-token/)
+- [AWS CLI](https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-install.html) with [`awslocal` wrapper](https://github.com/localstack/awscli-local)
+- [LocalStack Web Application account](https://app.localstack.cloud/sign-up)
+- [AWS Account](https://aws.amazon.com/) with an [`AWS_ACCESS_KEY_ID` & `AWS_SECRET_ACCESS_KEY`](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey)
## Install the AWS Replicator extension
To install the AWS Replicator Extension, follow these steps:
-1. Launch your LocalStack container using the `localstack` CLI, ensuring that `LOCALSTACK_AUTH_TOKEN` is available in the environment.
-2. Visit the [Extensions library](https://app.localstack.cloud/extensions/library) page on the LocalStack Web Application.
+1. Launch your LocalStack container using the `localstack` CLI, ensuring that `LOCALSTACK_AUTH_TOKEN` is available in the environment.
+2. Visit the [Extensions library](https://app.localstack.cloud/extensions/library) page on the LocalStack Web Application.
-3. Scroll down to find the **AWS replicator** card, then click on the **Install on Instance** button.
+3. Scroll down to find the **AWS replicator** card, then click on the **Install on Instance** button.
Once the installation is complete, you will notice that your LocalStack container has restarted with the AWS Replicator extension successfully installed.
@@ -71,11 +71,12 @@ After verifying the successful installation, you can shut down the LocalStack co
In this tutorial, you will set up a basic example consisting of:
-- A Lambda function named `func1` that prints a simple statement when invoked.
-- An SQS queue named `test-queue` where messages are sent.
-- An event source mapping that triggers the Lambda function when a message is sent to the SQS queue.
+- A Lambda function named `func1` that prints a simple statement when invoked.
+- An SQS queue named `test-queue` where messages are sent.
+- An event source mapping that triggers the Lambda function when a message is sent to the SQS queue.
-The basic architecture for the scenario is outlined in the figure below. It shows the relationship between the resources deployed in the LocalStack container, the LocalStack AWS Proxy, and the remote AWS account.
+The basic architecture for the scenario is outlined in the figure below.
+It shows the relationship between the resources deployed in the LocalStack container, the LocalStack AWS Proxy, and the remote AWS account.
@@ -93,12 +94,12 @@ localstack start
In the above command:
-- The `EXTRA_CORS_ALLOWED_ORIGINS` variable allows the AWS Replicator extension's web interface to connect with the LocalStack container.
-- The `DEBUG` variable enables verbose logging allowing you to see the printed statements from the Lambda function.
+- The `EXTRA_CORS_ALLOWED_ORIGINS` variable allows the AWS Replicator extension's web interface to connect with the LocalStack container.
+- The `DEBUG` variable enables verbose logging, allowing you to see the printed statements from the Lambda function.
Next, create a file named `testlambda.py` and add the following Python code to it:
-```python
+```python
def handler(*args, **kwargs):
print("Debug output from Lambda function")
```
@@ -117,7 +118,7 @@ $ awslocal lambda create-function \
Once the Lambda function is successfully created, you will see output similar to this:
-```bash
+```bash
{
"FunctionName": "func1",
"FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:func1",
@@ -139,13 +140,13 @@ $ awslocal sqs create-queue --queue-name test-queue
The output will display the Queue URL:
-```bash
+```bash
{
"QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/test-queue"
}
```
-Additionally, you can create the remote SQS queue on the real AWS cloud to test invocation after starting the AWS Replicator extension.
+Additionally, you can create the remote SQS queue on the real AWS cloud to test invocation after starting the AWS Replicator extension.
Use the following command to set up the SQS queue on AWS:
@@ -153,9 +154,10 @@ Use the following command to set up the SQS queue on AWS:
$ aws sqs create-queue --queue-name test-queue
{{< /command >}}
-### Invoke the Lambda function
+### Invoke the Lambda function
-Before invoking, set up an event source mapping between the SQS queue and the Lambda function. Configure the queue for Lambda using the following command:
+Before invoking, set up an event source mapping between the SQS queue and the Lambda function.
+Configure the queue for Lambda using the following command:
{{< command >}}
$ awslocal lambda create-event-source-mapping \
@@ -166,7 +168,7 @@ $ awslocal lambda create-event-source-mapping \
The following output would be retrieved:
-```bash
+```bash
{
...
"MaximumBatchingWindowInSeconds": 0,
@@ -187,7 +189,7 @@ awslocal sqs send-message \
Upon successful execution, you will receive a message ID and MD5 hash of the message body.
-```bash
+```bash
{
"MD5OfMessageBody": "99914b932bd37a50b983c5e7c90ae93b",
"MessageId": "64e8297c-f0b2-4b68-a482-6cd3317f5096"
@@ -206,28 +208,32 @@ In the LocalStack logs, you will see confirmation of the Lambda function invocat
To run the AWS Replicator extension:
-- Access [`https://aws-replicator.localhost.localstack.cloud:4566`](https://aws-replicator.localhost.localstack.cloud:4566/) via your web browser.
+- Access [`https://aws-replicator.localhost.localstack.cloud:4566`](https://aws-replicator.localhost.localstack.cloud:4566/) via your web browser.
-- Provide your AWS Credentials: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and optionally `AWS_SESSION_TOKEN`.
-- Add a new YAML-based Proxy configuration to proxy requests for specific resources to AWS. For this scenario, configure it to proxy requests for the SQS queue created earlier.
- ```yaml
+- Provide your AWS Credentials: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and optionally `AWS_SESSION_TOKEN`.
+- Add a new YAML-based Proxy configuration to proxy requests for specific resources to AWS.
+ For this scenario, configure it to proxy requests for the SQS queue created earlier.
+
+ ```yaml
services:
sqs:
resources:
- '.*:test-queue'
```
-- Save the configuration to enable the AWS Replicator extension. Once enabled, you will see the proxy status as **enabled**.
+
+- Save the configuration to enable the AWS Replicator extension.
+ Once enabled, you will see the proxy status as **enabled**.
To invoke the local Lambda function with the remote SQS queue:
-- Navigate to your AWS Management Console and access **Simple Queue Service**.
-- Select the **test-queue** queue.
-- Send a message with a body (e.g., `Hello LocalStack`) by clicking **Send Message**.
+- Navigate to your AWS Management Console and access **Simple Queue Service**.
+- Select the **test-queue** queue.
+- Send a message with a body (e.g., `Hello LocalStack`) by clicking **Send Message**.
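+If you prefer the terminal over the console steps above, the same test message can be sent with the regular `aws` CLI against your real account (the queue URL is looked up first, since it contains your actual AWS account ID):
+
+{{< command >}}
+$ QUEUE_URL=$(aws sqs get-queue-url --queue-name test-queue --query QueueUrl --output text)
+$ aws sqs send-message --queue-url "$QUEUE_URL" --message-body "Hello LocalStack"
+{{< /command >}}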
You will observe the local Lambda function being invoked once again, with corresponding debug messages visible in the logs.
-```bash
+```bash
2024-03-26T07:45:16.524 DEBUG --- [db58fad602e5] l.s.l.i.version_manager : [func1-ed938bb0-e1ee-41fb-a844-db58fad602e5] START RequestId: ed938bb0-e1ee-41fb-a844-db58fad602e5 Version: $LATEST
2024-03-26T07:45:16.524 DEBUG --- [db58fad602e5] l.s.l.i.version_manager : [func1-ed938bb0-e1ee-41fb-a844-db58fad602e5] Debug output from Lambda function
2024-03-26T07:45:16.524 DEBUG --- [db58fad602e5] l.s.l.i.version_manager : [func1-ed938bb0-e1ee-41fb-a844-db58fad602e5] END RequestId: ed938bb0-e1ee-41fb-a844-db58fad602e5
@@ -239,12 +245,12 @@ Upon completion, you can click **Disable** on the AWS Replicator extension web i
Additionally, you can delete the remote SQS queue to avoid AWS billing for long-running resources.
To remove local resources, stop the LocalStack container to clear the local Lambda function and SQS queue.
-## Conclusion
+## Conclusion
In this tutorial, you've discovered how the AWS Replicator extension bridges the gap between local and remote cloud resources by mirroring resources from real AWS accounts into your LocalStack instance.
You can explore additional use-cases with the AWS Replicator extension, such as:
-- Developing a local Lambda function that interacts with a remote DynamoDB table
-- Executing a local Athena SQL query in LocalStack, accessing files in a real S3 bucket on AWS
-- Testing a local Terraform script with SSM parameters from a real AWS account
-- And many more!
+- Developing a local Lambda function that interacts with a remote DynamoDB table
+- Executing a local Athena SQL query in LocalStack, accessing files in a real S3 bucket on AWS
+- Testing a local Terraform script with SSM parameters from a real AWS account
+- And many more!
diff --git a/content/en/tutorials/reproducible-machine-learning-cloud-pods/index.md b/content/en/tutorials/reproducible-machine-learning-cloud-pods/index.md
index 6191f1ed0d..6ea7b97197 100644
--- a/content/en/tutorials/reproducible-machine-learning-cloud-pods/index.md
+++ b/content/en/tutorials/reproducible-machine-learning-cloud-pods/index.md
@@ -24,11 +24,16 @@ pro: true
leadimage: "reproducible-machine-learning-cloud-pods-featured-image.png"
---
-[LocalStack Cloud Pods]({{< ref "user-guide/state-management/cloud-pods" >}}) enable you to create persistent state snapshots of your LocalStack instance, which can then be versioned, shared, and restored. It allows next-generation state management and team collaboration for your local cloud development environment, which you can utilize to create persistent shareable cloud sandboxes. Cloud Pods works directly with the [LocalStack CLI]({{< ref "getting-started/installation#localstack-cli" >}}) to save, merge, and restore snapshots of your LocalStack state. You can always tear down your LocalStack instance and restore it from a snapshot at any point in time.
+[LocalStack Cloud Pods]({{< ref "user-guide/state-management/cloud-pods" >}}) enable you to create persistent state snapshots of your LocalStack instance, which can then be versioned, shared, and restored.
+It enables next-generation state management and team collaboration for your local cloud development environment, which you can use to create persistent, shareable cloud sandboxes.
+Cloud Pods works directly with the [LocalStack CLI]({{< ref "getting-started/installation#localstack-cli" >}}) to save, merge, and restore snapshots of your LocalStack state.
+You can always tear down your LocalStack instance and restore it from a snapshot at any point in time.
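+As a rough sketch of that workflow (the pod name below is only an example), saving and later restoring the state of a running instance looks like this:
+
+{{< command >}}
+$ localstack pod save my-sample-pod   # snapshot the state of the running instance
+$ localstack pod load my-sample-pod   # restore that snapshot later, or on another machine
+{{< /command >}}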
-Cloud Pods is supported in [LocalStack Team](https://app.localstack.cloud/). With LocalStack Team, you can utilize the Cloud Pods CLI that allows you to inspect your Cloud Pods, version them using tags, and push them to the LocalStack platform for storage and collaboration.
+Cloud Pods is supported in [LocalStack Team](https://app.localstack.cloud/).
+With LocalStack Team, you can utilize the Cloud Pods CLI that allows you to inspect your Cloud Pods, version them using tags, and push them to the LocalStack platform for storage and collaboration.
-In this tutorial, we will use [LocalStack Pro]({{< ref "getting-started/auth-token" >}}) to train a simple machine-learning model that recognizes handwritten digits on an image. We will rely on Cloud Pods to create a reproducible sample by using:
+In this tutorial, we will use [LocalStack Pro]({{< ref "getting-started/auth-token" >}}) to train a simple machine-learning model that recognizes handwritten digits on an image.
+We will rely on Cloud Pods to create a reproducible sample by using:
- S3 to create a bucket to host our training data
- Lambda to create a function to train and save the model to an S3 bucket
@@ -47,11 +52,16 @@ For this tutorial, you will need the following:
- [awslocal]({{< ref "aws-cli#localstack-aws-cli-awslocal" >}})
- [Optical recognition of handwritten digits dataset](https://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits)
-If you don't have a subscription to LocalStack Pro, you can request a trial license upon sign-up. For this tutorial to work, you must have the LocalStack CLI installed, which must be version 1.3 or higher. The Cloud Pods CLI is shipped with the LocalStack CLI, so you don't need to install it separately.
+If you don't have a subscription to LocalStack Pro, you can request a trial license upon sign-up.
+For this tutorial to work, you must have the LocalStack CLI installed, which must be version 1.3 or higher.
+The Cloud Pods CLI is shipped with the LocalStack CLI, so you don't need to install it separately.
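+You can quickly confirm the installed version from your terminal:
+
+{{< command >}}
+$ localstack --version
+{{< /command >}}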
## Training the machine learning model
-We will use the [Optical Recognition of Handwritten Digits Data Set](https://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits) to train a simple machine-learning model to recognise handwritten texts. It contains images of individual digits, represented as arrays of pixel values, along with their corresponding labels, indicating the correct digit that each image represents. You can download the dataset from UCI's Machine Learning Repository (linked above) or from our [samples repository](https://github.com/localstack/localstack-pro-samples/tree/master/reproducible-ml). To train our model, we will upload our dataset on a local S3 bucket and use a Lambda function to train the model.
+We will use the [Optical Recognition of Handwritten Digits Data Set](https://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits) to train a simple machine-learning model to recognize handwritten digits.
+It contains images of individual digits, represented as arrays of pixel values, along with their corresponding labels, indicating the correct digit that each image represents.
+You can download the dataset from UCI's Machine Learning Repository (linked above) or from our [samples repository](https://github.com/localstack/localstack-pro-samples/tree/master/reproducible-ml).
+To train our model, we will upload the dataset to a local S3 bucket and run the training inside a Lambda function.
Create a new file named `train.py` and import the required libraries:
@@ -66,7 +76,9 @@ from joblib import dump, load
import io
```
-We will now create a separate function named `load_digits` to load the dataset from the S3 bucket and return it as a `Bunch` object. The `Bunch` object is a container object that allows us to access the dataset's attributes as dictionary keys. It is similar to a Python dictionary but provides attribute-style access and can be used to store the dataset and its attributes.
+We will now create a separate function named `load_digits` to load the dataset from the S3 bucket and return it as a `Bunch` object.
+The `Bunch` object is a container object that allows us to access the dataset's attributes as dictionary keys.
+It is similar to a Python dictionary but provides attribute-style access and can be used to store the dataset and its attributes.
```python
def load_digits(*, n_class=10, return_X_y=False, as_frame=False):
@@ -106,7 +118,10 @@ def load_digits(*, n_class=10, return_X_y=False, as_frame=False):
images=images)
```
-The above code uses the `boto3` library to download the data file from an S3 bucket. The file is then loaded into a NumPy array using the `numpy.loadtxt` function, and the target values (i.e. the labels corresponding to each image) are extracted from the last column of the array. The images are then reshaped into 2-dimensional arrays, and the function has been configured to return only a subset of the available classes by filtering the target values. Finally, the function returns an object containing the data, target values, and metadata.
+The above code uses the `boto3` library to download the data file from an S3 bucket.
+The file is then loaded into a NumPy array using the `numpy.loadtxt` function, and the target values (i.e. the labels corresponding to each image) are extracted from the last column of the array.
+The images are then reshaped into 2-dimensional arrays, and the function has been configured to return only a subset of the available classes by filtering the target values.
+Finally, the function returns an object containing the data, target values, and metadata.
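As a quick illustration of the `Bunch` access styles described above (output values depend on the dataset, so none are guaranteed):
```python
# Illustrative only: both access styles work on the returned Bunch.
digits = load_digits()
print(digits.images.shape)     # attribute-style access to the image array
print(digits["target"][:10])   # dictionary-style access to the same data
```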
Let us now define a `handler` function that would be executed by the Lambda every time a trigger event occurs.
In this case, we would like to use the above function to load the dataset and train a model using the [Support Vector Machine (SVM)](https://scikit-learn.org/stable/modules/svm.html) algorithm.
@@ -143,13 +158,16 @@ def handler(event, context):
s3_client.put_object(Body=f, Bucket="pods-test", Key="test-set.npy")
```
-First, we loaded the images and flattened them into 1-dimensional arrays. Then, we created a training and a test set using the `train_test_split` function from the `sklearn.model_selection` module.
+First, we loaded the images and flattened them into 1-dimensional arrays.
+Then, we created a training and a test set using the `train_test_split` function from the `sklearn.model_selection` module.
-We trained an SVM classifier on the training set using the `fit` method. Finally, we uploaded the trained model, together with the test set, to an S3 bucket for later usage.
+We trained an SVM classifier on the training set using the `fit` method.
+Finally, we uploaded the trained model, together with the test set, to an S3 bucket for later usage.
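Putting those steps together, the core of the training handler looks roughly like the sketch below. Only the `pods-test` bucket and the `test-set.npy` key come from the snippet above; the model object key, the train/test split ratio, and the SVM hyperparameters are assumptions, and the tutorial's actual code may differ:
```python
import io
import boto3
import numpy as np
from joblib import dump
from sklearn import svm
from sklearn.model_selection import train_test_split

s3_client = boto3.client("s3")

def handler(event, context):
    digits = load_digits()  # the S3-backed loader defined earlier in train.py
    n_samples = len(digits.images)
    data = digits.images.reshape((n_samples, -1))  # flatten each image into one row

    X_train, X_test, y_train, y_test = train_test_split(
        data, digits.target, test_size=0.5, shuffle=False
    )

    clf = svm.SVC(gamma=0.001)  # hyperparameter chosen for illustration
    clf.fit(X_train, y_train)

    # Persist the fitted model and the test set for the inference function.
    model_buf = io.BytesIO()
    dump(clf, model_buf)
    model_buf.seek(0)
    s3_client.put_object(Body=model_buf, Bucket="pods-test", Key="model.joblib")

    test_buf = io.BytesIO()
    np.save(test_buf, X_test)
    test_buf.seek(0)
    s3_client.put_object(Body=test_buf, Bucket="pods-test", Key="test-set.npy")
```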
## Perform predictions with the model
-Now, we will create a new file called `infer.py` which will contain a second handler function. This function will be used to perform predictions on new data with the model we trained previously.
+Now, we will create a new file called `infer.py` which will contain a second handler function.
+This function will be used to perform predictions on new data with the model we trained previously.
```python
def handler(event, context):
@@ -167,17 +185,20 @@ def handler(event, context):
print("--> prediction result:", predicted)
```
-To perform inference on the test set, we will download both the trained SVN model and the test set that we previously uploaded to the S3 bucket. Using these resources, we will predict the values of the digits in the test set.
+To perform inference on the test set, we will download both the trained SVM model and the test set that we previously uploaded to the S3 bucket.
+Using these resources, we will predict the values of the digits in the test set.
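A condensed sketch of that inference flow, under the same assumption about the model's object key as in the training sketch above:
```python
import io
import boto3
import numpy as np
from joblib import load

s3_client = boto3.client("s3")

def handler(event, context):
    # Fetch the model and the held-out test set saved by the training function.
    model_obj = s3_client.get_object(Bucket="pods-test", Key="model.joblib")
    clf = load(io.BytesIO(model_obj["Body"].read()))

    test_obj = s3_client.get_object(Bucket="pods-test", Key="test-set.npy")
    X_test = np.load(io.BytesIO(test_obj["Body"].read()))

    predicted = clf.predict(X_test)
    print("--> prediction result:", predicted)
```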
## Deploying the Lambda functions
-Before creating our Lambda functions, let us start LocalStack to use emulated S3 and Lambda services to deploy and train our model. Let's start LocalStack:
+Before creating our Lambda functions, we need LocalStack running so that we can use the emulated S3 and Lambda services to deploy and train our model.
+Let's start LocalStack:
{{< command >}}
$ DEBUG=1 LOCALSTACK_AUTH_TOKEN= localstack start -d
{{< / command >}}
-We have specified `DEBUG=1` to get the printed LocalStack logs from our Lambda invocation in the console. We can now create an S3 bucket to upload our Lambda functions and the dataset:
+We have specified `DEBUG=1` so that the LocalStack logs from our Lambda invocations are printed to the console.
+We can now create an S3 bucket to upload our Lambda functions and the dataset:
{{< command >}}
$ zip lambda.zip train.py
@@ -187,7 +208,10 @@ $ awslocal s3 cp lambda.zip s3://reproducible-ml/lambda.zip
$ awslocal s3 cp digits.csv.gz s3://reproducible-ml/digits.csv.gz
{{< / command >}}
-In the above commands, we first create two zip files for our Lambda functions: lambda.zip and infer.zip. These zip files contain the code for training the machine learning model and do predictions with it, respectively. Next, we create an S3 bucket called `reproducible-ml` and upload the zip files and the dataset to it. Finally, we use the `awslocal` CLI to create the two Lambda functions
+In the above commands, we first create two zip files for our Lambda functions: `lambda.zip` and `infer.zip`.
+These zip files contain the code for training the machine learning model and for making predictions with it, respectively.
+Next, we create an S3 bucket called `reproducible-ml` and upload the zip files and the dataset to it.
+Finally, we use the `awslocal` CLI to create the two Lambda functions:
{{< command >}}
$ awslocal lambda create-function --function-name ml-train \
@@ -207,7 +231,9 @@ $ awslocal lambda create-function --function-name ml-predict \
--layers arn:aws:lambda:us-east-1:446751924810:layer:python-3-8-scikit-learn-0-23-1:2
{{< / command >}}
-For each function, we provide the function name, runtime (`python3.8`), handler function (`train.handler` and `infer.handler`, respectively), and the location of the `zip` files in the S3 bucket. We have also specified the `python-3-8-scikit-learn-0-23-1` layer to be used by the Lambda function. This layer includes the scikit-learn library and its dependencies.
+For each function, we provide the function name, runtime (`python3.8`), handler function (`train.handler` and `infer.handler`, respectively), and the location of the `zip` files in the S3 bucket.
+We have also specified the `python-3-8-scikit-learn-0-23-1` layer to be used by the Lambda function.
+This layer includes the scikit-learn library and its dependencies.
We can now invoke the first Lambda function using the `awslocal` CLI:
@@ -215,7 +241,8 @@ We can now invoke the first Lambda function using the `awslocal` CLI:
$ awslocal lambda invoke --function-name ml-train /tmp/test.tmp
{{< / command >}}
-The first Lambda function will train the model and upload it to the S3 bucket. Finally, we can invoke the second Lambda function to do predictions with the model.
+The first Lambda function will train the model and upload it to the S3 bucket.
+Finally, we can invoke the second Lambda function to make predictions with the model.
{{< command >}}
$ awslocal lambda invoke --function-name ml-predict /tmp/test.tmp
@@ -235,7 +262,8 @@ null
## Creating a Cloud Pod
-After deploying the Lambda functions, we can create a Cloud Pod to share our local infrastructure and instance state with other LocalStack users in the organization. To save the current state of our LocalStack instance, we can use the `save` command:
+After deploying the Lambda functions, we can create a Cloud Pod to share our local infrastructure and instance state with other LocalStack users in the organization.
+To save the current state of our LocalStack instance, we can use the `save` command:
{{< command >}}
$ localstack pod save reproducible-ml
@@ -244,13 +272,15 @@ Cloud Pod reproducible-ml successfully created
{{< / command >}}
{{< callout "note" >}}
-You can also export a Cloud Pod locally by specifying a file URI as an argument. To export on a local path, run the following command:
+You can also export a Cloud Pod locally by specifying a file URI as an argument.
+To export to a local path, run the following command:
{{< command >}}
$ localstack pod save file:///
{{< / command >}}
-The output of the above command will be a `` zip file in the specified directory. We can restore it at any time with the `load` command.
+The output of the above command will be a `` zip file in the specified directory.
+We can restore it at any time with the `load` command.
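For example, a hypothetical round trip to a local file could look like this (the path is an assumption):
{{< command >}}
$ localstack pod save file:///tmp/reproducible-ml.zip
$ localstack pod load file:///tmp/reproducible-ml.zip
{{< / command >}}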
{{< /callout >}}
To list the available Cloud Pods, you can use the `list` command:
@@ -272,34 +302,48 @@ You can also inspect the contents of a Cloud Pod using the `inspect` command:
$ localstack pod inspect reproducible-ml
{{< / command >}}
-While you save a Cloud Pod, it is automatically published on the LocalStack platform and can be shared with other users in your organization. While saving an already existing Cloud Pod, we would create a new version, which is eventually uploaded to the LocalStack platform.
+When you save a Cloud Pod, it is automatically published on the LocalStack platform and can be shared with other users in your organization.
+Saving an already existing Cloud Pod creates a new version, which is then uploaded to the LocalStack platform.
{{< callout "note" >}}
-You can optionally set the visibility of a Cloud Pod to `private` or `public` using the `--visibility` flag. By default, the visibility of a Cloud Pod is set to `private`. To set a Cloud Pod to `public`, you can use the following command:
+You can optionally set the visibility of a Cloud Pod to `private` or `public` using the `--visibility` flag.
+By default, the visibility of a Cloud Pod is set to `private`.
+To set a Cloud Pod to `public`, you can use the following command:
{{< command >}}
$ localstack pod save --name --visibility public
{{< / command >}}
The above command does not create a new version and requires a version already registered with the platform.
{{< /callout >}}
-You can also attach an optional message and a list of services to a Cloud Pod using the `--message` and `--services` flags. You can check all the Cloud Pods in your organization over the [LocalStack Web Application](https://app.localstack.cloud/pods). Now that we have created a Cloud Pod, we can ask one of our team members to start LocalStack and load the Cloud Pod using the `load` command.
+You can also attach an optional message and a list of services to a Cloud Pod using the `--message` and `--services` flags.
+You can check all the Cloud Pods in your organization in the [LocalStack Web Application](https://app.localstack.cloud/pods).
+Now that we have created a Cloud Pod, we can ask one of our team members to start LocalStack and load the Cloud Pod using the `load` command.
{{< command >}}
$ localstack pod load reproducible-ml
{{< / command >}}
-The `load` command will retrieve the content of our Cloud Pod named `reproducible-ml` from the LocalStack platform and inject it into our running LocalStack instance. Upon successfully loading the Cloud Pod, the Lambda function can be invoked again, and the log output should be the same as before.
+The `load` command will retrieve the content of our Cloud Pod named `reproducible-ml` from the LocalStack platform and inject it into our running LocalStack instance.
+Upon successfully loading the Cloud Pod, the Lambda function can be invoked again, and the log output should be the same as before.
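For example, the team member who loaded the pod can re-run the prediction function from the earlier step and expect the same result:
{{< command >}}
$ awslocal lambda invoke --function-name ml-predict /tmp/test.tmp
{{< / command >}}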
-LocalStack Cloud Pods also feature different merge strategies to merge the state of a Cloud Pod with the current LocalStack instance. You can use the `--merge` flag to specify the merge strategy. The available merge strategies are:
+LocalStack Cloud Pods also feature different merge strategies to merge the state of a Cloud Pod with the current LocalStack instance.
+You can use the `--merge` flag to specify the merge strategy.
+The available merge strategies are:
-- **Load with overwrite**: This is the default merge strategy. It will load the state of the Cloud Pod into the current LocalStack instance and overwrite the existing state.
+- **Load with overwrite**: This is the default merge strategy.
+ It will load the state of the Cloud Pod into the current LocalStack instance and overwrite the existing state.
- **Load with basic merge**: This merge strategy will load the state of the Cloud Pod into the current LocalStack instance and merge the existing state with the state of the Cloud Pod.
-- **Load with deep merge**: This merge strategy will load the state of the Cloud Pod into the current LocalStack instance and merge the existing state with the state of the Cloud Pod. It will also merge the existing state with the state of the Cloud Pod recursively.
+- **Load with deep merge**: This merge strategy will load the state of the Cloud Pod into the current LocalStack instance and recursively merge it with the existing state.
{{< figure src="cloud-pods-state-merge-mechanisms.png" width="80%" alt="State Merge mechanisms with LocalStack Cloud Pods">}}
## Conclusion
-In conclusion, LocalStack Cloud Pods facilitate collaboration and debugging among team members by allowing the sharing of local cloud infrastructure and instance state. These Cloud Pods can be used to create reproducible environments for various purposes, including machine learning. By using Cloud Pods, teams can work together to create a reproducible environment for their application and share it with other team members. Additionally, Cloud Pods can be used to pre-seed continuous integration (CI) pipelines with the necessary instance state to bootstrap testing environments or to troubleshoot failures in the CI pipeline.
+In conclusion, LocalStack Cloud Pods facilitate collaboration and debugging among team members by allowing the sharing of local cloud infrastructure and instance state.
+These Cloud Pods can be used to create reproducible environments for various purposes, including machine learning.
+By using Cloud Pods, teams can work together to create a reproducible environment for their application and share it with other team members.
+Additionally, Cloud Pods can be used to pre-seed continuous integration (CI) pipelines with the necessary instance state to bootstrap testing environments or to troubleshoot failures in the CI pipeline.
-For more information about LocalStack Cloud Pods, refer to the documentation provided. The code for this tutorial, including a Makefile to execute it step-by-step, is available in the [LocalStack Pro samples repository](https://github.com/localstack/localstack-pro-samples/tree/master/reproducible-ml) on GitHub.
+For more information about LocalStack Cloud Pods, refer to the [Cloud Pods documentation]({{< ref "user-guide/state-management/cloud-pods" >}}).
+The code for this tutorial, including a Makefile to execute it step-by-step, is available in the [LocalStack Pro samples repository](https://github.com/localstack/localstack-pro-samples/tree/master/reproducible-ml) on GitHub.
diff --git a/content/en/tutorials/route53-failover-with-fis/index.md b/content/en/tutorials/route53-failover-with-fis/index.md
index b678e2712e..a6b491d094 100644
--- a/content/en/tutorials/route53-failover-with-fis/index.md
+++ b/content/en/tutorials/route53-failover-with-fis/index.md
@@ -29,24 +29,30 @@ leadimage: "route-53-failover.png"
## Introduction
-LocalStack allows you to integrate & test [Fault Injection Simulator (FIS)](https://docs.localstack.cloud/user-guide/aws/fis/) with [Route53](https://docs.localstack.cloud/user-guide/aws/route53/) to automatically divert users to
-a healthy secondary zone if the primary region fails, ensuring system availability and responsiveness. Route53's health checks and
+LocalStack allows you to integrate & test [Fault Injection Simulator (FIS)](https://docs.localstack.cloud/user-guide/aws/fis/) with [Route53](https://docs.localstack.cloud/user-guide/aws/route53/) to automatically divert users to
+a healthy secondary zone if the primary region fails, ensuring system availability and responsiveness.
+Route53's health checks and
traffic redirection enhance architecture resilience and ensure service continuity during regional outages, crucial for uninterrupted
user experiences.
{{< callout "note">}}
-Route53 Failover with FIS is currently available as part of the **LocalStack Enterprise** plan. If you'd like to try it out,
+Route53 Failover with FIS is currently available as part of the **LocalStack Enterprise** plan.
+If you'd like to try it out,
please [contact us](https://www.localstack.cloud/demo) to request access.
{{< /callout >}}
## Getting started
-This tutorial is designed for users new to the Route53 and FIS services. In this example, there's an active-primary and
-passive-standby configuration. Route53 routes traffic to the primary region, which processes product-related requests through
-API Gateway and Lambda functions, with data stored in DynamoDB. If the primary region fails, Route53 redirects to the standby
+This tutorial is designed for users new to the Route53 and FIS services.
+In this example, there's an active-primary and
+passive-standby configuration.
+Route53 routes traffic to the primary region, which processes product-related requests through
+API Gateway and Lambda functions, with data stored in DynamoDB.
+If the primary region fails, Route53 redirects to the standby
region, maintained in sync by a replication Lambda function.
-For this particular example, we'll be using a [sample application repository](https://github.com/localstack-samples/samples-chaos-engineering/tree/main/route53-failover). Clone the repository, and follow the
+For this particular example, we'll be using a [sample application repository](https://github.com/localstack-samples/samples-chaos-engineering/tree/main/route53-failover).
+Clone the repository, and follow the
instructions below to get started.
### Prerequisites
@@ -59,7 +65,8 @@ The general prerequisites for this guide are:
- [Python-3](https://www.python.org/downloads/)
- `dig`
-Start LocalStack by using the `docker-compose.yml` file from the repository. Ensure to set your Auth Token as an environment variable
+Start LocalStack by using the `docker-compose.yml` file from the repository.
+Make sure to set your Auth Token as an environment variable
during this process.
{{< command >}}
@@ -75,17 +82,21 @@ The following diagram shows the architecture that this application builds and de
### Creating the resources
-To begin, deploy the same services in both `us-west-1` and `us-east-1` regions. The resources specified in the `init-resources.sh`
+To begin, deploy the same services in both `us-west-1` and `us-east-1` regions.
+The resources specified in the `init-resources.sh`
file will be created when the LocalStack container starts, using Initialization Hooks and the `awslocal` CLI tool.
-The objective is to have a backup system in case of a regional outage in the primary availability zone (`us-west-1`). We'll focus
+The objective is to have a backup system in case of a regional outage in the primary availability zone (`us-west-1`).
+We'll focus
on this region to examine the existing resilience mechanisms.
{{< figure src="route53-failover-2.png" width="800">}}
-- The primary API Gateway includes a health check endpoint that returns a 200 HTTP status code, serving as a basic check for its availability.
-- Data synchronization across regions can be achieved with AWS-native tools like DynamoDB Streams and AWS Lambda. Here, any changes to the
-primary table trigger a Lambda function, replicating these changes to a secondary table. This configuration is essential for high availability
+- The primary API Gateway includes a health check endpoint that returns a 200 HTTP status code, serving as a basic check for its availability.
+- Data synchronization across regions can be achieved with AWS-native tools like DynamoDB Streams and AWS Lambda.
+ Here, any changes to the
+primary table trigger a Lambda function, replicating these changes to a secondary table (a minimal sketch of such a handler follows this list).
+ This configuration is essential for high availability
and disaster recovery.
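As referenced above, here is a minimal sketch of what such a stream-triggered replication handler could look like. The table name, environment variable, and the use of Python are assumptions for illustration; the sample repository's actual implementation may differ:
```python
import os

import boto3
from boto3.dynamodb.types import TypeDeserializer

# The replica table lives in the standby region (us-east-1 in this tutorial).
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
replica_table = dynamodb.Table(os.environ.get("REPLICA_TABLE", "Products"))
deserializer = TypeDeserializer()

def handler(event, context):
    # Each stream record describes an INSERT, MODIFY, or REMOVE on the primary table.
    for record in event.get("Records", []):
        if record["eventName"] in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"]["NewImage"]
            # Stream images use DynamoDB's typed attribute format, e.g. {"id": {"S": "1"}}.
            item = {k: deserializer.deserialize(v) for k, v in new_image.items()}
            replica_table.put_item(Item=item)
        elif record["eventName"] == "REMOVE":
            keys = record["dynamodb"]["Keys"]
            key = {k: deserializer.deserialize(v) for k, v in keys.items()}
            replica_table.delete_item(Key=key)
```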
### Configuring a Route53 hosted zone
@@ -113,11 +124,12 @@ awslocal route53 create-health-check \
)
{{< /command >}}
-This command creates a Route 53 health check for an HTTP endpoint (`12345.execute-api.localhost.localstack.cloud:4566/dev/healthcheck`)
-with a 10-second request interval and captures the health check's ID. The caller reference identifier in AWS resource creation or updates
+This command creates a Route 53 health check for an HTTP endpoint (`12345.execute-api.localhost.localstack.cloud:4566/dev/healthcheck`)
+with a 10-second request interval and captures the health check's ID.
+The caller reference identifier in AWS resource creation or updates
prevents accidental duplication if requests are repeated.
-To update DNS records in the specified Route53 hosted zone (`$HOSTED_ZONE_ID`), add two CNAME records: `12345.$HOSTED_ZONE_NAME`
+To update DNS records in the specified Route53 hosted zone (`$HOSTED_ZONE_ID`), add two CNAME records: `12345.$HOSTED_ZONE_NAME`
pointing to `12345.execute-api.localhost.localstack.cloud`, and `67890.$HOSTED_ZONE_NAME` pointing to `67890.execute-api.localhost.localstack.cloud`.
Set a TTL (Time to Live) of 60 seconds for these records.
@@ -152,9 +164,12 @@ $ awslocal route53 change-resource-record-sets \
}'
{{< /command >}}
-Finally, we'll update the DNS records in the Route53 hosted zone identified by **`$HOSTED_ZONE_ID`**. We're adding two CNAME records
-for the subdomain `test.$HOSTED_ZONE_NAME`. The first record points to `12345.$HOSTED_ZONE_NAME` and is linked with the earlier created
-health check, designated as the primary failover target. The second record points to `67890.$HOSTED_ZONE_NAME` and is set as the secondary
+Finally, we'll update the DNS records in the Route53 hosted zone identified by **`$HOSTED_ZONE_ID`**.
+We're adding two CNAME records
+for the subdomain `test.$HOSTED_ZONE_NAME`.
+The first record points to `12345.$HOSTED_ZONE_NAME` and is linked to the previously created
+health check, designated as the primary failover target.
+The second record points to `67890.$HOSTED_ZONE_NAME` and is set as the secondary
failover target.
{{< command >}}
@@ -196,27 +211,33 @@ $ awslocal route53 change-resource-record-sets \
{{< /command >}}
This setup represents the basic failover configuration where traffic is redirected to different endpoints based on their health check
-status. To confirm that the CNAME record for `test.hello-localstack.com` points to `12345.execute-api.localhost.localstack.cloud`,
+status.
+To confirm that the CNAME record for `test.hello-localstack.com` points to `12345.execute-api.localhost.localstack.cloud`,
you can use the following `dig` command:
{{< command >}}
$ dig @localhost test.hello-localstack.com CNAME
-
+
.....
;; QUESTION SECTION:
-;test.hello-localstack.com. IN CNAME
+;test.hello-localstack.com. IN CNAME
;; ANSWER SECTION:
-test.hello-localstack.com. 300 IN CNAME 12345.execute-api.localhost.localstack.cloud.
+test.hello-localstack.com. 300 IN CNAME 12345.execute-api.localhost.localstack.cloud.
.....
{{< /command >}}
### Creating a controlled outage
-Our setup is now complete and ready for testing. To mimic a regional outage in the `us-west-1` region, we'll conduct an experiment that
-halts all service invocations in this region, including the health check function. Once the primary region becomes non-functional,
-Route 53's health checks will fail. This failure will activate the failover policy, redirecting traffic to the corresponding services
+Our setup is now complete and ready for testing.
+To mimic a regional outage in the `us-west-1` region, we'll conduct an experiment that
+halts all service invocations in this region, including the health check function.
+Once the primary region becomes non-functional,
+Route 53's health checks will fail.
+This failure will activate the failover policy, redirecting traffic to the corresponding services
in the secondary region, thus maintaining service continuity.
{{< command >}}
@@ -236,7 +257,7 @@ $ cat region-outage-experiment.json
"stopConditions": [],
"roleArn": "arn:aws:iam:000000000000:role/ExperimentRole"
}
-
+
{{< /command >}}
This Fault Injection Simulator (FIS) experiment template is set up to mimic a `Service Unavailable` (503 error) in the `us-west-1` region.
@@ -276,18 +297,22 @@ $ awslocal fis start-experiment --experiment-template-id
{{< /command >}}
-Replace `` with the ID of the experiment template created in the previous step. When the experiment is active,
-Route 53's health checks will detect the failure and redirect traffic to the standby region as per the failover setup. Confirm this redirection with:
+Replace `` with the ID of the experiment template created in the previous step.
+When the experiment is active,
+Route 53's health checks will detect the failure and redirect traffic to the standby region as per the failover setup.
+Confirm this redirection with:
{{< command >}}
$ dig @localhost test.hello-localstack.com CNAME
-
+
.....
;; QUESTION SECTION:
-;test.hello-localstack.com. IN CNAME
+;test.hello-localstack.com. IN CNAME
;; ANSWER SECTION:
-test.hello-localstack.com. 300 IN CNAME 67890.execute-api.localhost.localstack.cloud.
+test.hello-localstack.com. 300 IN CNAME 67890.execute-api.localhost.localstack.cloud.
.....
{{< /command >}}
diff --git a/content/en/tutorials/s3-static-website-terraform/index.md b/content/en/tutorials/s3-static-website-terraform/index.md
index be2a472577..459250aa21 100644
--- a/content/en/tutorials/s3-static-website-terraform/index.md
+++ b/content/en/tutorials/s3-static-website-terraform/index.md
@@ -22,13 +22,22 @@ pro: false
leadimage: "s3-static-website-terraform-featured-image.png"
---
-[AWS Simple Storage Service (S3)](https://aws.amazon.com/s3/) is a proprietary object storage solution that can store an unlimited number of objects for many use cases. S3 is a highly scalable, durable and reliable service that we can use for various use cases: hosting a static site, handling big data analytics, managing application logs, storing web assets and much more!
+[AWS Simple Storage Service (S3)](https://aws.amazon.com/s3/) is a proprietary object storage solution that can store an unlimited number of objects for many use cases.
+S3 is a highly scalable, durable, and reliable service that supports a wide range of scenarios: hosting a static site, handling big data analytics, managing application logs, storing web assets, and much more!
-With S3, you have unlimited storage with your data stored in buckets. A bucket refers to a directory, while an object is just another term for a file. Every object (file) stores the name of the file (key), the contents (value), a version ID and the associated metadata. You can also use S3 to host a static website, to serve static content. It might include HTML, CSS, JavaScript, images, and other assets that make up your website.
+With S3, you have unlimited storage with your data stored in buckets.
+A bucket refers to a directory, while an object is just another term for a file.
+Every object (file) stores the name of the file (key), the contents (value), a version ID and the associated metadata.
+You can also use S3 to host a static website and serve static content.
+That content might include HTML, CSS, JavaScript, images, and other assets that make up your website.
-LocalStack supports the S3 API, which means you can use the same API calls to interact with S3 in LocalStack as you would with AWS. Using LocalStack, you can create and manage S3 buckets and objects locally, use AWS SDKs and third-party integrations to work with S3, and test your applications without making any significant alterations. LocalStack also supports the creation of S3 buckets with static website hosting enabled.
+LocalStack supports the S3 API, which means you can use the same API calls to interact with S3 in LocalStack as you would with AWS.
+Using LocalStack, you can create and manage S3 buckets and objects locally, use AWS SDKs and third-party integrations to work with S3, and test your applications without making any significant alterations.
+LocalStack also supports the creation of S3 buckets with static website hosting enabled.
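As a quick illustration of that API parity, enabling website hosting on a local bucket only takes a couple of `awslocal` calls (the bucket name below is hypothetical; this tutorial provisions everything through Terraform instead):
{{< command >}}
$ awslocal s3 mb s3://my-demo-bucket
$ awslocal s3 website s3://my-demo-bucket --index-document index.html --error-document error.html
{{< / command >}}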
-In this tutorial, we will deploy a static website using an S3 bucket over a locally emulated AWS infrastructure on LocalStack. We will use Terraform to automate the creation & management of AWS resources by declaring them in the HashiCorp Configuration Language (HCL). We will also learn about `tflocal`, a CLI wrapper created by LocalStack, that allows you to run Terraform locally against LocalStack.
+In this tutorial, we will deploy a static website using an S3 bucket over a locally emulated AWS infrastructure on LocalStack.
+We will use Terraform to automate the creation & management of AWS resources by declaring them in the HashiCorp Configuration Language (HCL).
+We will also learn about `tflocal`, a CLI wrapper created by LocalStack, that allows you to run Terraform locally against LocalStack.
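If you have not used it before, `tflocal` is typically installed from PyPI and then used as a drop-in replacement for the `terraform` binary, shown here only as a quick sketch:
{{< command >}}
$ pip install terraform-local
$ tflocal init
$ tflocal apply
{{< / command >}}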
## Prerequisites
@@ -40,9 +49,13 @@ For this tutorial, you will need:
## Creating a static website
-We will create a simple static website using plain HTML to get started. To create a static website deployed over S3, we need to create an index document and a custom error document. We will name our index document `index.html` and our error document `error.html`. Optionally, you can create a folder called `assets` to store images and other assets.
+We will create a simple static website using plain HTML to get started.
+To create a static website deployed over S3, we need to create an index document and a custom error document.
+We will name our index document `index.html` and our error document `error.html`.
+Optionally, you can create a folder called `assets` to store images and other assets.
-Let's create a directory named `s3-static-website-localstack` where we'll store our static website files. If you don't have an `index.html` file, you can use the following code to create one:
+Let's create a directory named `s3-static-website-localstack` where we'll store our static website files.
+If you don't have an `index.html` file, you can use the following code to create one:
```html
@@ -58,7 +71,9 @@ Let's create a directory named `s3-static-website-localstack` where we'll store