This repository provides some example CloudFormation templates for running Kong Mesh on ECS + Fargate.
It provisions all the necessary AWS infrastructure for running a standalone Kong Mesh zone with a postgres backend (on AWS Aurora) and runs the Kuma counter demo.
The example deployment consists of CloudFormation stacks for setting up the mesh:

- VPC & ECS cluster stack
- Kong Mesh CP stack

and two stacks for launching the demo (the Redis backend and the counter demo app).
The `kuma-dp` container will use the identity of the ECS task to authenticate with the Kuma control plane. To enable this functionality, we set the following `kuma-cp` options via environment variables:

```yaml
- Name: KUMA_DP_SERVER_AUTHN_DP_PROXY_TYPE
  Value: aws-iam
- Name: KUMA_DP_SERVER_AUTHN_ZONE_PROXY_TYPE
  Value: aws-iam
- Name: KUMA_DP_SERVER_AUTHN_ENABLE_RELOADABLE_TOKENS
  Value: "true"
```

We also add the following to tell the CP to only allow identities from certain accounts:

```yaml
- Name: KMESH_AWSIAM_AUTHORIZEDACCOUNTIDS
  Value: !Ref AWS::AccountId # this tells the CP which accounts can be used by DPs to authenticate
```
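For context, both sets of variables go into the `Environment` list of the `kuma-cp` container in the control plane's ECS task definition. A minimal sketch, with illustrative resource and image names rather than the exact ones used by this repo's templates:

```yaml
ControlPlaneTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    ContainerDefinitions:
      - Name: kuma-cp
        Image: kong/kuma-cp:2.7.0  # illustrative image reference; use the version pinned in this repo
        Environment:
          - Name: KUMA_DP_SERVER_AUTHN_DP_PROXY_TYPE
            Value: aws-iam
          - Name: KUMA_DP_SERVER_AUTHN_ZONE_PROXY_TYPE
            Value: aws-iam
          - Name: KUMA_DP_SERVER_AUTHN_ENABLE_RELOADABLE_TOKENS
            Value: "true"
          - Name: KMESH_AWSIAM_AUTHORIZEDACCOUNTIDS
            Value: !Ref AWS::AccountId
```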
The `kuma-cp` task role also needs permission to call `iam:GetRole` on any `kuma-dp` task roles. Add the following to your `kuma-cp` task role policy:

```yaml
- PolicyName: get-dataplane-roles
  PolicyDocument:
    Statement:
      - Effect: Allow
        Action:
          - iam:GetRole
        Resource:
          - "*"
```
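If you are writing the role from scratch, the inline policy attaches under `Policies` on the task role. A rough sketch, where the resource name is illustrative (the trust policy for ECS tasks is the standard `ecs-tasks.amazonaws.com` one):

```yaml
ControlPlaneTaskRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            Service: ecs-tasks.amazonaws.com
          Action: sts:AssumeRole
    Policies:
      - PolicyName: get-dataplane-roles
        PolicyDocument:
          Statement:
            - Effect: Allow
              Action:
                - iam:GetRole
              Resource:
                - "*"
```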
We also add the following option to the `kuma-dp` container command:

```yaml
- --auth-type=aws
```
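In the data plane task definition this ends up as one entry in the `kuma-dp` container's `Command` list. A hedged sketch, where the image tag and the other flags and paths are illustrative rather than copied from this repo's templates:

```yaml
- Name: kuma-dp
  Image: kong/kuma-dp:2.7.0  # illustrative image reference
  Command:
    - run
    - --cp-address=https://controlplane.kongmesh:5678   # DP-to-CP port from this walkthrough
    - --dataplane-file=/pod/dataplane.yaml               # illustrative path to the Dataplane spec
    - --auth-type=aws
```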
In these examples, the ECS task IAM role has the `kuma.io/service` tag set to the name of the service the workload is running under:

```yaml
Tags:
  - Key: kuma.io/service
    Value: !FindInMap [Config, Workload, Name]
```
You'll need to have the Kong Mesh CLI (`kumactl`) installed, as well as the AWS CLI set up on the machine you're deploying from. Check the example IAM policy in this repo for permissions sufficient to deploy everything in this repository.
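A quick sanity check that both CLIs are available before starting:

```sh
kumactl version
aws --version
```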
The VPC stack sets up our VPC, adds subnets, sets up routing and private DNS and creates a load balancer. It also provisions the ECS cluster and corresponding IAM roles.
```sh
aws cloudformation deploy \
  --capabilities CAPABILITY_IAM \
  --stack-name ecs-demo-vpc \
  --template-file deploy/vpc.yaml
```
The Kong Mesh CP stack launches Kong Mesh in standalone mode with an Aurora backend, fronted by an AWS Network Load Balancer.
The first step is to add your Kong Mesh license to AWS Secrets Manager. Assuming your license file is at `license.json`:

```sh
LICENSE_SECRET=$(
  aws secretsmanager create-secret \
    --name ecs-demo/KongMeshLicense --description "Secret containing Kong Mesh license" \
    --secret-string file://license.json \
  | jq -r .ARN)
```
We need to provision TLS certificates for the control plane, which serves both external traffic (port `5682`) and proxy-to-control-plane traffic (port `5678`) over TLS.
In a production scenario, you'd have a static domain name to point to the load balancer and a PKI or AWS Certificate Manager for managing TLS certificates.
In this walkthrough, we'll use the DNS name provisioned for the load balancer by AWS and use `kumactl` to generate some TLS certificates.
The load balancer's DNS name is exported from our VPC stack and the HTTPS (`5682`) endpoints are exposed on it:

```sh
CP_ADDR=$(aws cloudformation describe-stacks --stack-name ecs-demo-vpc \
  | jq -r '.Stacks[0].Outputs[] | select(.OutputKey == "ExternalCPAddress") | .OutputValue')
```
`kumactl` provides a utility command for generating certificates. The certificates will have two SANs. One is the DNS name of our load balancer and the other is the internally-routable, static name we provision via ECS Service Discovery for our data planes.

```sh
kumactl generate tls-certificate --type=server --hostname ${CP_ADDR} --hostname controlplane.kongmesh
```
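To double-check that both SANs made it into the generated certificate, you can inspect it with `openssl`:

```sh
openssl x509 -in cert.pem -noout -text | grep -A1 "Subject Alternative Name"
```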
We now have a `key.pem` and `cert.pem` and we'll save both of them as AWS secrets.

```sh
TLS_KEY=$(
  aws secretsmanager create-secret \
    --name ecs-demo/CPTLSKey \
    --description "Secret containing TLS private key for serving control plane traffic" \
    --secret-string file://key.pem \
  | jq -r .ARN)

TLS_CERT=$(
  aws secretsmanager create-secret \
    --name ecs-demo/CPTLSCert \
    --description "Secret containing TLS certificate for serving control plane traffic" \
    --secret-string file://cert.pem \
  | jq -r .ARN)
```
If you are deploying a zone that connects to a global control plane on Konnect, please switch `Environment` to Universal in the zone creation wizard, then extract and copy these items:

- the KDS sync endpoint of your global control plane (from the Connect Zone section, under field path `multizone.zone.globalAddress`)
- the ID of your global control plane (from the Connect Zone section, under field path `kmesh.multizone.zone.konnect.cpId`)
- the authentication token (from the Save token section, line 2)
Export them to variables and files:

```sh
# sample value: grpcs://us.mesh.sync.konghq.com:443
KDS_ADDR=<your global KDS endpoint>

# sample value: 61e5904f-bc3e-401e-9144-d4aa3983a921
CP_ID=<your CP ID here>

# sample value: spat_7J9SN9TKaeg6Uf3fr7Ms1sCuJ9NUbF4AwXCJlfA7QXJzxM7wg
echo "<your auth token here>" > konnect-cp-token

CP_TOKEN_SECRET=$(
  aws secretsmanager create-secret \
    --name ecs-demo/global-cp-token --description "Secret holding the global control plane token on Konnect" \
    --secret-string file://konnect-cp-token \
  | jq -r .ARN)
```
Make sure you attach the exported variables to the stack deployment command below.

```sh
aws cloudformation deploy \
  --capabilities CAPABILITY_IAM \
  --stack-name ecs-demo-kong-mesh-cp \
  --parameter-overrides VPCStackName=ecs-demo-vpc \
      LicenseSecret=${LICENSE_SECRET} \
      ServerKeySecret=${TLS_KEY} \
      ServerCertSecret=${TLS_CERT} \
  --template-file deploy/controlplane.yaml
```
If you are deploying a zone that connects to Konnect, please also attach the following parameters:

```sh
      ZoneName=ecs-demo \
      GlobalKDSAddress=${KDS_ADDR} \
      GlobalCPTokenSecret=${CP_TOKEN_SECRET} \
      KonnectCPId=${CP_ID} \
```
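Putting it together for the Konnect case, the assembled command would look roughly like this (assuming the variables and secrets created above):

```sh
aws cloudformation deploy \
  --capabilities CAPABILITY_IAM \
  --stack-name ecs-demo-kong-mesh-cp \
  --parameter-overrides VPCStackName=ecs-demo-vpc \
      LicenseSecret=${LICENSE_SECRET} \
      ServerKeySecret=${TLS_KEY} \
      ServerCertSecret=${TLS_CERT} \
      ZoneName=ecs-demo \
      GlobalKDSAddress=${KDS_ADDR} \
      GlobalCPTokenSecret=${CP_TOKEN_SECRET} \
      KonnectCPId=${CP_ID} \
  --template-file deploy/controlplane.yaml
```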
The ECS task fetches an admin API token and saves it in an AWS secret.
Let's look up the ARN of that secret from the stack outputs:

```sh
TOKEN_SECRET_ARN=$(aws cloudformation describe-stacks --stack-name ecs-demo-kong-mesh-cp \
  | jq -r '.Stacks[0].Outputs[] | select(.OutputKey == "APITokenSecret") | .OutputValue')
```

Using those two pieces of information (`CP_ADDR` and the token secret ARN), we can fetch the admin token and set up `kumactl`:

```sh
TOKEN=$(aws secretsmanager get-secret-value --secret-id ${TOKEN_SECRET_ARN} \
  | jq -r .SecretString)

kumactl config control-planes add \
  --name=ecs --address=https://${CP_ADDR}:5682 --overwrite --auth-type=tokens \
  --auth-conf token=${TOKEN} \
  --ca-cert-file cert.pem
```
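To confirm `kumactl` is talking to the zone control plane, any read-only command works, for example:

```sh
kumactl get meshes
```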
If you are deploying a zone that connects to Konnect, follow the instructions on Konnect to connect your `kumactl` to the global control plane.
We can also open the Kong Mesh GUI at `https://${CP_ADDR}:5682/gui` (you'll need to force the browser to accept the self-signed certificate).
We now have our control plane running and can begin deploying applications. The workload identity feature will handle authentication with the control plane when we launch our app components.
```sh
aws cloudformation deploy \
  --capabilities CAPABILITY_IAM \
  --stack-name ecs-demo-redis \
  --parameter-overrides VPCStackName=ecs-demo-vpc CPStackName=ecs-demo-kong-mesh-cp \
  --template-file deploy/counter-demo/redis.yaml

aws cloudformation deploy \
  --capabilities CAPABILITY_IAM \
  --stack-name ecs-demo-demo-app \
  --parameter-overrides VPCStackName=ecs-demo-vpc CPStackName=ecs-demo-kong-mesh-cp \
  --template-file deploy/counter-demo/demo-app.yaml
```
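Once both stacks are up, the data planes should register with the zone control plane; you can check that they are online with:

```sh
kumactl inspect dataplanes
```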
See below under Usage for more about how communication between these two services works and how to configure it.
The `demo-app` stack exposes the server on port `80` of the NLB, so our app is now running and accessible at `http://${CP_ADDR}:80`.
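A quick check from the command line (this should return the counter demo page's HTML):

```sh
curl -s http://${CP_ADDR}:80
```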
To clean up the resources we created, you can execute the following:

```sh
aws cloudformation delete-stack --stack-name ecs-demo-demo-app
aws cloudformation delete-stack --stack-name ecs-demo-redis
aws cloudformation delete-stack --stack-name ecs-demo-kong-mesh-cp

aws secretsmanager delete-secret --secret-id ${TLS_CERT}
aws secretsmanager delete-secret --secret-id ${TLS_KEY}
aws secretsmanager delete-secret --secret-id ${LICENSE_SECRET}

aws cloudformation delete-stack --stack-name ecs-demo-vpc
```
The control plane ECS task saves the generated admin token to an AWS secret. After we have accessed the secret, we can remove the final two containers in our control plane task.
When running Kong Mesh on ECS + Fargate, you'll need to list the services in the mesh that your task communicates with. These are called outbounds.
This entails editing the `Dataplane` template in the CloudFormation template used to deploy your application. We can see this in the `demo-app` template parameter `DPTemplate`:

```yaml
outbound:
  - port: 6379
    tags:
      kuma.io/service: redis
```

Here we're telling Kong Mesh that our `demo-app` will communicate with the `redis` service. The sidecar is then configured to route requests sent to `redis:6379` to our `redis` service.
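For reference, the outbound entry sits inside the full `Dataplane` resource the task registers with. A sketch of what that might look like, where the inbound port and the templating placeholders are illustrative rather than copied from this repo's `DPTemplate`:

```yaml
type: Dataplane
mesh: default
name: "{{ dpname }}"          # illustrative placeholder filled in per task
networking:
  address: "{{ address }}"    # illustrative placeholder for the task IP
  inbound:
    - port: 5000              # illustrative application port
      tags:
        kuma.io/service: demo-app
        kuma.io/protocol: http
  outbound:
    - port: 6379
      tags:
        kuma.io/service: redis
```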
This repository includes a GitHub Workflow that runs every night, executing the above steps to test that the demo works.
You can use ECS exec to get a shell in one of the containers to debug issues. Given a task ARN and a cluster name:

```sh
aws ecs execute-command --cluster ${ECS_CLUSTER_NAME} \
  --task ${ECS_TASK_ARN} \
  --container workload \
  --interactive \
  --command "/bin/sh"
```
Note that if the job fails, any CloudFormation stacks created during the failed run are not deleted. The next GH workflow run will not succeed unless all stacks from previous runs are deleted. This means any `ecs-ci-*` stacks need to be manually deleted in the nightly AWS account in the event of a workflow run failure.
In case of failure, check the Events of the failed CloudFormation stack. For example, if an ECS service fails to create, you can look at the failed/deleted ECS tasks for more information.
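The same events are also available from the CLI; for example, for the control plane stack:

```sh
aws cloudformation describe-stack-events --stack-name ecs-demo-kong-mesh-cp
```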