
Infrastructure


The ecs-cli spins up infrastructure on your behalf in your account to help you get your containers running on ECS and AWS. In this section, we'll go over the different pieces of infrastructure that are set up for you, why we set them up, and what they look like.

Environments

When you create your applications, you deploy them to a particular environment. Each environment has its own networking stack, load balancer, and ECS Cluster.

When you create a new environment through `archer env init` (or through the first `app init` experience), we'll set up an environment for you. The basic infrastructure of an environment looks like this:

[Diagram: Environment Stack]

VPC and Networking

Each environment gets its own multi-AZ VPC. Your VPC is the network boundary of your environment, allowing in the traffic you expect and blocking the rest. The VPCs we create are spread across two Availability Zones to help balance availability and cost, with each AZ getting a public and a private subnet.

We partition your VPC into public and private subnets. When we launch your applications, we launch them into your private subnets so that they can't be reached from the internet except through your load balancer. To route traffic from your private subnets out to the internet (when your service makes an outbound request, for example), we also spin up a NAT Gateway and an Elastic IP.
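
The CLI provisions all of this for you, but to make the moving parts concrete, here's a rough boto3 sketch of the equivalent networking resources. The region, CIDR ranges, and Availability Zones below are illustrative assumptions, not what the CLI actually uses.

```python
import boto3

# Illustrative sketch only -- the CLI provisions these resources for you.
# Region, CIDR blocks, and AZs below are assumptions for the example.
ec2 = boto3.client("ec2", region_name="us-west-2")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# One public and one private subnet per Availability Zone (two AZs here).
public_a = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/24",
                             AvailabilityZone="us-west-2a")["Subnet"]["SubnetId"]
private_a = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24",
                              AvailabilityZone="us-west-2a")["Subnet"]["SubnetId"]
public_b = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24",
                             AvailabilityZone="us-west-2b")["Subnet"]["SubnetId"]
private_b = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.3.0/24",
                              AvailabilityZone="us-west-2b")["Subnet"]["SubnetId"]

# Internet Gateway so the public subnets can reach the internet directly.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Elastic IP + NAT Gateway so tasks in the private subnets can make
# outbound requests without being reachable from the internet.
allocation_id = ec2.allocate_address(Domain="vpc")["AllocationId"]
nat_id = ec2.create_nat_gateway(SubnetId=public_a,
                                AllocationId=allocation_id)["NatGateway"]["NatGatewayId"]
```

Route tables and security groups are omitted here for brevity; in a setup like this, the private subnets' default route points at the NAT Gateway and the public subnets' default route points at the Internet Gateway.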

Load Balancers and DNS

If you set up any application using one of the Load Balanced application types, we'll create a load balancer. For a Load Balanced Web App specifically, we'll create an Application Load Balancer. All applications within a particular environment share that load balancer by adding app-specific listener rules to it. Your load balancer is whitelisted to communicate with services in your VPC.
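
The CLI creates this for you when the first load balanced application lands in an environment, but conceptually the shared load balancer boils down to a single ALB in the environment's public subnets. A minimal boto3 sketch, with placeholder names, subnet IDs, and security group:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-west-2")

# Placeholder subnet and security group IDs -- in practice these come
# from the environment's networking stack described above.
alb = elbv2.create_load_balancer(
    Name="my-env-alb",                            # hypothetical name
    Subnets=["subnet-aaa111", "subnet-bbb222"],   # the two public subnets
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
)["LoadBalancers"][0]

print(alb["LoadBalancerArn"], alb["DNSName"])
```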

Optionally, when you set up a project, you can provide a domain name that you own and that is registered in Route 53. If you provide us a domain name, each time you spin up an environment we'll create a subdomain, `environment-name.your-domain.com`, provision an ACM certificate, and bind it to your Application Load Balancer so it can use HTTPS. You don't need to provide a domain name, but if you don't, you'll have to use HTTP connections to your Application Load Balancer.
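
If you do provide a domain, the HTTPS piece corresponds roughly to requesting a DNS-validated ACM certificate for the environment's subdomain and attaching it to an HTTPS listener on the ALB. The domain and ARNs below are placeholders, and this is only a sketch of the idea, not the CLI's actual implementation:

```python
import boto3

acm = boto3.client("acm", region_name="us-west-2")
elbv2 = boto3.client("elbv2", region_name="us-west-2")

# Request a certificate for the environment's subdomain, validated via
# DNS through the Route 53 hosted zone you already own.
cert_arn = acm.request_certificate(
    DomainName="test.your-domain.com",   # e.g. environment-name.your-domain.com
    ValidationMethod="DNS",
)["CertificateArn"]

# Once the certificate is issued, bind it to an HTTPS listener on the ALB.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...",  # placeholder ALB ARN
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": cert_arn}],
    DefaultActions=[{
        "Type": "fixed-response",
        "FixedResponseConfig": {"StatusCode": "404"},
    }],
)
```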

Applications

After you've created an environment, you can deploy an application into it. The application stack contains the actual ECS service for your app, as well as all the configuration needed to get requests from the environment's existing load balancer to your service.

[Diagram: Load Balanced Web App Stack]

Load Balancer Config

For Load Balanced Web Apps, we create a Listener Rule on the Application Load Balancer. This rule matches certain URL paths and forwards the matching requests to your service.

For example, say you have a project called ecs-cli whose load balancer fronts ecs-cli.aws, and a front-end application. We'll create a listener rule on the ecs-cli load balancer that matches ecs-cli.aws/front-end and forwards requests for that path to the front-end service. Another application, back-end, might have a rule on the same load balancer that forwards requests for ecs-cli.aws/back-end to its own service.

What these listener rules actually forward traffic to is our Target Group. The Target Group knows about all the running tasks (containers) in our application's service. The Target Group will pick one of these tasks and forward the request to it.
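
The CLI wires this up during deployment, but the two resources involved map to a Target Group and a Listener Rule. Here's a hedged boto3 sketch, reusing the hypothetical front-end example above; the VPC ID, listener ARN, and rule priority are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-west-2")

# Target Group that tracks the running Fargate tasks for the front-end
# service. TargetType must be "ip" for Fargate (awsvpc networking).
tg_arn = elbv2.create_target_group(
    Name="front-end",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",   # placeholder environment VPC
    TargetType="ip",
    HealthCheckPath="/",
)["TargetGroups"][0]["TargetGroupArn"]

# Listener Rule on the shared ALB: path-based match that forwards
# requests under /front-end to the Target Group above.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...",   # placeholder listener ARN
    Priority=10,
    Conditions=[{"Field": "path-pattern",
                 "Values": ["/front-end", "/front-end/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```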

Fargate Service

The final piece of the puzzle is our actual ECS Service. We provision an ECS service using Fargate as the launch type - that way you don't have to manage any EC2 instances. This is where your application code runs.
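The CLI creates the service for you, but as an illustrative sketch of what a Fargate service amounts to at the API level, using boto3 (the cluster name, task definition, subnets, security group, and target group ARN are placeholders):

```python
import boto3

ecs = boto3.client("ecs", region_name="us-west-2")

# Fargate service: no EC2 instances to manage. Tasks run in the
# environment's private subnets and register themselves (by IP) with
# the Target Group behind the ALB listener rule.
ecs.create_service(
    cluster="my-env-cluster",            # placeholder cluster name
    serviceName="front-end",
    taskDefinition="front-end:1",        # placeholder task definition
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-ccc333", "subnet-ddd444"],  # private subnets
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:...",  # placeholder
        "containerName": "front-end",
        "containerPort": 80,
    }],
)
```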

Deploying your application also generates a new Task Definition - but we'll talk more about that in the manifest section.
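
The manifest section covers how that Task Definition is generated. Just to give a rough idea of the shape of the result, a minimal Fargate-compatible task definition registered through boto3 might look like this (the image, role ARN, and CPU/memory sizes are placeholder assumptions):

```python
import boto3

ecs = boto3.client("ecs", region_name="us-west-2")

# Minimal Fargate-compatible task definition: awsvpc networking,
# task-level CPU/memory, and a single container definition.
ecs.register_task_definition(
    family="front-end",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[{
        "name": "front-end",
        "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/front-end:latest",  # placeholder
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        "essential": True,
    }],
)
```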
