cloudposse-terraform-components/aws-eks-cluster

This component is responsible for provisioning an end-to-end EKS Cluster, including managed node groups and Fargate profiles.

Note

Windows not supported

This component has not been tested with Windows worker nodes of any launch type. Although upstream modules support Windows nodes, there are likely issues around incorrect or insufficient IAM permissions or other configuration that would need to be resolved for this component to properly configure the upstream modules for Windows nodes. If you need Windows nodes, please experiment and be on the lookout for issues, and then report any issues to Cloud Posse.

Usage

Stack Level: Regional

Here's an example snippet for how to use this component.

This example expects the Cloud Posse Reference Architecture Identity and Network designs deployed for mapping users to EKS service roles and granting access in a private network. In addition, this example has the GitHub OIDC integration added and makes use of Karpenter to dynamically scale cluster nodes.

For more on these requirements, see Identity Reference Architecture, Network Reference Architecture, the GitHub OIDC component, and the Karpenter component.

Mixin pattern for Kubernetes version

We recommend separating the Kubernetes version and the related add-on versions into a separate mixin (one per Kubernetes minor version) to make it easier to run different versions in different environments, for example while testing a new version.

We also recommend leaving the "resolve conflicts" settings unset, and therefore using the default "OVERWRITE" behavior, because any custom configuration that you want to preserve should be managed by Terraform configuring the add-ons directly.

For example, create catalog/eks/cluster/mixins/k8s-1-29.yaml with the following content:

components:
  terraform:
    eks/cluster:
      vars:
        cluster_kubernetes_version: "1.29"

        # You can set all the add-on versions to `null` to use the latest version,
        # but that introduces drift as new versions are released. As usual, we recommend
        # pinning the versions to a specific version and upgrading when convenient.

        # Determine the latest version of the EKS add-ons for the specified Kubernetes version
        #  EKS_K8S_VERSION=1.29 # replace with your cluster version
        #  ADD_ON=vpc-cni # replace with the add-on name
        #  echo "${ADD_ON}:" && aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION --addon-name $ADD_ON \
        #  --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table

        # To see versions for all the add-ons, wrap the above command in a for loop:
        #   for ADD_ON in vpc-cni kube-proxy coredns aws-ebs-csi-driver aws-efs-csi-driver; do
        #     echo "${ADD_ON}:" && aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION --addon-name $ADD_ON \
        #     --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table
        #   done

        # To see the custom configuration schema for an add-on, run the following command:
        #   aws eks describe-addon-configuration --addon-name aws-ebs-csi-driver \
        #   --addon-version v1.20.0-eksbuild.1 | jq '.configurationSchema | fromjson'
        # See the `coredns` configuration below for an example of how to set a custom configuration.

        # https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html
        # https://docs.aws.amazon.com/eks/latest/userguide/managing-add-ons.html#creating-an-add-on
        addons:
          # https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html
          # https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html
          # https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html#cni-iam-role-create-role
          # https://aws.github.io/aws-eks-best-practices/networking/vpc-cni/#deploy-vpc-cni-managed-add-on
          vpc-cni:
            addon_version: "v1.16.0-eksbuild.1" # set `addon_version` to `null` to use the latest version
          # https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html
          kube-proxy:
            addon_version: "v1.29.0-eksbuild.1" # set `addon_version` to `null` to use the latest version
          # https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html
          coredns:
            addon_version: "v1.11.1-eksbuild.4" # set `addon_version` to `null` to use the latest version
            ## override default replica count of 2. In very large clusters, you may want to increase this.
            configuration_values: '{"replicaCount": 3}'

          # https://docs.aws.amazon.com/eks/latest/userguide/csi-iam-role.html
          # https://aws.amazon.com/blogs/containers/amazon-ebs-csi-driver-is-now-generally-available-in-amazon-eks-add-ons
          # https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html#csi-iam-role
          # https://github.com/kubernetes-sigs/aws-ebs-csi-driver
          aws-ebs-csi-driver:
            addon_version: "v1.27.0-eksbuild.1" # set `addon_version` to `null` to use the latest version
            # If you are not using [volume snapshots](https://kubernetes.io/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/#how-to-use-volume-snapshots)
            # (and you probably are not), disable the EBS Snapshotter
            # See https://github.com/aws/containers-roadmap/issues/1919
            configuration_values: '{"sidecars":{"snapshotter":{"forceEnable":false}}}'

          aws-efs-csi-driver:
            addon_version: "v1.7.7-eksbuild.1" # set `addon_version` to `null` to use the latest version
            # Set a short timeout in case of conflict with an existing efs-controller deployment
            create_timeout: "7m"
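
The mixin above leaves the resolve-conflicts settings unset. If a particular add-on needs different behavior, the addons map accepts resolve_conflicts_on_create and resolve_conflicts_on_update per add-on (see the Inputs reference below). A minimal sketch with illustrative values:

addons:
  coredns:
    addon_version: "v1.11.1-eksbuild.4"
    # "OVERWRITE" is the component default; "PRESERVE" keeps configuration
    # changed outside of Terraform when the add-on is updated.
    resolve_conflicts_on_create: "OVERWRITE"
    resolve_conflicts_on_update: "PRESERVE"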

Common settings for all Kubernetes versions

In your main stack configuration, you can then set the Kubernetes version by importing the appropriate mixin:

import:
  - catalog/eks/cluster/mixins/k8s-1-29

components:
  terraform:
    eks/cluster:
      vars:
        enabled: true
        name: eks
        vpc_component_name: "vpc"
        eks_component_name: "eks/cluster"

        # Your choice of availability zones or availability zone ids
        # availability_zones: ["us-east-1a", "us-east-1b", "us-east-1c"]
        aws_ssm_agent_enabled: true
        allow_ingress_from_vpc_accounts:
          - tenant: core
            stage: auto
          - tenant: core
            stage: corp
          - tenant: core
            stage: network

        public_access_cidrs: []
        allowed_cidr_blocks: []
        allowed_security_groups: []

        enabled_cluster_log_types:
          # Caution: enabling `api` log events may lead to a substantial increase in Cloudwatch Logs expenses.
          - api
          - audit
          - authenticator
          - controllerManager
          - scheduler

        oidc_provider_enabled: true

        # Allows GitHub OIDC role
        github_actions_iam_role_enabled: true
        github_actions_iam_role_attributes: ["eks"]
        github_actions_allowed_repos:
          - acme/infra

        # We recommend, at a minimum, deploying 1 managed node group,
        # with the same number of instances as availability zones (typically 3).
        managed_node_groups_enabled: true
        node_groups: # for most attributes, setting null here means use setting from node_group_defaults
          main:
            # availability_zones = null will create one autoscaling group
            # in every private subnet in the VPC
            availability_zones: null

            # Tune the desired and minimum group size according to your baseload requirements.
            # We recommend no autoscaling for the main node group, so it will
            # stay at the specified desired group size, with additional
            # capacity provided by Karpenter. Nevertheless, we recommend
            # deploying enough capacity in the node group to handle your
            # baseload requirements, and in production, we recommend you
            # have a large enough node group to handle 3/2 (1.5) times your
            # baseload requirements, to handle the loss of a single AZ.
            desired_group_size: 3 # number of instances to start with, should be >= number of AZs
            min_group_size: 3 # must be  >= number of AZs
            max_group_size: 3

            # Can only set one of ami_release_version or kubernetes_version
            # Leave both null to use latest AMI for Cluster Kubernetes version
            kubernetes_version: null # use cluster Kubernetes version
            ami_release_version: null # use latest AMI for Kubernetes version

            attributes: []
            create_before_destroy: true
            cluster_autoscaler_enabled: true
            instance_types:
              # Tune the instance type according to your baseload requirements.
              - c7a.medium
            ami_type: AL2_x86_64 # use "AL2_x86_64" for standard instances, "AL2_x86_64_GPU" for GPU instances
            node_userdata:
              # WARNING: node_userdata is alpha status and will likely change in the future.
              #          Also, it is only supported for AL2 and some Windows AMIs, not BottleRocket or AL2023.
              # Kubernetes docs: https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/
              kubelet_extra_args: >-
                --kube-reserved cpu=100m,memory=0.6Gi,ephemeral-storage=1Gi --system-reserved
                cpu=100m,memory=0.2Gi,ephemeral-storage=1Gi --eviction-hard
                memory.available<200Mi,nodefs.available<10%,imagefs.available<15%
            block_device_map:
              # EBS volume for local ephemeral storage
              # IGNORED if legacy `disk_encryption_enabled` or `disk_size` are set!
              # Use "/dev/xvda" for most instances and most Linux distributions
              # (instances without local NVMe storage); use "/dev/xvdb" for BottleRocket
              "/dev/xvda":
                ebs:
                  volume_size: 100 # number of GB
                  volume_type: gp3

            kubernetes_labels: {}
            kubernetes_taints: []
            resources_to_tag:
              - instance
              - volume
            tags: null

        # The abbreviation method used for Availability Zones in your project.
        # Used for naming resources in managed node groups.
        # Either "short" or "fixed".
        availability_zone_abbreviation_type: fixed

        cluster_private_subnets_only: true
        cluster_encryption_config_enabled: true
        cluster_endpoint_private_access: true
        cluster_endpoint_public_access: false
        cluster_log_retention_period: 90

        # List of `aws-team-roles` (in the account where the EKS cluster is deployed) to map to Kubernetes RBAC groups
        # You cannot set `system:*` groups here, except for `system:masters`.
        # The `idp:*` roles referenced here are created by the `eks/idp-roles` component.
        # While set here, the `idp:*` roles will have no effect until after
        # the `eks/idp-roles` component is applied, which must be after the
        # `eks/cluster` component is deployed.
        aws_team_roles_rbac:
          - aws_team_role: admin
            groups:
              - system:masters
          - aws_team_role: poweruser
            groups:
              - idp:poweruser
          - aws_team_role: observer
            groups:
              - idp:observer
          - aws_team_role: planner
            groups:
              - idp:observer
          - aws_team_role: terraform
            groups:
              - system:masters

        # Permission sets from AWS SSO allowing cluster access
        # See `aws-sso` component.
        aws_sso_permission_sets_rbac:
          - aws_sso_permission_set: PowerUserAccess
            groups:
              - idp:poweruser

        # Set to false if you are not using Karpenter
        karpenter_iam_role_enabled: true

        # All Fargate Profiles will use the same IAM Role when `legacy_fargate_1_role_per_profile_enabled` is set to false.
        # Recommended for all new clusters, but will damage existing clusters provisioned with the legacy component.
        legacy_fargate_1_role_per_profile_enabled: false
        # While it is possible to deploy add-ons to Fargate Profiles, it is not recommended. Use a managed node group instead.
        deploy_addons_to_fargate: false
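
The example above does not create any Fargate Profiles. If you need them, the component accepts a fargate_profiles map (see the Inputs reference below), set under the same eks/cluster vars. A minimal sketch using a hypothetical namespace:

fargate_profiles:
  my-namespace:
    kubernetes_namespace: my-namespace
    kubernetes_labels: {}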

Amazon EKS End-of-Life Dates

When picking a Kubernetes version, be sure to review the end-of-life dates for Amazon EKS. Refer to the chart below:

| cycle | release | latest | latest release | eol | extended support |
|-------|---------|--------|----------------|-----|------------------|
| 1.29 | 2024-01-23 | 1.29-eks-6 | 2024-04-18 | 2025-03-23 | 2026-03-23 |
| 1.28 | 2023-09-26 | 1.28-eks-12 | 2024-04-18 | 2024-11-26 | 2025-11-26 |
| 1.27 | 2023-05-24 | 1.27-eks-16 | 2024-04-18 | 2024-07-24 | 2025-07-24 |
| 1.26 | 2023-04-11 | 1.26-eks-17 | 2024-04-18 | 2024-06-11 | 2025-06-11 |
| 1.25 | 2023-02-21 | 1.25-eks-18 | 2024-04-18 | 2024-05-01 | 2025-05-01 |
| 1.24 | 2022-11-15 | 1.24-eks-21 | 2024-04-18 | 2024-01-31 | 2025-01-31 |
| 1.23 | 2022-08-11 | 1.23-eks-23 | 2024-04-18 | 2023-10-11 | 2024-10-11 |
| 1.22 | 2022-04-04 | 1.22-eks-14 | 2023-06-30 | 2023-06-04 | 2024-09-01 |
| 1.21 | 2021-07-19 | 1.21-eks-18 | 2023-06-09 | 2023-02-16 | 2024-07-15 |
| 1.20 | 2021-05-18 | 1.20-eks-14 | 2023-05-05 | 2022-11-01 | False |
| 1.19 | 2021-02-16 | 1.19-eks-11 | 2022-08-15 | 2022-08-01 | False |
| 1.18 | 2020-10-13 | 1.18-eks-13 | 2022-08-15 | 2022-08-15 | False |

* This Chart was generated 2024-05-12 with the eol tool. Install it with python3 -m pip install --upgrade norwegianblue and create a new table by running eol --md amazon-eks locally, or view the information by visiting the endoflife website.

You can also view the release and support timeline for the Kubernetes project itself.

Using Addons

EKS clusters support "Addons" that can be automatically installed on a cluster. Install these addons with the var.addons input.

Tip

Run the following command to see all available addons, their type, and their publisher. You can also see the URL for addons that are available through the AWS Marketplace. Replace 1.29 with the version of your cluster. See Creating an addon for more details.

EKS_K8S_VERSION=1.29 # replace with your cluster version
aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION \
  --query 'addons[].{MarketplaceProductUrl: marketplaceInformation.productUrl, Name: addonName, Owner: owner, Publisher: publisher, Type: type}' --output table

Tip

You can see which versions are available for each addon by executing the following commands. Replace 1.29 with the version of your cluster.

EKS_K8S_VERSION=1.29 # replace with your cluster version
echo "vpc-cni:" && aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION --addon-name vpc-cni \
  --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table

echo "kube-proxy:" && aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION --addon-name kube-proxy \
  --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table

echo "coredns:" && aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION --addon-name coredns \
  --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table

echo "aws-ebs-csi-driver:" && aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION --addon-name aws-ebs-csi-driver \
  --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table

echo "aws-efs-csi-driver:" && aws eks describe-addon-versions --kubernetes-version $EKS_K8S_VERSION --addon-name aws-efs-csi-driver \
  --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table

Some add-ons accept additional configuration. For example, the vpc-cni addon accepts a disableNetworking parameter. View the available configuration options (as JSON Schema) via the aws eks describe-addon-configuration command. For example:

aws eks describe-addon-configuration \
  --addon-name aws-ebs-csi-driver \
  --addon-version v1.20.0-eksbuild.1 | jq '.configurationSchema | fromjson'

You can then configure the add-on via the configuration_values input. For example:

aws-ebs-csi-driver:
  configuration_values: '{"node": {"loggingFormat": "json"}}'

Configure the addons like the following example:

# https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html
# https://docs.aws.amazon.com/eks/latest/userguide/managing-add-ons.html#creating-an-add-on
# https://aws.amazon.com/blogs/containers/amazon-eks-add-ons-advanced-configuration/
addons:
  # https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html
  # https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html
  # https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html#cni-iam-role-create-role
  # https://aws.github.io/aws-eks-best-practices/networking/vpc-cni/#deploy-vpc-cni-managed-add-on
  vpc-cni:
    addon_version: "v1.12.2-eksbuild.1" # set `addon_version` to `null` to use the latest version
  # https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html
  kube-proxy:
    addon_version: "v1.25.6-eksbuild.1" # set `addon_version` to `null` to use the latest version
  # https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html
  coredns:
    addon_version: "v1.9.3-eksbuild.2" # set `addon_version` to `null` to use the latest version
    # Override default replica count of 2, to have one in each AZ
    configuration_values: '{"replicaCount": 3}'
  # https://docs.aws.amazon.com/eks/latest/userguide/csi-iam-role.html
  # https://aws.amazon.com/blogs/containers/amazon-ebs-csi-driver-is-now-generally-available-in-amazon-eks-add-ons
  # https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html#csi-iam-role
  # https://github.com/kubernetes-sigs/aws-ebs-csi-driver
  aws-ebs-csi-driver:
    addon_version: "v1.19.0-eksbuild.2" # set `addon_version` to `null` to use the latest version
    # If you are not using [volume snapshots](https://kubernetes.io/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/#how-to-use-volume-snapshots)
    # (and you probably are not), disable the EBS Snapshotter with:
    configuration_values: '{"sidecars":{"snapshotter":{"forceEnable":false}}}'

Some addons, such as CoreDNS, require at least one node to be fully provisioned first. See issue #170 for more details. Set var.addons_depends_on to true to require the Node Groups to be provisioned before addons.

addons_depends_on: true
addons:
  coredns:
    addon_version: "v1.8.7-eksbuild.1"

Warning

Addons may not be suitable for all use-cases! For example, if you are deploying Karpenter to Fargate and using Karpenter to provision all nodes, those nodes will never be available before the cluster component is deployed, so an addon that must run on nodes (such as CoreDNS) cannot become healthy while the cluster is being deployed.

This is one of the reasons we recommend deploying a managed node group: to ensure that the addons will become fully functional during deployment of the cluster.

For more information on upgrading EKS Addons, see "How to Upgrade EKS Cluster Addons"

Adding and Configuring a new EKS Addon

The component already supports all the EKS addons shown in the configurations above. To add an EKS addon that is not already supported by the component, add it to the addons map (the addons variable):

addons:
  my-addon:
    addon_version: "..."

If the new addon requires an EKS IAM Role for Kubernetes Service Account, perform the following steps:

  • Add a file addons-custom.tf to the eks/cluster folder if not already present

  • In the file, add an IAM policy document with the permissions required for the addon, and use the eks-iam-role module to provision an IAM Role for Kubernetes Service Account for the addon:

      data "aws_iam_policy_document" "my_addon" {
        statement {
          sid       = "..."
          effect    = "Allow"
          resources = ["..."]
    
          actions = [
            "...",
            "..."
          ]
        }
      }
    
      module "my_addon_eks_iam_role" {
        source  = "cloudposse/eks-iam-role/aws"
        version = "2.1.0"
    
        eks_cluster_oidc_issuer_url = local.eks_cluster_oidc_issuer_url
    
        service_account_name      = "..."
        service_account_namespace = "..."
    
        aws_iam_policy_document = [one(data.aws_iam_policy_document.my_addon[*].json)]
    
        context = module.this.context
      }

    For examples of how to configure the IAM role and IAM permissions for EKS addons, see addons.tf.

  • Add a file additional-addon-support_override.tf to the eks/cluster folder if not already present

  • In the file, add the IAM Role for Kubernetes Service Account for the addon to the overridable_additional_addon_service_account_role_arn_map map:

      locals {
        overridable_additional_addon_service_account_role_arn_map = {
          my-addon = module.my_addon_eks_iam_role.service_account_role_arn
        }
      }
  • This map will override the default map in the additional-addon-support.tf file, and will be merged into the final map together with the default EKS addons vpc-cni and aws-ebs-csi-driver (for which this component already configures IAM Roles for Kubernetes Service Accounts)

  • Follow the instructions in the additional-addon-support.tf file if the addon may need to be deployed to Fargate, or has dependencies that Terraform cannot detect automatically.

Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.3.0 |
| aws | >= 4.9.0 |
| random | >= 3.0 |

Providers

| Name | Version |
|------|---------|
| aws | >= 4.9.0 |
| random | >= 3.0 |

Modules

| Name | Source | Version |
|------|--------|---------|
| aws_ebs_csi_driver_eks_iam_role | cloudposse/eks-iam-role/aws | 2.2.1 |
| aws_ebs_csi_driver_fargate_profile | cloudposse/eks-fargate-profile/aws | 1.3.0 |
| aws_efs_csi_driver_eks_iam_role | cloudposse/eks-iam-role/aws | 2.2.1 |
| coredns_fargate_profile | cloudposse/eks-fargate-profile/aws | 1.3.0 |
| eks_cluster | cloudposse/eks-cluster/aws | 4.6.0 |
| fargate_pod_execution_role | cloudposse/eks-fargate-profile/aws | 1.3.0 |
| fargate_profile | cloudposse/eks-fargate-profile/aws | 1.3.0 |
| iam_arns | ../../account-map/modules/roles-to-principals | n/a |
| iam_roles | ../../account-map/modules/iam-roles | n/a |
| karpenter_label | cloudposse/label/null | 0.25.0 |
| region_node_group | ./modules/node_group_by_region | n/a |
| this | cloudposse/label/null | 0.25.0 |
| utils | cloudposse/utils/aws | 1.4.0 |
| vpc | cloudposse/stack-config/yaml//modules/remote-state | 1.8.0 |
| vpc_cni_eks_iam_role | cloudposse/eks-iam-role/aws | 2.2.1 |
| vpc_ingress | cloudposse/stack-config/yaml//modules/remote-state | 1.8.0 |

Resources

| Name | Type |
|------|------|
| aws_iam_instance_profile.default | resource |
| aws_iam_policy.ipv6_eks_cni_policy | resource |
| aws_iam_role.karpenter | resource |
| aws_iam_role_policy_attachment.amazon_ec2_container_registry_readonly | resource |
| aws_iam_role_policy_attachment.amazon_eks_worker_node_policy | resource |
| aws_iam_role_policy_attachment.amazon_ssm_managed_instance_core | resource |
| aws_iam_role_policy_attachment.aws_ebs_csi_driver | resource |
| aws_iam_role_policy_attachment.aws_efs_csi_driver | resource |
| aws_iam_role_policy_attachment.ipv6_eks_cni_policy | resource |
| aws_iam_role_policy_attachment.vpc_cni | resource |
| random_pet.camel_case_warning | resource |
| aws_availability_zones.default | data source |
| aws_iam_policy_document.assume_role | data source |
| aws_iam_policy_document.ipv6_eks_cni_policy | data source |
| aws_iam_policy_document.vpc_cni_ipv6 | data source |
| aws_iam_roles.sso_roles | data source |
| aws_partition.current | data source |

Inputs

Name Description Type Default Required
access_config Access configuration for the EKS cluster
object({
authentication_mode = optional(string, "API")
bootstrap_cluster_creator_admin_permissions = optional(bool, false)
})
{} no
additional_tag_map Additional key-value pairs to add to each map in tags_as_list_of_maps. Not added to tags or id.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
map(string) {} no
addons Manages EKS addons resources
map(object({
enabled = optional(bool, true)
addon_version = optional(string, null)
# configuration_values is a JSON string, such as '{"computeType": "Fargate"}'.
configuration_values = optional(string, null)
# Set default resolve_conflicts to OVERWRITE because it is required on initial installation of
# add-ons that have self-managed versions installed by default (e.g. vpc-cni, coredns), and
# because any custom configuration that you would want to preserve should be managed by Terraform.
resolve_conflicts_on_create = optional(string, "OVERWRITE")
resolve_conflicts_on_update = optional(string, "OVERWRITE")
service_account_role_arn = optional(string, null)
create_timeout = optional(string, null)
update_timeout = optional(string, null)
delete_timeout = optional(string, null)
}))
{} no
addons_depends_on If set true (recommended), all addons will depend on managed node groups provisioned by this component and therefore not be installed until nodes are provisioned.
See issue #170 for more details.
bool true no
allow_ingress_from_vpc_accounts List of account contexts to pull VPC ingress CIDR and add to cluster security group.

e.g.

{
environment = "ue2",
stage = "auto",
tenant = "core"
}
any [] no
allowed_cidr_blocks List of CIDR blocks to be allowed to connect to the EKS cluster list(string) [] no
allowed_security_groups List of Security Group IDs to be allowed to connect to the EKS cluster list(string) [] no
attributes ID element. Additional attributes (e.g. workers or cluster) to add to id,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the delimiter
and treated as a single ID element.
list(string) [] no
availability_zone_abbreviation_type Type of Availability Zone abbreviation (either fixed or short) to use in names. See https://github.com/cloudposse/terraform-aws-utils for details. string "fixed" no
availability_zone_ids List of Availability Zones IDs where subnets will be created. Overrides availability_zones.
Can be the full name, e.g. use1-az1, or just the part after the AZ ID region code, e.g. -az1,
to allow reusable values across regions. Consider contention for resources and spot pricing in each AZ when selecting.
Useful in some regions when using only some AZs and you want to use the same ones across multiple accounts.
list(string) [] no
availability_zones AWS Availability Zones in which to deploy multi-AZ resources.
Ignored if availability_zone_ids is set.
Can be the full name, e.g. us-east-1a, or just the part after the region, e.g. a to allow reusable values across regions.
If not provided, resources will be provisioned in every zone with a private subnet in the VPC.
list(string) [] no
aws_ssm_agent_enabled Set true to attach the required IAM policy for AWS SSM agent to each EC2 instance's IAM Role bool false no
aws_sso_permission_sets_rbac (Not Recommended): AWS SSO (IAM Identity Center) permission sets in the EKS deployment account to add to aws-auth ConfigMap.
Unfortunately, aws-auth ConfigMap does not support SSO permission sets, so we map the generated
IAM Role ARN corresponding to the permission set at the time Terraform runs. This is subject to change
when any changes are made to the AWS SSO configuration, invalidating the mapping, and requiring a
terraform apply in this project to update the aws-auth ConfigMap and restore access.
list(object({
aws_sso_permission_set = string
groups = list(string)
}))
[] no
aws_team_roles_rbac List of aws-team-roles (in the target AWS account) to map to Kubernetes RBAC groups.
list(object({
aws_team_role = string
groups = list(string)
}))
[] no
cluster_encryption_config_enabled Set to true to enable Cluster Encryption Configuration bool true no
cluster_encryption_config_kms_key_deletion_window_in_days Cluster Encryption Config KMS Key Resource argument - key deletion windows in days post destruction number 10 no
cluster_encryption_config_kms_key_enable_key_rotation Cluster Encryption Config KMS Key Resource argument - enable kms key rotation bool true no
cluster_encryption_config_kms_key_id KMS Key ID to use for cluster encryption config string "" no
cluster_encryption_config_kms_key_policy Cluster Encryption Config KMS Key Resource argument - key policy string null no
cluster_encryption_config_resources Cluster Encryption Config Resources to encrypt, e.g. ["secrets"] list(string)
[
"secrets"
]
no
cluster_endpoint_private_access Indicates whether the Amazon EKS private API server endpoint is enabled. The AWS EKS resource default is false. bool false no
cluster_endpoint_public_access Indicates whether the Amazon EKS public API server endpoint is enabled. The AWS EKS resource default is true. bool true no
cluster_kubernetes_version Desired Kubernetes master version. If you do not specify a value, the latest available version is used string null no
cluster_log_retention_period Number of days to retain cluster logs. Requires enabled_cluster_log_types to be set. See https://docs.aws.amazon.com/en_us/eks/latest/userguide/control-plane-logs.html. number 0 no
cluster_private_subnets_only Whether to use only private subnets, or both public and private subnets bool false no
color The cluster stage represented by a color; e.g. blue, green string "" no
context Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as null to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
any
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
no
delimiter Delimiter to be used between ID elements.
Defaults to - (hyphen). Set to "" to use no delimiter at all.
string null no
deploy_addons_to_fargate Set to true (not recommended) to deploy addons to Fargate instead of initial node pool bool false no
descriptor_formats Describe additional descriptors to be output in the descriptors output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
{<br/> format = string<br/> labels = list(string)<br/>}
(Type is any so the map values can later be enhanced to provide additional options.)
format is a Terraform format string to be passed to the format() function.
labels is a list of labels, in order, to pass to format() function.
Label values will be normalized before being passed to format() so they will be
identical to how they appear in id.
Default is {} (descriptors output will be empty).
any {} no
enabled Set to false to prevent the module from creating any resources bool null no
enabled_cluster_log_types A list of the desired control plane logging to enable. For more information, see https://docs.aws.amazon.com/en_us/eks/latest/userguide/control-plane-logs.html. Possible values [api, audit, authenticator, controllerManager, scheduler] list(string) [] no
environment ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT' string null no
fargate_profile_iam_role_kubernetes_namespace_delimiter Delimiter for the Kubernetes namespace in the IAM Role name for Fargate Profiles string "-" no
fargate_profile_iam_role_permissions_boundary If provided, all Fargate Profiles IAM roles will be created with this permissions boundary attached string null no
fargate_profiles Fargate Profiles config
map(object({
kubernetes_namespace = string
kubernetes_labels = map(string)
}))
{} no
id_length_limit Limit id to this many characters (minimum 6).
Set to 0 for unlimited length.
Set to null to keep the existing setting, which defaults to 0.
Does not affect id_full.
number null no
karpenter_iam_role_enabled Flag to enable/disable creation of IAM role for EC2 Instance Profile that is attached to the nodes launched by Karpenter bool false no
label_key_case Controls the letter case of the tags keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the tags input.
Possible values: lower, title, upper.
Default value: title.
string null no
label_order The order in which the labels (ID elements) appear in the id.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
list(string) null no
label_value_case Controls the letter case of ID elements (labels) as included in id,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the tags input.
Possible values: lower, title, upper and none (no transformation).
Set this to title and set delimiter to "" to yield Pascal Case IDs.
Default value: lower.
string null no
labels_as_tags Set of labels (ID elements) to include as tags in the tags output.
Default is to include all labels.
Tags with empty values will not be included in the tags output.
Set to [] to suppress all generated tags.
Notes:
The value of the name tag, if included, will be the id, not the name.
Unlike other null-label inputs, the initial setting of labels_as_tags cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
set(string)
[
"default"
]
no
legacy_do_not_create_karpenter_instance_profile Obsolete: The issues this was meant to mitigate were fixed in AWS Terraform Provider v5.43.0
and Karpenter v0.33.0. This variable will be removed in a future release.
Remove this input from your configuration and leave it at default.
Old description: When true (the default), suppresses creation of the IAM Instance Profile
for nodes launched by Karpenter, to preserve the legacy behavior of
the eks/karpenter component creating it.
Set to false to enable creation of the IAM Instance Profile, which
ensures that both the role and the instance profile have the same lifecycle,
and avoids AWS Provider issue #32671.
Use in conjunction with eks/karpenter component legacy_create_karpenter_instance_profile.
bool true no
legacy_fargate_1_role_per_profile_enabled Set to false for new clusters to create a single Fargate Pod Execution role for the cluster.
Set to true for existing clusters to preserve the old behavior of creating
a Fargate Pod Execution role for each Fargate Profile.
bool true no
managed_node_groups_enabled Set false to prevent the creation of EKS managed node groups. bool true no
map_additional_iam_roles Additional IAM roles to grant access to the cluster.
WARNING: Full Role ARN, including path, is required for rolearn.
In earlier versions (with aws-auth ConfigMap), only the path
had to be removed from the Role ARN. The path is now required.
username is now ignored. This input is planned to be replaced
in a future release with a more flexible input structure that consolidates
map_additional_iam_roles and map_additional_iam_users.
list(object({
rolearn = string
username = optional(string)
groups = list(string)
}))
[] no
map_additional_iam_users Additional IAM users to grant access to the cluster.
username is now ignored. This input is planned to be replaced
in a future release with a more flexible input structure that consolidates
map_additional_iam_roles and map_additional_iam_users.
list(object({
userarn = string
username = optional(string)
groups = list(string)
}))
[] no
map_additional_worker_roles (Deprecated) AWS IAM Role ARNs of unmanaged Linux worker nodes to grant access to the EKS cluster.
In earlier versions, this could be used to grant access to worker nodes of any type
that were not managed by the EKS cluster. Now EKS requires that unmanaged worker nodes
be classified as Linux or Windows servers, so this input is temporarily retained
with the assumption that all worker nodes are Linux servers. (It is likely that
earlier versions did not work properly with Windows worker nodes anyway.)
This input is deprecated and will be removed in a future release.
In the future, this component will either have a way to separate Linux and Windows worker nodes,
or drop support for unmanaged worker nodes entirely.
list(string) [] no
name ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a tag.
The "name" tag is set to the full id string. There is no tag with the value of the name input.
string null no
namespace ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique string null no
node_group_defaults Defaults for node groups in the cluster
object({
ami_release_version = optional(string, null)
ami_type = optional(string, null)
attributes = optional(list(string), null)
availability_zones = optional(list(string)) # set to null to use var.availability_zones
cluster_autoscaler_enabled = optional(bool, null)
create_before_destroy = optional(bool, null)
desired_group_size = optional(number, null)
instance_types = optional(list(string), null)
kubernetes_labels = optional(map(string), {})
kubernetes_taints = optional(list(object({
key = string
value = string
effect = string
})), [])
node_userdata = optional(object({
before_cluster_joining_userdata = optional(string)
bootstrap_extra_args = optional(string)
kubelet_extra_args = optional(string)
after_cluster_joining_userdata = optional(string)
}), {})
kubernetes_version = optional(string, null) # set to null to use cluster_kubernetes_version
max_group_size = optional(number, null)
min_group_size = optional(number, null)
resources_to_tag = optional(list(string), null)
tags = optional(map(string), null)

# block_device_map copied from cloudposse/terraform-aws-eks-node-group
# Keep in sync via copy and paste, but make optional
# Most of the time you want "/dev/xvda". For BottleRocket, use "/dev/xvdb".
block_device_map = optional(map(object({
no_device = optional(bool, null)
virtual_name = optional(string, null)
ebs = optional(object({
delete_on_termination = optional(bool, true)
encrypted = optional(bool, true)
iops = optional(number, null)
kms_key_id = optional(string, null)
snapshot_id = optional(string, null)
throughput = optional(number, null) # for gp3, MiB/s, up to 1000
volume_size = optional(number, 50) # disk size in GB
volume_type = optional(string, "gp3")

# Catch common camel case typos. These have no effect, they just generate better errors.
# It would be nice to actually use these, but volumeSize in particular is a number here
# and in most places it is a string with a unit suffix (e.g. 20Gi)
# Without these defined, they would be silently ignored and the default values would be used instead,
# which is difficult to debug.
deleteOnTermination = optional(any, null)
kmsKeyId = optional(any, null)
snapshotId = optional(any, null)
volumeSize = optional(any, null)
volumeType = optional(any, null)
}))
})), null)

# DEPRECATED: disk_encryption_enabled is DEPRECATED, use block_device_map instead.
disk_encryption_enabled = optional(bool, null)
# DEPRECATED: disk_size is DEPRECATED, use block_device_map instead.
disk_size = optional(number, null)
})
{
"block_device_map": {
"/dev/xvda": {
"ebs": {
"encrypted": true,
"volume_size": 20,
"volume_type": "gp2"
}
}
},
"desired_group_size": 1,
"instance_types": [
"t3.medium"
],
"kubernetes_version": null,
"max_group_size": 100
}
no
node_groups List of objects defining a node group for the cluster
map(object({
# EKS AMI version to use, e.g. "1.16.13-20200821" (no "v").
ami_release_version = optional(string, null)
# Type of Amazon Machine Image (AMI) associated with the EKS Node Group
ami_type = optional(string, null)
# Additional attributes (e.g. 1) for the node group
attributes = optional(list(string), null)
# will create 1 auto scaling group in each specified availability zone
# or all AZs with subnets if none are specified anywhere
availability_zones = optional(list(string), null)
# Whether to enable Node Group to scale its AutoScaling Group
cluster_autoscaler_enabled = optional(bool, null)
# True to create new node_groups before deleting old ones, avoiding a temporary outage
create_before_destroy = optional(bool, null)
# Desired number of worker nodes when initially provisioned
desired_group_size = optional(number, null)
# Set of instance types associated with the EKS Node Group. Terraform will only perform drift detection if a configuration value is provided.
instance_types = optional(list(string), null)
# Key-value mapping of Kubernetes labels. Only labels that are applied with the EKS API are managed by this argument. Other Kubernetes labels applied to the EKS Node Group will not be managed
kubernetes_labels = optional(map(string), null)
# List of objects describing Kubernetes taints.
kubernetes_taints = optional(list(object({
key = string
value = string
effect = string
})), null)
node_userdata = optional(object({
before_cluster_joining_userdata = optional(string)
bootstrap_extra_args = optional(string)
kubelet_extra_args = optional(string)
after_cluster_joining_userdata = optional(string)
}), {})
# Desired Kubernetes master version. If you do not specify a value, the latest available version is used
kubernetes_version = optional(string, null)
# The maximum size of the AutoScaling Group
max_group_size = optional(number, null)
# The minimum size of the AutoScaling Group
min_group_size = optional(number, null)
# List of auto-launched resource types to tag
resources_to_tag = optional(list(string), null)
tags = optional(map(string), null)

# block_device_map copied from cloudposse/terraform-aws-eks-node-group
# Keep in sync via copy and paste, but make optional.
# Most of the time you want "/dev/xvda". For BottleRocket, use "/dev/xvdb".
block_device_map = optional(map(object({
no_device = optional(bool, null)
virtual_name = optional(string, null)
ebs = optional(object({
delete_on_termination = optional(bool, true)
encrypted = optional(bool, true)
iops = optional(number, null)
kms_key_id = optional(string, null)
snapshot_id = optional(string, null)
throughput = optional(number, null) # for gp3, MiB/s, up to 1000
volume_size = optional(number, 20) # Disk size in GB
volume_type = optional(string, "gp3")

# Catch common camel case typos. These have no effect, they just generate better errors.
# It would be nice to actually use these, but volumeSize in particular is a number here
# and in most places it is a string with a unit suffix (e.g. 20Gi)
# Without these defined, they would be silently ignored and the default values would be used instead,
# which is difficult to debug.
deleteOnTermination = optional(any, null)
kmsKeyId = optional(any, null)
snapshotId = optional(any, null)
volumeSize = optional(any, null)
volumeType = optional(any, null)
}))
})), null)

# DEPRECATED:
# Enable disk encryption for the created launch template (if we aren't provided with an existing launch template)
# DEPRECATED: disk_encryption_enabled is DEPRECATED, use block_device_map instead.
disk_encryption_enabled = optional(bool, null)
# Disk size in GiB for worker nodes. Terraform will only perform drift detection if a configuration value is provided.
# DEPRECATED: disk_size is DEPRECATED, use block_device_map instead.
disk_size = optional(number, null)

}))
{} no
oidc_provider_enabled Create an IAM OIDC identity provider for the cluster, so that you can create IAM roles to associate with a service account in the cluster, instead of using kiam or kube2iam. For more information, see https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html bool true no
public_access_cidrs Indicates which CIDR blocks can access the Amazon EKS public API server endpoint when enabled. EKS defaults this to a list with 0.0.0.0/0. list(string)
[
"0.0.0.0/0"
]
no
regex_replace_chars Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, "/[^a-zA-Z0-9-]/" is used to remove all characters other than hyphens, letters and digits.
string null no
region AWS Region string n/a yes
stage ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release' string null no
subnet_type_tag_key The tag used to find the private subnets by availability zone. If null, it will be looked up in the vpc component outputs. string null no
tags Additional tags (e.g. {'BusinessUnit': 'XYZ'}).
Neither the tag keys nor the tag values will be modified by this module.
map(string) {} no
tenant ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for string null no
vpc_component_name The name of the vpc component string "vpc" no
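
None of the examples above set access_config, which controls the cluster authentication mode and whether the cluster creator is granted admin permissions. A minimal sketch that sets the component defaults explicitly:

components:
  terraform:
    eks/cluster:
      vars:
        access_config:
          # "API" and false are the component defaults shown in the Inputs above
          authentication_mode: "API"
          bootstrap_cluster_creator_admin_permissions: false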

Outputs

| Name | Description |
|------|-------------|
| availability_zones | Availability Zones in which the cluster is provisioned |
| eks_addons_versions | Map of enabled EKS Addons names and versions |
| eks_auth_worker_roles | List of worker IAM roles that were included in the auth-map ConfigMap |
| eks_cluster_arn | The Amazon Resource Name (ARN) of the cluster |
| eks_cluster_certificate_authority_data | The Kubernetes cluster certificate authority data |
| eks_cluster_endpoint | The endpoint for the Kubernetes API server |
| eks_cluster_id | The name of the cluster |
| eks_cluster_identity_oidc_issuer | The OIDC Identity issuer for the cluster |
| eks_cluster_managed_security_group_id | Security Group ID that was created by EKS for the cluster. EKS creates a Security Group and applies it to the ENI that is attached to EKS Control Plane master nodes and to any managed workloads |
| eks_cluster_version | The Kubernetes server version of the cluster |
| eks_managed_node_workers_role_arns | List of ARNs for workers in managed node groups |
| eks_node_group_arns | List of all the node group ARNs in the cluster |
| eks_node_group_count | Count of the worker nodes |
| eks_node_group_ids | EKS Cluster name and EKS Node Group name separated by a colon |
| eks_node_group_role_names | List of worker node IAM role names |
| eks_node_group_statuses | Status of the EKS Node Group |
| fargate_profile_role_arns | Fargate Profile Role ARNs |
| fargate_profile_role_names | Fargate Profile Role names |
| fargate_profiles | Fargate Profiles |
| karpenter_iam_role_arn | Karpenter IAM Role ARN |
| karpenter_iam_role_name | Karpenter IAM Role name |
| vpc_cidr | The CIDR of the VPC where this cluster is deployed |

Related How-to Guides

References

Tip

👽 Use Atmos with Terraform

Cloud Posse uses atmos to easily orchestrate multiple environments using Terraform.
It works with GitHub Actions, Atlantis, or Spacelift.

Watch demo of using Atmos with Terraform
Example of running atmos to manage infrastructure from our Quick Start tutorial.

Related Projects

Check out these related projects.

  • Cloud Posse Terraform Modules - Our collection of reusable Terraform modules used by our reference architectures.
  • Atmos - Atmos is like docker-compose but for your infrastructure

Tip

Use Terraform Reference Architectures for AWS

Use Cloud Posse's ready-to-go terraform architecture blueprints for AWS to get up and running quickly.

✅ We build it together with your team.
✅ Your team owns everything.
✅ 100% Open Source and backed by fanatical support.

Request Quote

📚 Learn More

Cloud Posse is the leading DevOps Accelerator for funded startups and enterprises.

Your team can operate like a pro today.

Ensure that your team succeeds by using Cloud Posse's proven process and turnkey blueprints. Plus, we stick around until you succeed.

Day-0: Your Foundation for Success

  • Reference Architecture. You'll get everything you need from the ground up built using 100% infrastructure as code.
  • Deployment Strategy. Adopt a proven deployment strategy with GitHub Actions, enabling automated, repeatable, and reliable software releases.
  • Site Reliability Engineering. Gain total visibility into your applications and services with Datadog, ensuring high availability and performance.
  • Security Baseline. Establish a secure environment from the start, with built-in governance, accountability, and comprehensive audit logs, safeguarding your operations.
  • GitOps. Empower your team to manage infrastructure changes confidently and efficiently through Pull Requests, leveraging the full power of GitHub Actions.

Request Quote

Day-2: Your Operational Mastery

  • Training. Equip your team with the knowledge and skills to confidently manage the infrastructure, ensuring long-term success and self-sufficiency.
  • Support. Benefit from seamless communication over Slack with our experts, ensuring you have the support you need, whenever you need it.
  • Troubleshooting. Access expert assistance to quickly resolve any operational challenges, minimizing downtime and maintaining business continuity.
  • Code Reviews. Enhance your team's code quality with our expert feedback, fostering continuous improvement and collaboration.
  • Bug Fixes. Rely on our team to troubleshoot and resolve any issues, ensuring your systems run smoothly.
  • Migration Assistance. Accelerate your migration process with our dedicated support, minimizing disruption and speeding up time-to-value.
  • Customer Workshops. Engage with our team in weekly workshops, gaining insights and strategies to continuously improve and innovate.

Request Quote

✨ Contributing

This project is under active development, and we encourage contributions from our community.

Many thanks to our outstanding contributors:

For 🐛 bug reports & feature requests, please use the issue tracker.

In general, PRs are welcome. We follow the typical "fork-and-pull" Git workflow.

  1. Review our Code of Conduct and Contributor Guidelines.
  2. Fork the repo on GitHub
  3. Clone the project to your own machine
  4. Commit changes to your own branch
  5. Push your work back up to your fork
  6. Submit a Pull Request so that we can review your changes

NOTE: Be sure to merge the latest changes from "upstream" before making a pull request!

🌎 Slack Community

Join our Open Source Community on Slack. It's FREE for everyone! Our "SweetOps" community is where you get to talk with others who share a similar vision for how to rollout and manage infrastructure. This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build totally sweet infrastructure.

📰 Newsletter

Sign up for our newsletter and join 3,000+ DevOps engineers, CTOs, and founders who get insider access to the latest DevOps trends, so you can always stay in the know. Dropped straight into your Inbox every week, and usually a 5-minute read.

📆 Office Hours

Join us every Wednesday via Zoom for your weekly dose of insider DevOps trends, AWS news and Terraform insights, all sourced from our SweetOps community, plus a live Q&A that you can't find anywhere else. It's FREE for everyone!

License

Preamble to the Apache License, Version 2.0

Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.

Trademarks

All other trademarks referenced herein are the property of their respective owners.


Copyright © 2017-2025 Cloud Posse, LLC
