# Releases: SPHTech-Platform/terraform-aws-eks
## v0.20.2

### Bug Fix

- Set the variable `enable_cluster_creator_admin_permissions` to `true` by default to support newly created clusters. However, during the migration process, this variable should be set to `false`, because EKS itself adds admin records to the access entries. by @uchinda-sph in #138
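When migrating an existing cluster, the flag from the note above can be overridden in the module block. A minimal sketch (only the relevant argument is shown; all other module arguments are omitted):

```hcl
module "eks" {
  source  = "SPHTech-Platform/eks/aws"
  version = ">= 0.20.2"

  # Existing clusters already receive admin access entries from EKS itself,
  # so skip creating one for the cluster creator to avoid a conflict.
  enable_cluster_creator_admin_permissions = false
}
```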
Full Changelog: v0.20.1...v0.20.2
## v0.20.1

### Bug Fix

- Introduced a new variable `enable_pod_identity_for_eks_addons` to disable pod identity for EKS add-ons. The default value is set to `false` because the Terraform AWS provider still doesn't support pod identity associations for EKS add-ons. by @uchinda-sph in #137
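Until the provider gains that support, the add-ons keep using IRSA; the default can also be pinned explicitly for clarity. A sketch (other module arguments omitted):

```hcl
module "eks" {
  source  = "SPHTech-Platform/eks/aws"
  version = ">= 0.20.1"

  # Explicitly keep IRSA for EKS add-ons; pod identity associations for
  # add-ons are not yet supported by the Terraform AWS provider.
  enable_pod_identity_for_eks_addons = false
}
```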
Full Changelog: v0.20.0...v0.20.1
## v0.20.0

### BREAKING CHANGES

- Replace the `kubectl` provider `gavinbunney/kubectl` with `alekc/kubectl`:
  - Use the `state replace-provider` command to update all existing resources in your state: `terraform state replace-provider gavinbunney/kubectl alekc/kubectl`
- Replace the use of the `aws-auth` ConfigMap with the EKS cluster access entry.
- Replace the use of `irsa` for the `vpc_cni` and `ebs_csi` add-ons with `pod_identity`. These are currently the only two add-ons supported with `pod_identity`.
  - Note that `pod_identity` will NOT enable default settings for Karpenter due to its deployment on Fargate. `pod_identity` DOES NOT work with Fargate Pods.
- Replace the use of `terraform-kubectl-helm-crds` to install `karpenter-crd` with the `karpenter-crd` Helm chart.
- Support for cluster access management has been added, with the default authentication mode set to `API`. This will break current clusters with the authentication mode set to `CONFIG_MAP`. You need to move gradually from `CONFIG_MAP` to `API_AND_CONFIG_MAP`, and then from `API_AND_CONFIG_MAP` to `API`. Please follow the migration process for a smooth transition.
- The Karpenter NodePool `consolidationPolicy` value `WhenUnderutilized` has been renamed to `WhenEmptyOrUnderutilized`.
### Backwards Compatible Changes

- `aws-auth` ConfigMap
  - The `aws-auth` ConfigMap resources have been moved to a standalone sub-module in the community module, and the `aws-auth` sub-module will be removed entirely from the project in the next major version release.
  - If you wish to use the `aws-auth` ConfigMap, you will need to set `authentication_mode = "CONFIG_MAP"` explicitly.
  - The module currently handles the `aws-auth` ConfigMap to support a smooth transition of `authentication_mode` away from `CONFIG_MAP`; this portion will be dropped in the next major release.
- Karpenter CRD Installation
  - With the change in CRD installation to the `karpenter-crd` Helm chart, the current `terraform-kubectl-helm-crds` resources will be removed. This will interrupt current workloads, as the created `NodePool`, `EC2NodeClass`, and `NodeClaim` resources will be removed, disrupting the worker nodes.
  - If Karpenter has been installed from a 0.19.x module version, set the `karpenter_crd_helm_install` variable to `false` for an uninterrupted upgrade: `karpenter_crd_helm_install = false`.
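For clusters upgraded from 0.19.x, setting `karpenter_crd_helm_install = false` keeps the existing CRDs in place. A sketch (other module arguments omitted):

```hcl
module "eks" {
  source  = "SPHTech-Platform/eks/aws"
  version = ">= 0.20.0"

  # Keep the CRDs previously installed via terraform-kubectl-helm-crds so the
  # existing NodePool/EC2NodeClass/NodeClaim resources are not deleted.
  karpenter_crd_helm_install = false
}
```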
### Additional Changes

- Disabled the Bottlerocket operator by setting `brupop_enabled = false` when `karpenter` is used as the cluster autoscaler.
- Updated the Karpenter API from `karpenter.sh/v1beta1` to `karpenter.sh/v1`.
- Introduced a new variable `enable_pod_identity_for_eks_addons` to disable pod identity for EKS add-ons. The default value is set to `false` because the Terraform AWS provider still doesn't support pod identity associations for EKS add-ons.
- Set the variable `enable_cluster_creator_admin_permissions` to `true` by default to support newly created clusters. However, during the migration process, this variable should be set to `false`, because EKS itself adds admin records to the access entries.
### Variable Changes

- Added variables:
  - `authentication_mode`
  - `karpenter_crd_helm_install`
  - `karpenter_crd_chart_version`
  - `enable_v1_permissions_for_karpenter`
  - `karpenter_upgrade`
  - `enable_pod_identity_for_eks_addons`
  - `enable_pod_identity_for_karpenter`
  - `access_entries`
  - `enable_cluster_creator_admin_permissions`
  - `migrate_aws_auth_to_access_entry`
- Karpenter:
  - `karpenter_crd_helm_install`
  - `karpenter_crd_namespace`
  - `karpenter_crd_release_name`
  - `karpenter_crd_chart_name`
  - `karpenter_crd_chart_repository`
  - `karpenter_crd_chart_version`
  - `enable_v1_permissions`
  - `cluster_ip_family`
  - `enable_irsa`
  - `oidc_provider_arn`
  - `create_pod_identity_association`
  - `enable_pod_identity`
  - `create_access_entry`
  - `access_entry_type`
## Upgrade from v0.19.x to v0.20.x

Make all of the following changes in the parent module that uses this module as a sub-module.

- Set the eks and eks essentials module versions to `0.20.x`:

  ```hcl
  module "eks" {
    source  = "SPHTech-Platform/eks/aws"
    version = ">= 0.20.0"
  }

  module "eks_essentials" {
    source  = "SPHTech-Platform/eks/aws//modules/essentials"
    version = ">= 0.20.0"
  }
  ```
- Change the `kubectl` Terraform provider from `gavinbunney/kubectl` to `alekc/kubectl`:

  ```diff
  terraform {
    required_version = ">= 1.4"

    required_providers {
      aws = {
        source  = "hashicorp/aws"
        version = ">= 5.26"
      }
      helm = {
        source  = "hashicorp/helm"
        version = ">= 2.11"
      }
      kubernetes = {
        source  = "hashicorp/kubernetes"
        version = ">= 2.23"
      }
      kubectl = {
  -     source  = "gavinbunney/kubectl"
  -     version = ">= 1.14"
  +     source  = "alekc/kubectl"
  +     version = ">= 2.0"
      }
    }
  }
  ```
- Update all existing resources in your state created by the old `kubectl` provider:

  ```shell
  terraform state replace-provider gavinbunney/kubectl alekc/kubectl
  ```
- Add values to the variables as follows in the eks module:

  ```diff
  module "eks" {
    source  = "SPHTech-Platform/eks/aws"
    version = ">= 0.20.0"

  + authentication_mode                      = "API_AND_CONFIG_MAP"
  + karpenter_crd_helm_install               = false
  + migrate_aws_auth_to_access_entry         = true
  + enable_cluster_creator_admin_permissions = false
  +
  + access_entries = {
  +   admin = {
  +     principal_arn = one(data.aws_iam_roles.sso_admin_roles.arns)
  +     policy_associations = {
  +       admin = {
  +         policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
  +         access_scope = {
  +           type = "cluster"
  +         }
  +       }
  +     }
  +   }
  +
  +   developer = {
  +     principal_arn = one(data.aws_iam_roles.sso_developer_roles.arns)
  +     policy_associations = {
  +       edit = {
  +         policy_arn = var.app_metadata.env == "prd" ? "arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy" : "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy"
  +         access_scope = {
  +           type = "namespace"
  +           namespaces = [
  +             "default",
  +             "drupal-${var.app_metadata.env}",
  +           ]
  +         }
  +       }
  +       view = {
  +         policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminViewPolicy"
  +         access_scope = {
  +           type = "cluster"
  +         }
  +       }
  +     }
  +   }
  + }
  }
  ```
- If the Bottlerocket update agent is enabled from the eks essentials module, set it to `false`, as it is not required with `karpenter`:

  ```diff
  module "eks_essentials" {
    source  = "SPHTech-Platform/eks/aws//modules/essentials"
    version = ">= 0.20.0"

  - brupop_enabled = true
  + brupop_enabled = false
  }
  ```
- Push these changes and apply them with Terraform. You may need to apply twice if some resources fail to create on the first run.
- After all changes are applied successfully, set the `migrate_aws_auth_to_access_entry` variable to `false` OR remove it from the code:

  ```diff
  module "eks" {
    source  = "SPHTech-Platform/eks/aws"
    version = ">= 0.20.0"

    authentication_mode                      = "API_AND_CONFIG_MAP"
    karpenter_crd_helm_install               = false
    enable_cluster_creator_admin_permissions = false
  - migrate_aws_auth_to_access_entry         = true
  + migrate_aws_auth_to_access_entry         = false

    access_entries = {
      # ... unchanged from the previous step ...
    }
  }
  ```
- After the changes are applied, verify the cluster and application functionality as usual.
- If you later need to change the cluster `authentication_mode` to `API`, you can do so by simply changing the value of the `authentication_mode` variable:

  ```diff
  module "eks" {
    source  = "SPHTech-Platform/eks/aws"
    version = ">= 0.20.0"

  - authentication_mode              = "API_AND_CONFIG_MAP"
  + authentication_mode              = "API"
    karpenter_crd_helm_install       = false
    migrate_aws_auth_to_access_entry = false

    access_entries = {
      admin = {
        principal_arn = one...
  ```
## v0.19.7

### What's Changed

- feat: add a Karpenter upgrade process for zero downtime. Set the variable `karpenter_upgrade = true` before performing the Karpenter upgrade, and set it back to `false` after the upgrade by @uchinda-sph in #135
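The toggle can be set temporarily in the module block. A sketch (other module arguments omitted):

```hcl
module "eks" {
  source  = "SPHTech-Platform/eks/aws"
  version = ">= 0.19.7"

  # Set to true only for the duration of the Karpenter upgrade,
  # then revert to false once the upgrade has completed.
  karpenter_upgrade = true
}
```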
Full Changelog: v0.19.6...v0.19.7
## v0.19.6

### What's Changed

- fix: nodepool template
- feat: bump up Karpenter version to 0.37.5 by @uchinda-sph in #134
Full Changelog: v0.19.5...v0.19.6
## v0.19.5

### What's Changed

- feat: update module to match the upstream module
- feat: update essentials
- fix: metrics exporter on Fargate clusters by enabling `var.fargate_cluster = true` in essentials
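The Fargate fix above maps to a single variable in the essentials sub-module. A sketch (other module arguments omitted):

```hcl
module "eks_essentials" {
  source  = "SPHTech-Platform/eks/aws//modules/essentials"
  version = ">= 0.19.5"

  # Required on Fargate-only clusters so the metrics exporter
  # is configured for Fargate scheduling.
  fargate_cluster = true
}
```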
Full Changelog: v0.19.4...v0.19.5
## v0.19.4

### What's Changed

- feat: upgrade eks essentials
- feat: upgrade Karpenter to `0.37.4` by @uchinda-sph in #131
Full Changelog: v0.19.3...v0.19.4
## v0.19.3

### What's Changed
- feat: Update fluent-bit IAM to create log groups by @ianuragsingh in #130
Full Changelog: v0.19.2...v0.19.3
## v0.19.2

### What's Changed
- fix: bump up karpenter chart for correct AMI ordering function bug by @uchinda-sph in #124
Full Changelog: v0.19.1...v0.19.2
## v0.11.4
Full Changelog: v0.19.1...v0.11.4