docs/arc-iac-docs/modules/terraform-aws-ref-arch-eks/docs/module-usage-guide/README.md (+175 -7 lines)
Before using this module, ensure you have the following:

- AWS credentials configured.
- Terraform installed.
- A working knowledge of AWS VPC, EKS, Kubernetes, Helm, Karpenter and Terraform concepts.

## Getting Started
To use the module in your Terraform configuration, include the following source:

```hcl
module "arc-eks" {
  source  = "sourcefuse/arc-eks/aws"
  version = "5.0.16"

  # insert the 8 required variables here
}
```
To integrate the module with your existing Terraform mono repo configuration, follow the steps below:

1. Create a new folder in `terraform/` named `eks`.
2. Create the required files, see the [examples](https://github.com/sourcefuse/terraform-aws-arc-eks/tree/main/examples) to base off of.
3. Configure with your backend
   - Create the environment backend configuration file: `config.<environment>.hcl`
     - **region**: Where the backend resides
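
Assuming an S3 backend, a minimal sketch of what `config.<environment>.hcl` might contain; the bucket, key, and lock-table values below are illustrative assumptions, not values from this repository:

```hcl
# config.dev.hcl -- illustrative backend settings (values are assumptions)
region         = "us-east-1"                      # where the backend resides
bucket         = "example-terraform-state-bucket" # hypothetical S3 bucket holding the state
key            = "eks/terraform.tfstate"          # hypothetical state object key
dynamodb_table = "example-terraform-locks"        # hypothetical table used for state locking
```

Pass the file during initialization, e.g. `terraform init -backend-config=config.dev.hcl`.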
Ensure that the AWS credentials used to execute Terraform have the necessary permissions to create, list and modify:

- EKS cluster
- EKS node groups, EC2 AMIs
- EKS Fargate profile
- Security Groups and Security Group rules
- CloudWatch Log groups
### Basic Usage

For basic usage, see the [examples/simple](https://github.com/sourcefuse/terraform-aws-arc-eks/tree/main/examples/simple) folder.

This example will create:

- An EKS cluster.
### Additional Usage Patterns

Below are advanced usage examples enabled by this module to support a range of EKS deployment strategies:

---
#### 1. **EKS with Managed Node Groups**

For an EKS cluster with node group creation, see the [examples/node-group](https://github.com/sourcefuse/terraform-aws-arc-eks/tree/main/examples/node-group) folder.

Use `node_group_config` to provision one or more managed node groups with specific instance types, scaling configuration, and networking.

**Key Capabilities:**
- Support for ON_DEMAND and SPOT instances.
- Control over desired, min, and max node counts.
- Custom disk size, AMI type, and instance types.
- Define multiple node groups for various workloads.

**Example Use Case:**
You need a general-purpose node group for application workloads and a spot node group for cost-optimized batch jobs.

**How to Use:**
```hcl
node_group_config = {
  enable = true
  config = {
    general-ng = {
      node_group_name = "general-nodegroup"
      subnet_ids      = data.aws_subnets.private.ids
      scaling_config = {
        desired_size = 2
        max_size     = 3
        min_size     = 1
      }
      instance_types = ["t3.medium"]
      capacity_type  = "ON_DEMAND"
      disk_size      = 20
      ami_type       = "AL2_x86_64"
    }
  }
}
```

---
#### 2. **EKS with Fargate Profile**

For an EKS cluster with a Fargate profile, see the [examples/fargate-profile](https://github.com/sourcefuse/terraform-aws-arc-eks/tree/main/examples/fargate-profile) folder.

Enable `fargate_profile_config` to run specific workloads on AWS Fargate, a serverless compute engine, ideal for lightweight, isolated, or on-demand applications without managing underlying infrastructure.

**Key Capabilities:**
- Eliminates the need to manage EC2 nodes for specific workloads.
- Ideal for low-resource, burstable or security-isolated workloads.
- Assign specific namespaces to Fargate using selectors.

**Example Use Case:**
An EKS Fargate profile lets an administrator declare, through selectors, which pods run on Fargate and on which profile.

**How to Use:**
```hcl
fargate_profile_config = {
  enable               = true
  fargate_profile_name = "example"
  subnet_ids           = data.aws_subnets.private.ids
  selectors = [
    {
      namespace = "example"
    }
  ]
}
```

---
#### 3. **Auto Mode Support**

For an EKS cluster using Auto Mode, see the [examples/auto-mode](https://github.com/sourcefuse/terraform-aws-arc-eks/tree/main/examples/auto-mode) folder.

This module supports an **Auto Mode** configuration. `EKS Auto Mode` extends AWS management of Kubernetes clusters beyond the cluster itself, allowing AWS to also set up and manage the infrastructure that keeps your workloads running smoothly.

**Key Capabilities:**
- Simplified Cluster Operations: Automatically provisions production-ready EKS clusters with minimal manual configuration or deep Kubernetes expertise.
- Dynamic Scaling: Continuously adjusts cluster capacity by adding/removing nodes based on application demand, ensuring high availability without manual planning.
- Cost Efficiency: Optimizes compute usage by consolidating workloads and terminating idle instances to reduce operational costs.
- Enhanced Security: Uses immutable, hardened AMIs with SELinux, read-only file systems, and automatic node recycling every 21 days to maintain a strong security posture.
- Automated Maintenance: Keeps your cluster components up to date with automated patching that respects disruption budgets.
- Built-In Add-ons: Includes essential networking, storage, and observability components (e.g., Pod networking, DNS, CSI drivers) without requiring manual add-on setup.
- Custom Node Configuration: Supports creation of custom NodePools and NodeClasses to fine-tune compute, storage, and networking based on workload needs.

**Example Use Case:**
You want to deploy workloads that require automatic scaling based on resource demands.

**How to Use:**
```hcl
module "eks_cluster" {
  source             = "../../"
  namespace          = "arc"
  environment        = "poc"
  kubernetes_version = "1.31"
  name               = "${var.namespace}-${var.environment}-cluster"
  vpc_config         = local.vpc_config
  auto_mode_config   = local.auto_mode_config

  bootstrap_self_managed_addons_enabled = false # Make sure this is false for auto-mode creation
}
```
---

#### 4. **EKS with Karpenter**

For an EKS cluster with Karpenter, see the [examples/karpenter](https://github.com/sourcefuse/terraform-aws-arc-eks/tree/main/examples/karpenter) folder.

Enable `karpenter_config` to provision and manage dynamic compute capacity for Kubernetes workloads using [Karpenter](https://karpenter.sh/).

**Key Capabilities:**
- Auto-provision capacity based on pod requirements.
- Faster scaling and cost-optimization compared to static node groups.
- Fully automated with Helm-based deployment.

**Example Use Case:**
You need highly dynamic compute provisioning with support for heterogeneous workloads and instance types, with minimal manual intervention.
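
The examples/karpenter folder shows the full wiring; as a rough sketch only, `karpenter_config` might be enabled along these lines. The attribute names below are assumptions for illustration, not the module's documented schema, so check its inputs before use:

```hcl
# Illustrative only -- attribute names are assumptions, not the module's documented inputs
karpenter_config = {
  enable     = true
  namespace  = "karpenter"                   # namespace targeted by the Helm-based deployment
  subnet_ids = data.aws_subnets.private.ids  # subnets Karpenter may launch nodes into
}
```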
---

- This module only creates a single node group with up to 20 distinct instance types. If additional node groups are needed, they may be provisioned downstream.
- Use `node_group_config` for granular node group management
- Use `karpenter_config` for dynamic compute provisioning
- Leverage `fargate_profile_config` for low-priority or bursty workloads
- Consider EKS Auto Mode for minimal operational overhead
- Use custom `access_config` to centralize EKS access management (see the sketch below)
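
As a purely illustrative sketch of that last point, an `access_config` block might centralize access entries roughly like this; the attribute names and ARNs are hypothetical and should be checked against the module's inputs:

```hcl
# Hypothetical shape -- attribute names are assumptions, not the module's documented inputs
access_config = {
  authentication_mode = "API_AND_CONFIG_MAP"  # EKS access entries plus the legacy aws-auth ConfigMap
  access_entries = {
    platform-admins = {
      principal_arn = "arn:aws:iam::123456789012:role/example-admin"  # hypothetical IAM role
      policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
    }
  }
}
```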