
Commit f84c2ee

updated eks demos
1 parent c8a5e87 commit f84c2ee

7 files changed (+126 -78 lines)


04-cloud/01-eks/02-launching-cluster-eks/readme.md

Lines changed: 16 additions & 8 deletions
@@ -2,7 +2,7 @@

## Introduction

-The official CLI to launch a cluster is `eksctl`, this is a tool develop by weavworks on conjuntion of AWS team. The goal of this tool is to build an `EKS cluster` much easier. If we thing the things that we have to do to build a cluster, we have to create VPC, provision subnets (multiples of them), set up routing in your VPC, then you con go to the control plane on the console and launch the cluster. All the infrastructure could be done using `CloudFormation`, but still being a lot of work.
+The official CLI to launch a cluster is `eksctl`, a tool developed by Weaveworks in conjunction with the AWS team. Its goal is to make building an `EKS cluster` much easier. If we think about everything we would otherwise have to do by hand, we have to create a VPC, provision several subnets, set up routing in the VPC, and only then go to the console and launch the control plane. All of that infrastructure could be described with `CloudFormation`, but it is still a lot of work.

We can build the cluster from the `EKS Cluster console`; every choice exposed there can also be made with `eksctl`.
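
In fact, for a quick experiment `eksctl` can provision everything on its own: a minimal sketch, using eksctl's own defaults rather than this demo's configuration, is simply:

```bash
# creates a cluster with an auto-generated name, the default region and a default managed node group
eksctl create cluster
```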

@@ -15,13 +15,13 @@ aws sts get-caller-identity
## Create EC2 Key

```bash
-$ aws ec2 create-key-pair --key-name EksKeyPair --query 'KeyMaterial' --output text > EksKeyPair.pem
+aws ec2 create-key-pair --key-name EksKeyPair --query 'KeyMaterial' --output text > EksKeyPair.pem
```

Modify the permissions on the private key to avoid future warnings

```bash
-$ chmod 400 EksKeyPair.pem
+chmod 400 EksKeyPair.pem
```

From this private key we can generate a public one; that is the key that will be uploaded to the node (EC2 instance). If we provide that key, and we keep the private one, we can connect to the remote instance.
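
For completeness, the public key can be derived locally, and once a worker node is up the private key is what lets us SSH in. A hedged sketch: the node address is illustrative, and `ec2-user` assumes the Amazon Linux EKS AMI.

```bash
# derive the public key from the private key material
ssh-keygen -y -f EksKeyPair.pem > EksKeyPair.pub

# later, connect to a worker node launched with this key pair
ssh -i EksKeyPair.pem ec2-user@<node-public-ip>
```
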
@@ -39,7 +39,7 @@ kind: ClusterConfig
metadata:
  name: lc-cluster
  region: eu-west-3
-  version: "1.18"
+  version: "1.21"

iam:
  withOIDC: true
@@ -58,7 +58,7 @@ managedNodeGroups:
```bash
eksctl create cluster \
  --name lc-cluster \
-  --version 1.18 \
+  --version 1.21 \
  --region eu-west-3 \
  --nodegroup-name lc-nodes \
  --node-type t2.small \
@@ -104,7 +104,7 @@ managedNodeGroups: # [5]
2. The AZ where the cluster is going to be deployed
3. The Kubernetes version that we're going to use; if we leave it empty, the latest stable version supported by `AWS` is used
4. Enables the IAM OIDC provider as well as IRSA for the Amazon CNI plugin
-5. `managedNodeGroups` are a way for the `eks service` to actually provision your data plane on your behalf so normally if you think about the of a container orchestrator it's jsut orchestarte containers on your compute so we're starting to see expansion of that role a little bit so now instead of you bringing your own compute and you having to manage patching it, updating it, rolling in new versions of it and all that day to day stuff, it's possible to be managed by AWS, this is what `managedNodeGroup` does. AWS provides the AMI and provisioning into your account on your behalf.
+5. [`managedNodeGroups`](https://eksctl.io/usage/eks-managed-nodes/) are a way for the `eks service` to provision your data plane on your behalf. Traditionally a container orchestrator just orchestrates containers on compute that you bring and manage yourself (patching it, updating it, rolling in new versions, and all the other day-to-day work). That role is expanding: with a `managedNodeGroup` AWS takes it over, providing the AMI and provisioning the nodes into your account on your behalf (a CLI sketch follows below).
6. The name of the group of nodes
7. The instance type that we're running. We're using the free tier
8. The number of nodes that we want to have on the node group
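
As the sketch referenced in point 5, a managed node group can also be added to an existing cluster straight from the CLI. The group name and sizes below are illustrative, not part of the demo config:

```bash
# add an extra managed node group to the running cluster
eksctl create nodegroup \
  --cluster lc-cluster \
  --name lc-nodes-extra \
  --node-type t2.small \
  --nodes 2 \
  --managed
```
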
@@ -115,10 +115,16 @@ managedNodeGroups: # [5]

## Launching the Cluster

Before launching the cluster we can use the `dry-run` feature, which lets us inspect and change the instances matched by the instance selector before proceeding to create a nodegroup. If we run `eksctl create cluster <options> --dry-run`, `eksctl` outputs a ClusterConfig file containing a nodegroup that represents the CLI options, with the instance types set to the instances matched by the instance selector criteria.

```bash
eksctl create cluster -f demos.yml --dry-run
```

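Because `--dry-run` prints the fully expanded ClusterConfig, it can be handy to capture that output and review it before the real run; an optional extra step, not part of the original walkthrough:

```bash
# save the expanded config for review
eksctl create cluster -f demos.yml --dry-run > demos-expanded.yml
less demos-expanded.yml
```
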
Now we're ready to launch the cluster

```bash
-$ eksctl create cluster -f demos.yml
+eksctl create cluster -f demos.yml
```

## Test the cluster
@@ -127,4 +133,6 @@ Now we can test that our cluster is up and running.

```bash
$ kubectl get nodes
```

> `eksctl` has updated `~/.kube/config` so that `kubectl` points to the newly created cluster.
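
If `kubectl` ever ends up pointing at a different context (for example on another machine), the kubeconfig entry for this cluster can be checked and regenerated with the standard AWS CLI command; a small optional sketch:

```bash
# see which context kubectl is currently using
kubectl config current-context

# recreate the kubeconfig entry for the demo cluster if needed
aws eks update-kubeconfig --name lc-cluster --region eu-west-3
```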

04-cloud/01-eks/04-deploy-solution/readme.md

Lines changed: 12 additions & 12 deletions
@@ -117,9 +117,9 @@ spec:
Let's start by bringing up `lc-age-service` Backend API

```bash
-$ cd lc-age-service
-$ kubectl apply -f kubernetes/deployment.yaml
-$ kubectl apply -f kubernetes/service.yaml
+cd lc-age-service
+kubectl apply -f kubernetes/deployment.yaml
+kubectl apply -f kubernetes/service.yaml
```

We can check the progress by looking at the deployment status:
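
The walkthrough checks progress with `kubectl get deployment` (next hunk); an alternative that simply blocks until the rollout finishes, offered here as an optional aside:

```bash
# waits until all replicas of the deployment are available
kubectl rollout status deployment/lc-age-service
```
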
@@ -133,9 +133,9 @@ kubectl get deployemnt lc-age-service
Let's continue by bringing up `lc-name-service` Backend API

```bash
-$ cd lc-name-service
-$ kubectl apply -f kubernetes/deployment.yaml
-$ kubectl apply -f kubernetes/service.yaml
+cd lc-name-service
+kubectl apply -f kubernetes/deployment.yaml
+kubectl apply -f kubernetes/service.yaml
```

We can check the progress by looking at the deployment status:
@@ -152,15 +152,15 @@ Create `lc-front/kubernetes/deployment.yaml`
apiVersion: apps/v1
kind: Deployment
metadata:
-  name: jaimesalas/lc-front
+  name: lc-front
  labels:
-    app: jaimesalas/lc-front
+    app: lc-front
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
-      app: jaimesalas/lc-front
+      app: lc-front
  strategy:
    rollingUpdate:
      maxSurge: 25%
@@ -169,12 +169,12 @@ spec:
  template:
    metadata:
      labels:
-        app: jaimesalas/lc-front
+        app: lc-front
    spec:
      containers:
        - image: jaimesalas/lc-front:latest
          imagePullPolicy: Always
-          name: jaimesalas/lc-front
+          name: lc-front
          ports:
            - containerPort: 3000
              protocol: TCP
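
With the manifest in place, the front end is deployed the same way as the two backends; a sketch assuming the same `kubernetes/` folder layout (only `deployment.yaml` is created above, so no service manifest is assumed here):

```bash
cd lc-front
kubectl apply -f kubernetes/deployment.yaml
```
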
@@ -233,7 +233,7 @@ In AWS accounts that have never created a load balancer before, it’s possible
We can check for the role, and create it if it's missing.

```bash
-$ aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing" || aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
+aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing" || aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
```

## Ingress different options

04-cloud/01-eks/06-autoscalling-our-applications/01-scale-an-application-with-HPA.md

Lines changed: 2 additions & 2 deletions
@@ -75,10 +75,10 @@ $ kubectl get hpa
**Open a new terminal**

```bash
-$ kubectl --generator=run-pod/v1 run -i --tty load-generator --image=busybox /bin/sh
+kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox sh
```

> Reference: https://medium.com/better-programming/kubernetes-tips-create-pods-with-imperative-commands-in-1-18-62ea6e1ceb32

Execute a while loop to keep requesting http://php-apache
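
Inside the BusyBox shell, the loop typically looks like the following; a sketch assuming the service created earlier in the walkthrough is reachable inside the cluster as `php-apache`:

```bash
# generate continuous load so the HPA has something to react to
while true; do wget -q -O- http://php-apache; done
```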

04-cloud/01-eks/06-autoscalling-our-applications/02-cluster-auto-scaler/00-configure-cluster-autoscaler.md

Lines changed: 24 additions & 12 deletions
@@ -31,16 +31,20 @@ Now we increase the maximum capacity to 5 instances

```bash
# we need the ASG name
-$ export ASG_NAME=$(aws autoscaling describe-auto-scaling-groups --query "AutoScalingGroups[? Tags[? (Key=='eks:cluster-name') && Value=='lc-cluster']].AutoScalingGroupName" --output text)
+export ASG_NAME=$(aws autoscaling describe-auto-scaling-groups --query "AutoScalingGroups[? Tags[? (Key=='eks:cluster-name') && Value=='lc-cluster']].AutoScalingGroupName" --output text)
```

```bash
# increase max capacity up to 5
-$ aws autoscaling \
+aws autoscaling \
    update-auto-scaling-group \
    --auto-scaling-group-name ${ASG_NAME} \
    --min-size 1 \
    --desired-capacity 3 \
    --max-size 5
```

```bash
# Check new values
$ aws autoscaling \
    describe-auto-scaling-groups \
@@ -60,19 +64,24 @@ With IAM roles for service accounts on Amazon EKS clusters, you can associate an
Enabling IAM roles for service accounts on your cluster

```bash
-$ eksctl utils associate-iam-oidc-provider \
+eksctl utils associate-iam-oidc-provider \
    --cluster lc-cluster \
    --approve

```

Creating an IAM policy for your service account that will allow your CA pod to interact with the autoscaling groups.

file://~/Documents/lemoncode/bootcamp-devops-lemoncode/04-cloud/01-eks/06-autoscalling-our-applications/02-cluster-auto-scaler/cluster-autoscaler/k8s-asg-policy.json

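For reference, that policy document typically grants the Cluster Autoscaler read access to the Auto Scaling groups plus the ability to adjust their desired capacity. Below is a sketch of what `k8s-asg-policy.json` usually contains, written out as a heredoc; verify it against the actual file in the repo before relying on it:

```bash
cat > k8s-asg-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup"
      ],
      "Resource": "*"
    }
  ]
}
EOF
```
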
```bash
-$ aws iam create-policy \
+aws iam create-policy \
    --policy-name k8s-asg-policy \
-    --policy-document file://~/Documents/paths/kubernetes/01_eks_workshop/05_autoscalling_our_applications_and_clusters/cluster-autoscaler/k8s-asg-policy.json
+    --policy-document file://~/Documents/lemoncode/bootcamp-devops-lemoncode/04-cloud/01-eks/06-autoscalling-our-applications/02-cluster-auto-scaler/cluster-autoscaler/k8s-asg-policy.json
```

```bash
# output
{
  "Policy": {
    "PolicyName": "k8s-asg-policy",
@@ -94,14 +103,17 @@ Finally, create an IAM role for the cluster-autoscaler Service Account in the ku
> Note: Grab the account id from the previous output

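The command below interpolates `${ACCOUNT_ID}`; if it is not already exported from an earlier step, it can be derived from the caller identity (a small helper, not shown in the original text):

```bash
export ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
```
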
```bash
-$ eksctl create iamserviceaccount \
+eksctl create iamserviceaccount \
    --name cluster-autoscaler \
    --namespace kube-system \
    --cluster lc-cluster \
    --attach-policy-arn "arn:aws:iam::${ACCOUNT_ID}:policy/k8s-asg-policy" \
    --approve \
    --override-existing-serviceaccounts
```

```bash
# output
[ℹ]  eksctl version 0.32.0
[ℹ]  using region eu-west-3
[ℹ]  1 existing iamserviceaccount(s) (kube-system/aws-node) will be excluded
@@ -119,14 +131,14 @@ $ eksctl create iamserviceaccount \
Deploy the Cluster Autoscaler to your cluster with the following command.

```bash
-$ kubectl apply -f ./05_autoscalling_our_applications_and_clusters/cluster-autoscaler/autodiscover.yaml
+kubectl apply -f ./06-autoscalling-our-applications/02-cluster-auto-scaler/cluster-autoscaler/autodiscover.yaml

```

To prevent CA from removing nodes where its own pod is running, we will add the `cluster-autoscaler.kubernetes.io/safe-to-evict` annotation to its deployment with the following command

```bash
-$ kubectl -n kube-system \
+kubectl -n kube-system \
    annotate deployment.apps/cluster-autoscaler \
    cluster-autoscaler.kubernetes.io/safe-to-evict="false"

@@ -136,10 +148,10 @@ Finally let's update the autoscaler image

```bash
# we need to retrieve the latest docker image available for our EKS version
-$ export K8S_VERSION=$(kubectl version --short | grep 'Server Version:' | sed 's/[^0-9.]*\([0-9.]*\).*/\1/' | cut -d. -f1,2)
-$ export AUTOSCALER_VERSION=$(curl -s "https://api.github.com/repos/kubernetes/autoscaler/releases" | grep '"tag_name":' | sed -s 's/.*-\([0-9][0-9\.]*\).*/\1/' | grep -m1 ${K8S_VERSION})
+export K8S_VERSION=$(kubectl version --short | grep 'Server Version:' | sed 's/[^0-9.]*\([0-9.]*\).*/\1/' | cut -d. -f1,2)
+export AUTOSCALER_VERSION=$(curl -s "https://api.github.com/repos/kubernetes/autoscaler/releases" | grep '"tag_name":' | sed -s 's/.*-\([0-9][0-9\.]*\).*/\1/' | grep -m1 ${K8S_VERSION})

-$ kubectl -n kube-system \
+kubectl -n kube-system \
    set image deployment.apps/cluster-autoscaler \
    cluster-autoscaler=us.gcr.io/k8s-artifacts-prod/autoscaling/cluster-autoscaler:v${AUTOSCALER_VERSION}

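# optional sanity check, not in the original walkthrough: both variables are derived
# by scraping command output, so echo them before patching the image
echo "K8S_VERSION=${K8S_VERSION} AUTOSCALER_VERSION=${AUTOSCALER_VERSION}"
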
@@ -148,7 +160,7 @@ $ kubectl -n kube-system \
Watch the logs

```bash
-$ kubectl -n kube-system logs -f deployment/cluster-autoscaler
+kubectl -n kube-system logs -f deployment/cluster-autoscaler

```


04-cloud/01-eks/06-autoscalling-our-applications/02-cluster-auto-scaler/01-scale-cluster-with-CA.md

Lines changed: 19 additions & 20 deletions
@@ -38,8 +38,8 @@ spec:
Execute

```bash
-$ kubectl apply -f ./05_autoscalling_our_applications_and_clusters/sample-app/nginx.yaml
-$ kubectl get deployment/nginx-to-scaleout
+kubectl apply -f ./06-autoscalling-our-applications/02-cluster-auto-scaler/sample-app/nginx.yaml
+kubectl get deployment/nginx-to-scaleout

NAME                READY   UP-TO-DATE   AVAILABLE   AGE
nginx-to-scaleout   1/1     1            1           45s
@@ -50,15 +50,14 @@ nginx-to-scaleout 1/1 1 1 45s
Let's scale out the ReplicaSet to 10

```bash
-$ kubectl scale --replicas=10 deployment/nginx-to-scaleout
+kubectl scale --replicas=10 deployment/nginx-to-scaleout

```

Some pods will be in the `Pending` state, which triggers the cluster-autoscaler to scale out the EC2 fleet.

```bash
-$ kubectl get pods -l app=nginx -o wide --watch
-
+kubectl get pods -l app=nginx -o wide --watch
```

View the cluster-autoscaler logs
@@ -83,41 +82,41 @@ ip-192-168-72-123.eu-west-3.compute.internal Ready <none> 100s v1.18.9-
## Cleanup Scaling

```bash
-$ kubectl delete -f ./05_autoscalling_our_applications_and_clusters/sample-app/nginx.yaml
+kubectl delete -f ./06-autoscalling-our-applications/02-cluster-auto-scaler/sample-app/nginx.yaml

-$ kubectl delete -f ./05_autoscalling_our_applications_and_clusters/cluster-autoscaler/autodiscover.yaml
+kubectl delete -f ./06-autoscalling-our-applications/02-cluster-auto-scaler/cluster-autoscaler/autodiscover.yaml

-$ eksctl delete iamserviceaccount \
+eksctl delete iamserviceaccount \
    --name cluster-autoscaler \
    --namespace kube-system \
    --cluster lc-cluster \
    --wait

-$ aws iam delete-policy \
+aws iam delete-policy \
    --policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/k8s-asg-policy

-$ export ASG_NAME=$(aws autoscaling describe-auto-scaling-groups --query "AutoScalingGroups[? Tags[? (Key=='eks:cluster-name') && Value=='lc-cluster']].AutoScalingGroupName" --output text)
+export ASG_NAME=$(aws autoscaling describe-auto-scaling-groups --query "AutoScalingGroups[? Tags[? (Key=='eks:cluster-name') && Value=='lc-cluster']].AutoScalingGroupName" --output text)

-$ aws autoscaling \
+aws autoscaling \
    update-auto-scaling-group \
    --auto-scaling-group-name ${ASG_NAME} \
    --min-size 1 \
    --desired-capacity 3 \
    --max-size 4

-$ kubectl delete hpa,svc php-apache
+kubectl delete hpa,svc php-apache

-$ kubectl delete deployment php-apache
+kubectl delete deployment php-apache

-$ kubectl delete pod load-generator
+kubectl delete pod load-generator

-$ helm -n metrics uninstall metrics-server
+helm -n metrics uninstall metrics-server

-$ kubectl delete ns metrics
+kubectl delete ns metrics

-$ helm uninstall kube-ops-view
+helm uninstall kube-ops-view

-$ unset ASG_NAME
-$ unset AUTOSCALER_VERSION
-$ unset K8S_VERSION
+unset ASG_NAME
+unset AUTOSCALER_VERSION
+unset K8S_VERSION
```
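
If the demo environment is no longer needed at all, the cluster itself can be torn down as a final optional step (not part of this commit's cleanup list):

```bash
# removes the EKS control plane, the managed node group and the CloudFormation stacks eksctl created
eksctl delete cluster --name lc-cluster --region eu-west-3
```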
