Commit 42066b2
added final demos
1 parent cff7e31 commit 42066b2

14 files changed, +1606 -0 lines changed
# Install Helm CLI

## Introduction

[Helm](https://helm.sh/) is a package manager and application management tool for Kubernetes that packages multiple Kubernetes resources into a single logical deployment unit called a Chart.

Helm helps you to:

- Achieve simple (one command) and repeatable deployments
- Manage application dependencies, using specific versions of other applications and services
- Manage multiple deployment configurations: test, staging, production and others
- Execute pre/post deployment jobs during application deployment
- Update, roll back and test application deployments

## Installing Helm

Before we can get started configuring Helm, we'll need to first install the command line tools.

```bash
$ curl -sSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
```

We can verify the installation by running:

```bash
$ helm version --short
```
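
Optionally, we can also enable tab completion for the `helm` CLI. This is just a convenience for the rest of these demos and assumes a bash shell; Helm 3 can also generate completions for zsh and fish:

```bash
# Load helm completions into the current bash session (bash assumed)
source <(helm completion bash)

# Persist it for future sessions
echo 'source <(helm completion bash)' >> ~/.bashrc
```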

Let's configure our first Chart repository. Chart repositories are similar to the APT or yum repositories that you might be familiar with on Linux, or Taps for Homebrew on macOS.

Download the `stable` repository so we have something to start with:

```bash
$ helm repo add stable https://charts.helm.sh/stable
```

Once this repository is added, we will be able to list the charts we can install:

```bash
$ helm search repo stable
```
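
Beyond listing charts, we can also inspect an individual chart's metadata and default configuration before installing it. As a quick sketch (the chart name here is just an example from the `stable` repository added above):

```bash
# Show the chart's metadata (name, version, description, maintainers)
helm show chart stable/nginx-ingress

# Show the default values the chart will be rendered with
helm show values stable/nginx-ingress
```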
# Deploy nginx with Helm

## Update the Chart Repository

Helm uses a packaging format called [Charts](https://helm.sh/docs/topics/charts/). A Chart is a collection of files and templates that describes Kubernetes resources.

Charts can be simple, describing something like a standalone web server (which is what we are going to create), but they can also be more complex, for example, a chart that represents a full web application stack, including web servers, databases, proxies, etc.

Instead of installing Kubernetes resources manually via `kubectl`, one can use Helm to install pre-defined Charts faster, with less chance of typos or other operator errors.

Chart repositories change frequently due to updates and new additions. To keep Helm's local list updated with all these changes, we need to occasionally run the [repository update](https://helm.sh/docs/helm/helm_repo_update/) command.

To update Helm's local list of Charts, run:

```bash
# first, add the default repository, then update
helm repo add stable https://charts.helm.sh/stable
helm repo update
```
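
If you want to confirm which repositories Helm now knows about, `helm repo list` prints the configured names and URLs:

```bash
# List the repositories Helm is currently tracking (should include "stable")
$ helm repo list
```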

## Search Chart Repositories

Now that our repository Chart list has been updated, we can [search for Charts](https://helm.sh/docs/helm/helm_search/).

To list all Charts:

```bash
$ helm search repo
```

You can see from the output that it dumped the list of all Charts we have added. In some cases that may be useful, but an even more useful search involves a keyword argument. So next, we'll search just for nginx:

```bash
$ helm search repo nginx
```

This results in:

```
NAME                          CHART VERSION  APP VERSION  DESCRIPTION
stable/nginx-ingress          1.41.3         v0.34.1      DEPRECATED! An nginx Ingress controller that us...
stable/nginx-ldapauth-proxy   0.1.6          1.13.5       DEPRECATED - nginx proxy with ldapauth
stable/nginx-lego             0.3.1                       Chart for nginx-ingress-controller and kube-lego
stable/gcloud-endpoints       0.1.2          1            DEPRECATED Develop, deploy, protect and monitor...
```

This new list of Charts is specific to nginx, because we passed the **nginx** argument to the `helm search repo` command.

> Reference: https://helm.sh/docs/helm/helm_search_repo/

## Add the Bitnami Repository

We saw that the default Chart repository offers several nginx-related Charts, but the standalone nginx web server is not one of them.

After a quick web search, we discover that there is a Chart for the nginx standalone web server available via the [Bitnami Chart repository](https://github.com/bitnami/charts/tree/master/bitnami).

```bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami
```

Once that completes, we can search all Bitnami Charts:

```bash
$ helm search repo bitnami
```

Search once again for nginx:

```bash
$ helm search repo nginx
```

Now we are seeing more nginx options, across both repositories:

```
NAME                               CHART VERSION  APP VERSION  DESCRIPTION
bitnami/nginx                      8.2.3          1.19.6       Chart for the nginx server
bitnami/nginx-ingress-controller   7.0.5          0.41.2       Chart for the nginx Ingress controller
stable/nginx-ingress               1.41.3         v0.34.1      DEPRECATED! An nginx Ingress controller that us...
```

Or we can search the Bitnami repo just for nginx:

```bash
$ helm search repo bitnami/nginx
```

## Install bitnami/nginx

Installing the Bitnami standalone nginx web server Chart involves using the [helm install](https://helm.sh/docs/helm/helm_install/) command.

A Helm Chart can be installed multiple times inside a Kubernetes cluster, because each installation of a Chart can be customized to suit a different purpose.

For this reason, you must supply a unique name for the installation, or ask Helm to generate a name for you.

We can first simulate the installation with the `--dry-run` flag, which renders the Chart without deploying anything to the cluster:

```bash
$ helm install mywebserver bitnami/nginx --dry-run
```
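
If we would rather let Helm pick the name for us, the `--generate-name` flag does exactly that. A quick sketch of the same dry run without an explicit release name:

```bash
# Let Helm generate a release name instead of supplying "mywebserver"
$ helm install bitnami/nginx --generate-name --dry-run
```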

Now, to really install nginx on our cluster, we can run:

```bash
$ helm install mywebserver bitnami/nginx
```

The output is similar to this:

```bash
NAME: mywebserver
LAST DEPLOYED: Mon Dec 21 15:45:05 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **

NGINX can be accessed through the following DNS name from within your cluster:

    mywebserver-nginx.default.svc.cluster.local (port 80)

To access NGINX from outside the cluster, follow the steps below:

1. Get the NGINX URL by running these commands:

   NOTE: It may take a few minutes for the LoadBalancer IP to be available.
   Watch the status with: 'kubectl get svc --namespace default -w mywebserver-nginx'

   export SERVICE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].port}" services mywebserver-nginx)
   export SERVICE_IP=$(kubectl get svc --namespace default mywebserver-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
   echo "http://${SERVICE_IP}:${SERVICE_PORT}"
```

In order to review the underlying Kubernetes services, pods and deployments, run:

```bash
$ kubectl get svc,po,deploy

NAME                        TYPE           CLUSTER-IP     EXTERNAL-IP                                                               PORT(S)        AGE
service/kubernetes          ClusterIP      10.100.0.1     <none>                                                                    443/TCP        136m
service/mywebserver-nginx   LoadBalancer   10.100.9.232   a7130a0207757453594c4cb5bdf072e5-381544302.eu-west-3.elb.amazonaws.com   80:31519/TCP   2m38s

NAME                                     READY   STATUS    RESTARTS   AGE
pod/mywebserver-nginx-857766d4fd-9tdwf   1/1     Running   0          2m37s

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mywebserver-nginx   1/1     1            1           2m38s
```

One of the objects created by the Chart is a Deployment. A Deployment object manages rollouts (and rollbacks) of different versions of an application.

You can inspect this Deployment object in more detail by running the following command:

```bash
$ kubectl describe deployment mywebserver-nginx
```

The next object created by the Chart is a Pod. A Pod is a group of one or more containers.

To verify the Pod object was successfully deployed, we can run the following command:

```bash
$ kubectl get pods -l app.kubernetes.io/name=nginx

NAME                                 READY   STATUS    RESTARTS   AGE
mywebserver-nginx-857766d4fd-9tdwf   1/1     Running   0          4m48s
```

The third object that this Chart creates for us is a Service. A Service enables us to contact this nginx web server from the Internet, via an Elastic Load Balancer (ELB).

To get the complete URL of this Service, run:

```bash
$ kubectl get service mywebserver-nginx -o wide

NAME                TYPE           CLUSTER-IP     EXTERNAL-IP                                                               PORT(S)        AGE     SELECTOR
mywebserver-nginx   LoadBalancer   10.100.9.232   a7130a0207757453594c4cb5bdf072e5-381544302.eu-west-3.elb.amazonaws.com   80:31519/TCP   6m22s   app.kubernetes.io/instance=mywebserver,app.kubernetes.io/name=nginx
```

Copy the value for EXTERNAL-IP, open a new tab in your web browser, and paste it in.
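
You can also check the endpoint from the terminal. The sketch below assumes the AWS ELB publishes a hostname (as in the output above) and that the load balancer has finished provisioning:

```bash
# Grab the ELB hostname from the Service and request the nginx welcome page headers
ELB_HOSTNAME=$(kubectl get svc mywebserver-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -I "http://${ELB_HOSTNAME}"
```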

## Clean up

To remove all the objects that the Helm Chart created, we can use [helm uninstall](https://helm.sh/docs/helm/helm_uninstall/).

Before we uninstall our application, we can verify what we have running via the [helm list](https://helm.sh/docs/helm/helm_list/) command:

```bash
$ helm list
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/lemoncode/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/lemoncode/.kube/config
NAME         NAMESPACE  REVISION  UPDATED                                  STATUS    CHART        APP VERSION
mywebserver  default    1         2020-12-21 15:45:05.835403883 +0100 CET  deployed  nginx-8.2.3  1.19.6
```

To uninstall:

```bash
$ helm uninstall mywebserver
```

kubectl will also confirm that our pods and service are no longer available:

```bash
kubectl get pods -l app.kubernetes.io/name=nginx
kubectl get service mywebserver-nginx -o wide
```
# Install Kube-ops-view

Before starting to learn about the various auto-scaling options for your EKS cluster, we are going to install Kube-ops-view.

Kube-ops-view provides a common operational picture for a Kubernetes cluster that helps with understanding our cluster setup in a visual way.

> Note: helm must be installed

The following command installs kube-ops-view from the stable helm repository, using a LoadBalancer Service type and creating an RBAC (Role-Based Access Control) entry for the read-only service account to read node and pod information from the cluster.

```bash
helm install kube-ops-view \
  stable/kube-ops-view \
  --set service.type=LoadBalancer \
  --set rbac.create=True
```

The execution above installs kube-ops-view, exposing it through a Service of type LoadBalancer. A successful execution of the command will display the set of resources created and will print some advice asking you to use `kubectl proxy` and a local URL for the service. Given we are using the LoadBalancer type for our service, we can disregard this; instead, we will point our browser to the external load balancer.

> Monitoring and visualization tools shouldn't typically be exposed publicly unless the service is properly secured and provides methods for authentication and authorization. You can still deploy kube-ops-view using a Service of type ClusterIP by removing the `--set service.type=LoadBalancer` option and using `kubectl proxy`. Kube-ops-view also supports OAuth 2.
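
For reference, a minimal sketch of that ClusterIP approach might look like this (the proxy URL assumes the release is named `kube-ops-view` and lives in the `default` namespace):

```bash
# Install without a public LoadBalancer (the Service defaults to ClusterIP)
helm install kube-ops-view stable/kube-ops-view --set rbac.create=True

# Open a local proxy to the Kubernetes API
kubectl proxy &

# kube-ops-view is then reachable through the API server proxy, e.g.:
# http://localhost:8001/api/v1/namespaces/default/services/kube-ops-view/proxy/
```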

To check the chart was installed successfully:

```bash
helm list
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/lemoncode/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/lemoncode/.kube/config
NAME           NAMESPACE  REVISION  UPDATED                                  STATUS    CHART                APP VERSION
kube-ops-view  default    1         2020-12-21 16:29:18.677926812 +0100 CET  deployed  kube-ops-view-1.2.4  20.4.0
```

With this we can explore the kube-ops-view output by checking the details of the newly created Service.

```bash
kubectl get svc kube-ops-view | tail -n 1 | awk '{ print "Kube-ops-view URL = http://"$4 }'
```

This will display a line similar to `Kube-ops-view URL = http://<URL_PREFIX_ELB>.amazonaws.com`. Opening the URL in your browser will show the current state of our cluster.
# Scale an application with HPA

## Configure Horizontal Pod Autoscaler (HPA)

### Deploy the Metrics Server

Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes' built-in autoscaling pipelines.

These metrics will drive the scaling behavior of the *deployments*.

We will deploy the metrics server using [Kubernetes Metrics Server](https://github.com/kubernetes-sigs/metrics-server).

```bash
$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.1/components.yaml
```

Let's verify the status of the metrics-server APIService (it could take a few minutes):

```bash
$ kubectl get apiservice v1beta1.metrics.k8s.io -o json | jq '.status'

{
  "conditions": [
    {
      "lastTransitionTime": "2020-12-21T15:42:43Z",
      "message": "all checks passed",
      "reason": "Passed",
      "status": "True",
      "type": "Available"
    }
  ]
}
```
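
With the metrics pipeline reporting as available, we can also do a quick spot check that node metrics are actually being served (this may return an error for the first minute or two while metrics are collected):

```bash
# Ask the metrics API for current node resource usage
kubectl top nodes
```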

**We are now ready to scale a deployed application.**

A new add-on is now set up in our cluster, as we just verified.

## Deploy a Sample App

We will deploy an application and expose it as a service on TCP port 80.

The application is a custom-built image based on the php-apache image. The index.php page performs calculations to generate CPU load. More information can be found [here](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#run-expose-php-apache-server).

```bash
kubectl create deployment php-apache --image=us.gcr.io/k8s-artifacts-prod/hpa-example
kubectl set resources deploy php-apache --requests=cpu=200m
kubectl expose deploy php-apache --port 80

kubectl get pod -l app=php-apache
```

## Create an HPA resource

This HPA scales up when CPU exceeds 50% of the allocated container resource.

```bash
kubectl autoscale deployment php-apache \
    --cpu-percent=50 `#The target average CPU utilization` \
    --min=1 `#The lower limit for the number of pods that can be set by the autoscaler` \
    --max=10 `#The upper limit for the number of pods that can be set by the autoscaler`
```
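
For reference, the imperative command above should be roughly equivalent to applying a declarative manifest like the following sketch (using the `autoscaling/v1` API):

```bash
# Hypothetical declarative equivalent of the kubectl autoscale command above
cat <<EOF | kubectl apply -f -
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
EOF
```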

View the HPA using kubectl. You will probably see `<unknown>/50%` for 1-2 minutes, and then you should see `0%/50%`:

```bash
$ kubectl get hpa
```

## Generate load to trigger scaling

**Open a new terminal**

```bash
$ kubectl --generator=run-pod/v1 run -i --tty load-generator --image=busybox /bin/sh
```

> Reference: https://medium.com/better-programming/kubernetes-tips-create-pods-with-imperative-commands-in-1-18-62ea6e1ceb32

Execute a while loop to continuously request http://php-apache:

```bash
while true; do wget -q -O - http://php-apache; done
```

In the previous tab, watch the HPA with the following command:

```bash
$ kubectl get hpa -w
```

You can now stop (Ctrl + C) the load test that was running in the other terminal. You will notice that the HPA slowly brings the replica count back down to its minimum, based on its configuration. You should also exit the load-testing pod by pressing Ctrl + D.
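
When you are done with the experiment, you can remove the demo resources. A cleanup sketch, assuming the resource names used above:

```bash
# Remove the autoscaler, the sample app and the load generator pod
kubectl delete hpa php-apache
kubectl delete service php-apache
kubectl delete deployment php-apache
kubectl delete pod load-generator
```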
