
Commit a1ed1cb

Merge pull request #74502 from EricPonvelle/OSDOCS-8395_Scaling-Workshop-Migration
OSDOCS-8395: Migrated the Scaling lab from the ROSA workshop
2 parents 18afa7e + 5404e49 commit a1ed1cb

10 files changed: +308 -0 lines changed

_topic_maps/_topic_map_rosa.yml

Lines changed: 2 additions & 0 deletions

@@ -206,6 +206,8 @@ Topics:
   File: cloud-experts-deploying-s2i-webhook-cicd
 - Name: S2i deployments
   File: cloud-experts-deploying-application-s2i-deployments
+- Name: Scaling an application
+  File: cloud-experts-deploying-application-scaling
 ---
 Name: Getting started
 Dir: rosa_getting_started

@@ -0,0 +1,306 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-deploying-application-scaling"]
= Tutorial: Scaling an application
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-deploying-application-scaling
:source-highlighter: pygments
:pygments-style: emacs
:icons: font

toc::[]

//rosaworkshop.io content metadata
//Brought into ROSA product docs 2024-04-10

== Scaling

You can scale your pods manually, or scale them automatically by using the Horizontal Pod Autoscaler (HPA). You can also scale your cluster nodes.

=== Manual pod scaling

You can manually scale your application's pods by using one of the following methods:

* Changing your ReplicaSet or deployment definition
* Using the command line
* Using the web console

This workshop starts by using only one pod for the microservice. By defining a replica count of `1` in your deployment definition, the Kubernetes Replication Controller strives to keep one pod alive. You then learn how to define pod autoscaling by using the link:https://docs.openshift.com/container-platform/latest/nodes/pods/nodes-pods-autoscaling.html[Horizontal Pod Autoscaler] (HPA), which scales out more pods, beyond your initial definition, when the application experiences high load.
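For reference, the relevant part of the microservice deployment looks similar to the following abridged sketch. This is a minimal sketch that assumes the standard `apps/v1` Deployment layout; only the fields discussed in this section are shown, and the full file is linked in the procedure below:

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ostoy-microservice
spec:
  replicas: 1   # the Replication Controller keeps exactly one microservice pod running
  selector:
    matchLabels:
      app: ostoy-microservice
----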

.Prerequisites

* An active ROSA cluster
* A deployed OSToy application

.Procedure

. In the OSToy app, click the *Networking* tab in the navigational menu.
. In the "Intra-cluster Communication" section, locate the box beneath "Remote Pods" that randomly changes colors. Inside the box, you see the microservice's pod name. There is only one box in this example because there is only one microservice pod.
+
image::deploy-scale-network.png[HPA Menu]

. Confirm that there is only one pod running for the microservice by running the following command:
+
[source,terminal]
----
$ oc get pods
----
+
.Example output
[source,terminal]
----
NAME                                  READY   STATUS    RESTARTS   AGE
ostoy-frontend-679cb85695-5cn7x       1/1     Running   0          1h
ostoy-microservice-86b4c6f559-p594d   1/1     Running   0          1h
----

. Download the link:https://www.rosaworkshop.io/ostoy/yaml/ostoy-microservice-deployment.yaml[ostoy-microservice-deployment.yaml] and save it to your local machine.
. Change the deployment definition to three pods instead of one by using the following example:
+
[source,yaml]
----
spec:
  selector:
    matchLabels:
      app: ostoy-microservice
  replicas: 3
----

. Apply the replica changes by running the following command:
+
[source,terminal]
----
$ oc apply -f ostoy-microservice-deployment.yaml
----
+
[NOTE]
====
You can also edit the `ostoy-microservice-deployment.yaml` file in the OpenShift web console by going to the *Workloads > Deployments > ostoy-microservice > YAML* tab.
====

. Confirm that there are now three pods by running the following command:
+
[source,terminal]
----
$ oc get pods
----
+
The output shows that there are now three pods for the microservice instead of only one.
+
.Example output
+
[source,terminal]
----
NAME                                  READY   STATUS    RESTARTS   AGE
ostoy-frontend-5fbcc7d9-rzlgz         1/1     Running   0          26m
ostoy-microservice-6666dcf455-2lcv4   1/1     Running   0          81s
ostoy-microservice-6666dcf455-5z56w   1/1     Running   0          81s
ostoy-microservice-6666dcf455-tqzmn   1/1     Running   0          26m
----

. Scale the application by using the CLI or by using the web UI:
+
** In the CLI, decrease the number of pods from `3` to `2` by running the following command:
+
[source,terminal]
----
$ oc scale deployment ostoy-microservice --replicas=2
----
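+
Alternatively, a patch accomplishes the same scale-down declaratively. This command is not part of the workshop and is shown only as a sketch:
+
[source,terminal]
----
$ oc patch deployment ostoy-microservice -p '{"spec":{"replicas":2}}'
----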
** From the navigational menu of the OpenShift web console UI, click *Workloads > Deployments > ostoy-microservice*.
** On the left side of the page, locate the blue circle with a "3 Pod" label in the middle.
** Click the down arrow next to the circle to scale the number of pods down to `2`.
+
image::deploy-scale-uiscale.png[UI Scale]

.Verification

Check your pod counts by using the CLI, the web UI, or the OSToy app:

* From the CLI, confirm that you are using two pods for the microservice by running the following command:
+
[source,terminal]
----
$ oc get pods
----
+
.Example output
[source,terminal]
----
NAME                                  READY   STATUS    RESTARTS   AGE
ostoy-frontend-5fbcc7d9-rzlgz         1/1     Running   0          75m
ostoy-microservice-6666dcf455-2lcv4   1/1     Running   0          50m
ostoy-microservice-6666dcf455-tqzmn   1/1     Running   0          75m
----

* In the web UI, select *Workloads > Deployments > ostoy-microservice*.
+
image::deploy-scale-verify-workload.png[Verify the workload pods]

* You can also confirm that there are two pods in use by selecting *Networking* in the navigational menu of the OSToy app. There should be two colored boxes for the two pods.
+
image::deploy-scale-colorspods.png[UI Scale]

=== Pod autoscaling

{product-title} offers a link:https://docs.openshift.com/container-platform/latest/nodes/pods/nodes-pods-autoscaling.html[Horizontal Pod Autoscaler] (HPA). The HPA uses metrics to increase or decrease the number of pods when necessary.

.Procedure

. From the navigational menu of the web UI, select *Pod Auto Scaling*.
+
image::deploy-scale-hpa-menu.png[HPA Menu]

. Create the HPA by running the following command:
+
[source,terminal]
----
$ oc autoscale deployment/ostoy-microservice --cpu-percent=80 --min=1 --max=10
----
+
This command creates an HPA that maintains between 1 and 10 replicas of the pods controlled by the `ostoy-microservice` deployment. Throughout the deployment, the HPA increases and decreases the number of replicas to keep the average CPU utilization across all pods at 80%, which corresponds to 40 millicores.
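+
The imperative `oc autoscale` command is equivalent to creating a `HorizontalPodAutoscaler` resource. The following manifest is a minimal sketch of that equivalent object; the resource name shown here is an illustrative assumption, not part of the workshop:
+
[source,yaml]
----
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ostoy-microservice   # assumed name for illustration
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ostoy-microservice
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
----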

. On the *Pod Auto Scaling > Horizontal Pod Autoscaling* page, select *Increase the load*.
+
[IMPORTANT]
====
Because increasing the load generates CPU-intensive calculations, the page can become unresponsive. This is expected behavior. Click *Increase the Load* only once. For more information about the process, see the link:https://github.com/openshift-cs/ostoy/blob/master/microservice/app.js#L32[microservice's GitHub repository].
====
+
After a few minutes, the new pods appear on the page, represented by colored boxes.
+
[NOTE]
====
The page can experience lag.
====

.Verification

Check your pod counts with one of the following methods:

* In the OSToy application's web UI, check the remote pods box:
+
image::deploy-scale-hpa-mainpage.png[HPA Main]
+
Initially there is only one pod. Increasing the workload should trigger an increase in the number of pods.
* In the CLI, run the following command:
+
[source,terminal]
----
$ oc get pods --field-selector=status.phase=Running | grep microservice
----
+
.Example output
+
[source,terminal]
----
ostoy-microservice-79894f6945-cdmbd   1/1     Running   0          3m14s
ostoy-microservice-79894f6945-mgwk7   1/1     Running   0          4h24m
ostoy-microservice-79894f6945-q925d   1/1     Running   0          3m14s
----
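
* In the CLI, you can also inspect the `HorizontalPodAutoscaler` resource directly. This check is not part of the workshop, but it is a standard way to confirm the target CPU utilization and the current and desired replica counts:
+
[source,terminal]
----
$ oc get hpa
----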

* You can also verify autoscaling from the {cluster-manager}:
+
. In the OpenShift web console navigational menu, click *Observe > Dashboards*.
. In the dashboard, select *Kubernetes / Compute Resources / Namespace (Pods)* and your namespace *ostoy*.
+
image::deploy-scale-hpa-metrics.png[Select metrics]
+
. A graph appears showing your CPU and memory resource usage. The top graph shows recent CPU consumption per pod, and the lower graph indicates memory usage. The following list describes the callouts in the graph:
.. The load increased (A).
.. Two new pods were created (B and C).
.. The thickness of each graph represents the CPU consumption and indicates which pods handled more load.
.. The load decreased (D), and the pods were deleted.
+
image::deploy-scale-metrics.png[Select metrics]

=== Node autoscaling

{product-title} allows you to use link:https://docs.openshift.com/rosa/rosa_cluster_admin/rosa_nodes/rosa-nodes-about-autoscaling-nodes.html[node autoscaling]. In this scenario, you create a new project with a job that has a large workload that the cluster cannot handle. With autoscaling enabled, when the load is larger than your current capacity, the cluster automatically creates new nodes to handle the load.

.Prerequisites

* Autoscaling is enabled on your machine pools. One way to enable it is sketched after this list.
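If autoscaling is not yet enabled, the following ROSA CLI command is one way to enable it on an existing machine pool. The machine pool ID, cluster name, and replica bounds are placeholders that you must adjust for your environment; this command is a sketch, not part of the workshop:

[source,terminal]
----
$ rosa edit machinepool <machinepool_id> --cluster=<cluster_name> --enable-autoscaling --min-replicas=2 --max-replicas=4
----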

.Procedure

. Create a new project called `autoscale-ex` by running the following command:
+
[source,terminal]
----
$ oc new-project autoscale-ex
----

. Create the job by running the following command:
+
[source,terminal]
----
$ oc create -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/job-work-queue.yaml
----
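+
Optionally, you can confirm that the job object exists before checking its pods. This step is not part of the workshop; the job name is generated, so expect a `work-queue-` prefix:
+
[source,terminal]
----
$ oc get jobs
----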

. After a few minutes, run the following command to see the pods:
+
[source,terminal]
----
$ oc get pods
----
+
.Example output
+
[source,terminal]
----
NAME                     READY   STATUS    RESTARTS   AGE
work-queue-5x2nq-24xxn   0/1     Pending   0          10s
work-queue-5x2nq-57zpt   0/1     Pending   0          10s
work-queue-5x2nq-58bvs   0/1     Pending   0          10s
work-queue-5x2nq-6c5tl   1/1     Running   0          10s
work-queue-5x2nq-7b84p   0/1     Pending   0          10s
work-queue-5x2nq-7hktm   0/1     Pending   0          10s
work-queue-5x2nq-7md52   0/1     Pending   0          10s
work-queue-5x2nq-7qgmp   0/1     Pending   0          10s
work-queue-5x2nq-8279r   0/1     Pending   0          10s
work-queue-5x2nq-8rkj2   0/1     Pending   0          10s
work-queue-5x2nq-96cdl   0/1     Pending   0          10s
work-queue-5x2nq-96tfr   0/1     Pending   0          10s
----

. Because there are many pods in a `Pending` state, this status should trigger the autoscaler to create more nodes in your machine pool. Allow time for these worker nodes to be created.
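+
Optionally, you can watch the nodes as the autoscaler adds them. This step is not part of the workshop; the `--watch` flag streams updates until you interrupt the command:
+
[source,terminal]
----
$ oc get nodes --watch
----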

. After a few minutes, use the following command to see how many worker nodes you now have:
+
[source,terminal]
----
$ oc get nodes
----
+
.Example output
+
[source,terminal]
----
NAME                                         STATUS   ROLES          AGE     VERSION
ip-10-0-138-106.us-west-2.compute.internal   Ready    infra,worker   22h     v1.23.5+3afdacb
ip-10-0-153-68.us-west-2.compute.internal    Ready    worker         2m12s   v1.23.5+3afdacb
ip-10-0-165-183.us-west-2.compute.internal   Ready    worker         2m8s    v1.23.5+3afdacb
ip-10-0-176-123.us-west-2.compute.internal   Ready    infra,worker   22h     v1.23.5+3afdacb
ip-10-0-195-210.us-west-2.compute.internal   Ready    master         23h     v1.23.5+3afdacb
ip-10-0-196-84.us-west-2.compute.internal    Ready    master         23h     v1.23.5+3afdacb
ip-10-0-203-104.us-west-2.compute.internal   Ready    worker         2m6s    v1.23.5+3afdacb
ip-10-0-217-202.us-west-2.compute.internal   Ready    master         23h     v1.23.5+3afdacb
ip-10-0-225-141.us-west-2.compute.internal   Ready    worker         23h     v1.23.5+3afdacb
ip-10-0-231-245.us-west-2.compute.internal   Ready    worker         2m11s   v1.23.5+3afdacb
ip-10-0-245-27.us-west-2.compute.internal    Ready    worker         2m8s    v1.23.5+3afdacb
ip-10-0-245-7.us-west-2.compute.internal     Ready    worker         23h     v1.23.5+3afdacb
----
+
You can see that new worker nodes were automatically created to handle the workload.

. Return to the OSToy app by entering the following command:
+
[source,terminal]
----
$ oc project ostoy
----
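+
[NOTE]
====
When the job finishes, you can optionally delete the example project to release the extra capacity. This cleanup step is not part of the workshop:

[source,terminal]
----
$ oc delete project autoscale-ex
----
====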

images/deploy-scale-colorspods.png (43.6 KB)
images/deploy-scale-hpa-mainpage.png (111 KB)
images/deploy-scale-hpa-menu.png (50.3 KB)
images/deploy-scale-hpa-metrics.png (13.9 KB)
images/deploy-scale-metrics.png (89.7 KB)
images/deploy-scale-network.png (143 KB)
images/deploy-scale-uiscale.png (44.8 KB)
