Commit 476815a

Merge pull request #81843 from jneczypor/OSDOCS-11269
OSDOCS-11269: Re-work second half of the "Getting Started with ROSA" Tutorials to be HCP specific
2 parents 3bfadac + 3fdff19 commit 476815a

6 files changed: +580 -22 lines changed

_topic_maps/_topic_map_rosa_hcp.yml

Lines changed: 10 additions & 22 deletions

@@ -91,6 +91,16 @@ Topics:
     File: cloud-experts-getting-started-admin-rights
   - Name: Accessing your cluster
     File: cloud-experts-getting-started-accessing
+  - Name: Managing worker nodes
+    File: cloud-experts-getting-started-managing-worker-nodes
+  - Name: Autoscaling
+    File: cloud-experts-getting-started-autoscaling
+  - Name: Upgrading your cluster
+    File: cloud-experts-getting-started-upgrading
+  - Name: Deleting your cluster
+    File: cloud-experts-getting-started-deleting
+  - Name: Obtaining support
+    File: cloud-experts-getting-started-support
 # ---
 # Name: Architecture
 # Dir: architecture
@@ -145,28 +155,6 @@ Topics:
 #   File: cloud-experts-dynamic-certificate-custom-domain
 # - Name: Assigning consistent egress IP for external traffic
 #   File: cloud-experts-consistent-egress-ip
-- Name: Getting started with ROSA
-  Dir: cloud-experts-getting-started
-  Distros: openshift-rosa-hcp
-  Topics:
-  - Name: Creating an admin user
-    File: cloud-experts-getting-started-admin
-  # - Name: Setting up an identity provider
-  #   File: cloud-experts-getting-started-idp
-  # - Name: Granting admin rights
-  #   File: cloud-experts-getting-started-admin-rights
-  # - Name: Accessing your cluster
-  #   File: cloud-experts-getting-started-accessing
-  # - Name: Managing worker nodes
-  #   File: cloud-experts-getting-started-managing-worker-nodes
-  # - Name: Autoscaling
-  #   File: cloud-experts-getting-started-autoscaling
-  # - Name: Upgrading your cluster
-  #   File: cloud-experts-getting-started-upgrading
-  # - Name: Deleting your cluster
-  #   File: cloud-experts-getting-started-deleting
-  - Name: Obtaining support
-    File: cloud-experts-getting-started-support
 - Name: Deploying an application
   Dir: cloud-experts-deploying-application
   Distros: openshift-rosa-hcp
Lines changed: 87 additions & 0 deletions
@@ -0,0 +1,87 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-getting-started-autoscaling"]
= Tutorial: Autoscaling
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-getting-started-autoscaling

toc::[]

//rosaworkshop.io content metadata
//Brought into ROSA product docs 2024-01-04

The xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-about-autoscaling-nodes.adoc#rosa-nodes-about-autoscaling-nodes[cluster autoscaler] adds or removes worker nodes from a cluster based on pod resources.

The cluster autoscaler increases the size of the cluster when:

* Pods fail to schedule on the current nodes due to insufficient resources.
* Another node is necessary to meet deployment needs.

The cluster autoscaler does not increase the cluster resources beyond the limits that you specify.

The cluster autoscaler decreases the size of the cluster when:

* Some nodes are consistently not needed for a significant period. For example, when a node has low resource use and all of its important pods can fit on other nodes.
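
As a quick, optional check that is not part of the original tutorial, you can list pods stuck in the `Pending` phase; pods that cannot be scheduled because of insufficient CPU or memory are what trigger a scale-up. This assumes you are logged in to the cluster with the `oc` CLI:

[source,terminal]
----
$ oc get pods -A --field-selector=status.phase=Pending
----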

== Enabling autoscaling for an existing machine pool using the CLI

[NOTE]
====
Cluster autoscaling can be enabled at cluster creation and when creating a new machine pool by using the `--enable-autoscaling` option.
====
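
For reference, the following sketch shows autoscaling being enabled while creating a new machine pool. The machine pool name and replica counts are placeholder values, so adjust them for your cluster and confirm the flags with `rosa create machinepool --help`:

[source,terminal]
----
$ rosa create machinepool -c <cluster-name> --name <machinepool-name> --enable-autoscaling --min-replicas=2 --max-replicas=4
----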

. Autoscaling is set on a per machine pool basis. To find out which machine pools in your cluster are available for autoscaling, run the following command:
+
[source,terminal]
----
$ rosa list machinepools -c <cluster-name>
----
+
.Example output
+
[source,terminal]
----
ID       AUTOSCALING  REPLICAS  INSTANCE TYPE  LABELS  TAINTS  AVAILABILITY ZONE  SUBNET              DISK SIZE  VERSION  AUTOREPAIR
workers  No           2/2       m5.xlarge                      us-east-1f         subnet-<subnet_id>  300 GiB    4.14.36  Yes
----

. Run the following command to add autoscaling to an available machine pool:
+
[source,terminal]
----
$ rosa edit machinepool -c <cluster-name> --enable-autoscaling <machinepool-name> --min-replicas=<num> --max-replicas=<num>
----
+
.Example input
+
[source,terminal]
----
$ rosa edit machinepool -c my-rosa-cluster --enable-autoscaling workers --min-replicas=2 --max-replicas=4
----
+
This command creates an autoscaler for the `workers` machine pool that scales between 2 and 4 worker nodes, depending on pod resource requests.
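
If you later want to turn autoscaling off for the same machine pool, `rosa edit machinepool` can also disable it. The following is a sketch rather than a tutorial step, so verify the flags with `rosa edit machinepool --help` for your CLI version:

[source,terminal]
----
$ rosa edit machinepool -c my-rosa-cluster workers --enable-autoscaling=false --replicas=2
----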

== Enabling autoscaling for an existing machine pool using the UI

[NOTE]
====
Cluster autoscaling can be enabled at cluster creation by selecting the *Enable autoscaling* checkbox when creating machine pools.
====

. Go to the *Machine pools* tab and click the three dots to the right of the machine pool.
. Click *Edit*, then *Enable autoscaling*.
. Edit the number of minimum and maximum node counts or leave the default numbers.
. Click *Save*.
. Run the following command to confirm that autoscaling was added:
+
[source,terminal]
----
$ rosa list machinepools -c <cluster-name>
----
+
.Example output
+
[source,terminal]
----
ID       AUTOSCALING  REPLICAS  INSTANCE TYPE  LABELS  TAINTS  AVAILABILITY ZONE  SUBNET              DISK SIZE  VERSION  AUTOREPAIR
workers  Yes          2/2-4     m5.xlarge                      us-east-1f         subnet-<subnet_id>  300 GiB    4.14.36  Yes
----
Lines changed: 91 additions & 0 deletions
@@ -0,0 +1,91 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-getting-started-deleting"]
= Tutorial: Deleting your cluster
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-getting-started-deleting

toc::[]

//rosaworkshop.io content metadata
//Brought into ROSA product docs 2024-01-11

You can delete your {product-title} (ROSA) cluster using either the command line interface (CLI) or the user interface (UI).

== Deleting a ROSA cluster using the CLI
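
Before you begin, it can help to confirm which AWS and Red Hat accounts your `rosa` CLI is logged in to, so that you delete the cluster from the correct account. This check is a suggestion and not part of the original procedure:

[source,terminal]
----
$ rosa whoami
----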

. *Optional:* List your clusters to make sure you are deleting the correct one by running the following command:
+
[source,terminal]
----
$ rosa list clusters
----

. Delete a cluster by running the following command:
+
[source,terminal]
----
$ rosa delete cluster --cluster <cluster-name>
----
+
[WARNING]
====
This action is irreversible: the cluster cannot be recovered after it is deleted.
====

. The CLI prompts you to confirm that you want to delete the cluster. Press *y* and then *Enter*. The cluster and all of its associated infrastructure are deleted.
+
[NOTE]
====
All AWS STS and IAM roles and policies remain after the cluster is deleted. You must delete them manually by following the steps below once the cluster deletion is complete.
====

. The CLI outputs the commands to delete the OpenID Connect (OIDC) provider and Operator IAM role resources that were created. Wait until the cluster finishes deleting before deleting these resources. Perform a quick status check by running the following command:
+
[source,terminal]
----
$ rosa list clusters
----

. Once the cluster is deleted, delete the OIDC provider by running the following command:
+
[source,terminal]
----
$ rosa delete oidc-provider -c <clusterID> --mode auto --yes
----

. Delete the Operator IAM roles by running the following command:
+
[source,terminal]
----
$ rosa delete operator-roles -c <clusterID> --mode auto --yes
----
+
[NOTE]
====
This command requires the cluster ID and not the cluster name.
====

. Only remove the remaining account roles if they are no longer needed by other clusters in the same account. If you want to create other ROSA clusters in this account, do not perform this step.
+
To delete the account roles, you need to know the prefix that was used when creating them. The default is `ManagedOpenShift` unless you specified otherwise (see the example after this procedure).
+
Delete the account roles by running the following command:
+
[source,terminal]
----
$ rosa delete account-roles --prefix <prefix> --mode auto --yes
----
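
For example, if your account roles were created with the default `ManagedOpenShift` prefix mentioned above, the cleanup command looks like the following. Run it only if no other clusters in the account use these roles:

[source,terminal]
----
$ rosa delete account-roles --prefix ManagedOpenShift --mode auto --yes
----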

== Deleting a ROSA cluster using the UI

. Log in to the {cluster-manager-url}, and locate the cluster you want to delete.

. Click the three dots to the right of the cluster.
+
image::cloud-experts-getting-started-deleting1.png[]

. In the dropdown menu, click *Delete cluster*.
+
image::cloud-experts-getting-started-deleting2.png[]

. Enter the name of the cluster to confirm deletion, and click *Delete*.
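
Optionally, after the deletion finishes, you can confirm from the terminal that the cluster no longer appears in your account. This check is not part of the original steps:

[source,terminal]
----
$ rosa list clusters
----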
