
Commit 6117384

Typo fixes for main

1 parent 0fb9b87

32 files changed: +68 -72 lines changed

cloud_experts_tutorials/cloud-experts-rosa-hcp-activation-and-account-linking-tutorial.adoc

Lines changed: 4 additions & 7 deletions
@@ -68,7 +68,7 @@ image::rosa-continue-rh-6.png[]
 +
 image::rosa-login-rh-account-7.png[]
 +
-* You can also register for a new Red{nbsp}Hat account or reset your password on this page.
+* You can also register for a new Red{nbsp}Hat account or reset your password on this page.
 * Make sure to log in to the Red{nbsp}Hat account that you plan to associate with the AWS account where you have activated {hcp-title} in previous steps.
 * Only a single AWS account that will be used for service billing can be associated with a Red{nbsp}Hat account. Typically an organizational AWS account that has other AWS accounts, such as developer accounts, linked would be the one that is to be billed, rather than individual AWS end user accounts.
 * Red{nbsp}Hat accounts belonging to the same Red{nbsp}Hat organization will be linked with the same AWS account. Therefore, you can manage who has access to creating {hcp-title} clusters on the Red{nbsp}Hat organization account level.
@@ -133,7 +133,7 @@ Do not share your unique token.

 . The final prerequisite before your first cluster deployment is making sure the necessary account-wide roles and policies are created. The `rosa` CLI can help with that by using the command shown under step _2.2. To create the necessary account-wide roles and policies quickly…_ on the web console. The alternative to that is manual creation of these roles and policies.

-. After logging in, creating the account roles, and verifying your identity using the `rosa whoami` command, your terminal will look similar to this:
+. After logging in, creating the account roles, and verifying your identity using the `rosa whoami` command, your terminal will look similar to this:
 +
 image::rosa-whoami-14.png[]

@@ -174,7 +174,7 @@ image::rosa-deploy-ui-19.png[]
 +
 image::rosa-deploy-ui-hcp-20.png[]

-. The next step *Accounts and roles* allows you specifying the infrastructure AWS account, into which the the ROSA cluster will be deployed and where the resources will be consumed and managed:
+. The next step *Accounts and roles* allows you specifying the infrastructure AWS account, into which the ROSA cluster will be deployed and where the resources will be consumed and managed:
 +
 image::rosa-ui-account-21.png[]
 +
@@ -191,12 +191,9 @@ image::rosa-ui-billing-22.png[]
 * You can see an indicator whether the ROSA contract is enabled for a given AWS billing account or not.
 * In case you would like to use an AWS account that does not have a contract enabled yet, you can either use the _Connect ROSA to a new AWS billing account_ to reach the ROSA AWS console page, where you can activate it after logging in using the respective AWS account by following steps described earlier in this tutorial, or ask the administrator of the AWS account to do that for you.

-The following steps past the billing AWS account selection are beyond the scope of this tutorial.
+The following steps past the billing AWS account selection are beyond the scope of this tutorial.

 .Additional resources

 * For information on using the CLI to create a cluster, see xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-hcp-sts-creating-a-cluster-cli_rosa-hcp-sts-creating-a-cluster-quickly[Creating a ROSA with HCP cluster using the CLI].
 * See link:https://cloud.redhat.com/learning/learn:getting-started-red-hat-openshift-service-aws-rosa/resource/resources:how-deploy-cluster-red-hat-openshift-service-aws-using-console-ui[this learning path] for more details on how to complete ROSA cluster deployment using the web console.
-
-
-

modules/agent-installer-architectures.adoc

Lines changed: 5 additions & 5 deletions
@@ -8,23 +8,23 @@

 Before installing an {product-title} cluster using the Agent-based Installer, you can verify the supported architecture on which you can install the cluster. This procedure is optional.

-.Prerequisites:
+.Prerequisites

 * You installed the {oc-first}.
 * You have downloaded the installation program.

-.Procedure:
+.Procedure

 . Log in to the {oc-first}.

-. Check your release payload by running the following command:
+. Check your release payload by running the following command:
 [source,terminal]
 +
 ----
 $ ./openshift-install version
 ----
 +
-.Example output
+.Example output
 [source,terminal]
 ----
 ./openshift-install 4.16.0
@@ -49,4 +49,4 @@ $ oc adm release info <release_image> -o jsonpath="{ .metadata.metadata}" <1>
 {"release.openshift.io architecture":"multi"}
 ----
 +
-If you are using the release image with the `multi` payload, you can install the cluster on different architectures such as `arm64`, `amd64`, `s390x`, and `ppc64le`. Otherwise, you can install the cluster only on the `release architecture` displayed in the output of the `openshift-install version` command.
+If you are using the release image with the `multi` payload, you can install the cluster on different architectures such as `arm64`, `amd64`, `s390x`, and `ppc64le`. Otherwise, you can install the cluster only on the `release architecture` displayed in the output of the `openshift-install version` command.

modules/cleaning-crio-storage.adoc

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ can't stat lower layer ... because it does not exist. Going through storage to

 Follow this process to completely wipe the CRI-O storage and resolve the errors.

-.Prerequisites:
+.Prerequisites

 * You have access to the cluster as a user with the `cluster-admin` role.
 * You have installed the OpenShift CLI (`oc`).

modules/cnf-configuring-kubelet-nro.adoc

Lines changed: 3 additions & 3 deletions
@@ -8,7 +8,7 @@

 The recommended way to configure a single NUMA node policy is to apply a performance profile. Another way is by creating and applying a `KubeletConfig` custom resource (CR), as shown in the following procedure.

-.Procedure
+.Procedure

 . Create the `KubeletConfig` custom resource (CR) that configures the pod admittance policy for the machine profile:

@@ -41,7 +41,7 @@ spec:
 memory: "512Mi"
 topologyManagerPolicy: "single-numa-node" <5>
 ----
-<1> Adjust this label to match the the `machineConfigPoolSelector` in the `NUMAResourcesOperator` CR.
+<1> Adjust this label to match the `machineConfigPoolSelector` in the `NUMAResourcesOperator` CR.
 <2> For `cpuManagerPolicy`, `static` must use a lowercase `s`.
 <3> Adjust this based on the CPU on your nodes.
 <4> For `memoryManagerPolicy`, `Static` must use an uppercase `S`.
@@ -56,5 +56,5 @@ $ oc create -f nro-kubeletconfig.yaml
 +
 [NOTE]
 ====
-Applying performance profile or `KubeletConfig` automatically triggers rebooting of the nodes. If no reboot is triggered, you can troubleshoot the issue by looking at the labels in `KubeletConfig` that address the node group.
+Applying performance profile or `KubeletConfig` automatically triggers rebooting of the nodes. If no reboot is triggered, you can troubleshoot the issue by looking at the labels in `KubeletConfig` that address the node group.
 ====
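For orientation, the following is a minimal sketch of a `KubeletConfig` CR of the kind this module describes, written with upstream KubeletConfiguration field names. The metadata name, pool selector label, and reserved CPU and memory values are illustrative assumptions, not the module's own example:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: single-numa-node-kubeletconfig   # assumed name
spec:
  machineConfigPoolSelector:
    matchLabels:
      cnf-worker-tuning: enabled          # assumed label; match the machineConfigPoolSelector in the NUMAResourcesOperator CR
  kubeletConfig:
    cpuManagerPolicy: "static"            # lowercase "s"
    cpuManagerReconcilePeriod: "5s"
    reservedSystemCPUs: "0,1"             # adjust to the CPUs on your nodes
    systemReserved:
      memory: "412Mi"
    evictionHard:
      memory.available: "100Mi"
    memoryManagerPolicy: "Static"         # uppercase "S"
    reservedMemory:
      - numaNode: 0
        limits:
          memory: "512Mi"                 # must equal the total reserved memory (systemReserved + evictionHard here)
    topologyManagerPolicy: "single-numa-node"
----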

modules/cnf-image-based-upgrade-rollback.adoc

Lines changed: 2 additions & 2 deletions
@@ -44,7 +44,7 @@ $ oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{"spec": {"stage": "R
 The {lcao} reboots the cluster with the previously installed version of {product-title} and restores the applications.
 --

-. If you are satisfied with the changes, finalize the the rollback by patching the value of the `stage` field to `Idle` in the `ImageBasedUpgrade` CR by running the following command:
+. If you are satisfied with the changes, finalize the rollback by patching the value of the `stage` field to `Idle` in the `ImageBasedUpgrade` CR by running the following command:
 +
 --
 [source,terminal]
@@ -56,4 +56,4 @@ $ oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{"spec": {"stage": "I
 ====
 If you move to the `Idle` stage after a rollback, the {lcao} cleans up resources that can be used to troubleshoot a failed upgrade.
 ====
---
+--

modules/customize-certificates-add-service-serving.adoc

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ Because the generated certificates contain wildcard subjects for headless servic
 * Do not accept the service CA as a trusted CA for connections that are directed to individual pods and must not be impersonated by other pods. These connections must be configured to trust the CA that was used to generate the individual TLS certificates.
 ====

-.Prerequisites:
+.Prerequisites

 * You must have a service defined.

modules/ipi-install-troubleshooting-investigating-an-unavailable-kubernetes-api.adoc

Lines changed: 2 additions & 2 deletions
@@ -1,4 +1,4 @@
-// This module is included in the following assemblies:
+// This module is included in the following assemblies:
 //
 // installing/installing_bare_metal_ipi/ipi-install-troubleshooting.adoc

@@ -17,7 +17,7 @@ When the Kubernetes API is unavailable, check the control plane nodes to ensure
 $ sudo crictl logs $(sudo crictl ps --pod=$(sudo crictl pods --name=etcd-member --quiet) --quiet)
 ----

-. If the previous command fails, ensure that Kublet created the `etcd` pods by running the following command:
+. If the previous command fails, ensure that Kubelet created the `etcd` pods by running the following command:
 +
 [source,terminal]
 ----

modules/kmm-validation-status.adoc

Lines changed: 1 addition & 1 deletion
@@ -25,4 +25,4 @@ A `PreflightValidationOCP` resource reports the status and progress of each modu
 * `true`: Verified
 * `false`: Verification failed
 * `error`: Error during the verification process
-* `unknown`: Verfication has not started
+* `unknown`: Verification has not started

modules/lvms-installing-logical-volume-manager-operator-using-rhacm.adoc

Lines changed: 3 additions & 3 deletions
@@ -16,7 +16,7 @@ The `Policy` CR that is created to install {lvms} is also applied to the cluster
 .Prerequisites
 * You have access to the {rh-rhacm} cluster using an account with `cluster-admin` and Operator installation permissions.
 * You have dedicated disks that {lvms} can use on each cluster.
-* The cluster must be be managed by {rh-rhacm}.
+* The cluster must be managed by {rh-rhacm}.

 .Procedure

@@ -129,8 +129,8 @@ spec:
 $ oc create -f <file_name> -n <namespace>
 ----
 +
-Upon creating the `Policy` CR, the following custom resources are created on the clusters that match the selection criteria configured in the `PlacementRule` CR:
+Upon creating the `Policy` CR, the following custom resources are created on the clusters that match the selection criteria configured in the `PlacementRule` CR:

 * `Namespace`
 * `OperatorGroup`
-* `Subscription`
+* `Subscription`

modules/lvms-restoring-volume-snapshots.adoc

Lines changed: 3 additions & 3 deletions
@@ -6,7 +6,7 @@
 [id="lvms-restoring-volume-snapshots_{context}"]
 = Restoring volume snapshots

-To restore a volume snapshot, you must create a persistent volume claim (PVC) with the `dataSource.name` field set to the name of the volume snapshot.
+To restore a volume snapshot, you must create a persistent volume claim (PVC) with the `dataSource.name` field set to the name of the volume snapshot.

 The restored PVC is independent of the volume snapshot and the source PVC.

@@ -43,9 +43,9 @@ spec:
 ----
 <1> Specify the storage size of the restored PVC. The storage size of the requested PVC must be greater than or equal to the stoage size of the volume snapshot that you want to restore. If a larger PVC is required, you can also resize the PVC after restoring the volume snapshot.
 <2> Set this field to the value of the `storageClassName` field in the source PVC of the volume snapshot that you want to restore.
-<3> Set this field to the name of the volume snapshot that you want to restore.
+<3> Set this field to the name of the volume snapshot that you want to restore.

-. Create the PVC in the namespace where you created the the volume snapshot by running the following command:
+. Create the PVC in the namespace where you created the volume snapshot by running the following command:
 +
 [source,terminal]
 ----
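For orientation, a minimal sketch of the kind of restore PVC these callouts refer to; the names `restored-pvc`, `lvms-vg1`, and `my-snapshot` and the 10Gi size are illustrative assumptions, not values from the module:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc            # assumed name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi             # <1> at least the storage size of the volume snapshot
  storageClassName: lvms-vg1    # <2> storageClassName of the source PVC (assumed value)
  dataSource:
    name: my-snapshot           # <3> name of the volume snapshot to restore (assumed value)
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
----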
