Commit 1fc7bf5

Fix syntax
1 parent b5db35a commit 1fc7bf5

File tree

5 files changed: +36 −36 lines changed


modules/hcp-bm-access.adoc

Lines changed: 5 additions & 5 deletions
@@ -15,14 +15,14 @@ To access the hosted cluster by getting the `kubeconfig` file and credentials di
 
 The secret name formats are as follows:
 
-** `kubeconfig` secret: `<hosted-cluster-namespace>-<name>-admin-kubeconfig` (clusters-hypershift-demo-admin-kubeconfig)
-** `kubeadmin` password secret: `<hosted-cluster-namespace>-<name>-kubeadmin-password` (clusters-hypershift-demo-kubeadmin-password)
+** `kubeconfig` secret: `<hosted_cluster_namespace>-<name>-admin-kubeconfig`. For example, `clusters-hypershift-demo-admin-kubeconfig`.
+** `kubeadmin` password secret: `<hosted_cluster_namespace>-<name>-kubeadmin-password`. For example, `clusters-hypershift-demo-kubeadmin-password`.
 
 The `kubeconfig` secret contains a Base64-encoded `kubeconfig` field, which you can decode and save into a file to use with the following command:
 
 [source,terminal]
 ----
-$ oc --kubeconfig <hosted-cluster-name>.kubeconfig get nodes
+$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
 ----
 
 The `kubeadmin` password secret is also Base64-encoded. You can decode it and use the password to log in to the API server or console of the hosted cluster.
@@ -35,12 +35,12 @@ The `kubeadmin` password secret is also Base64-encoded. You can decode it and us
 +
 [source,terminal]
 ----
-$ hcp create kubeconfig --namespace <hosted-cluster-namespace> --name <hosted-cluster-name> > <hosted-cluster-name>.kubeconfig
+$ hcp create kubeconfig --namespace <hosted_cluster_namespace> --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig
 ----
 
 . After you save the `kubeconfig` file, you can access the hosted cluster by entering the following example command:
 +
 [source,terminal]
 ----
-$ oc --kubeconfig <hosted-cluster-name>.kubeconfig get nodes
+$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
 ----
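The decode step that the module describes can be sketched in isolation. The Base64 value below is a stand-in, not a real secret; in practice it would come from something like `oc get secret <name>-admin-kubeconfig -o jsonpath='{.data.kubeconfig}'`, which is an assumed retrieval path, not part of this diff.

```shell
# Stand-in for the Base64-encoded kubeconfig field of the secret.
ENCODED=$(printf 'apiVersion: v1' | base64)

# Decode and save to a file, then inspect it.
printf '%s' "$ENCODED" | base64 -d > demo.kubeconfig
cat demo.kubeconfig
# prints: apiVersion: v1
rm -f demo.kubeconfig
```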

modules/hcp-bm-add-np.adoc

Lines changed: 10 additions & 10 deletions
@@ -15,30 +15,30 @@ You can create node pools for a hosted cluster by specifying a name, number of r
 +
 [source,terminal]
 ----
-export NODEPOOL_NAME=${CLUSTER_NAME}-extra-cpu
-export WORKER_COUNT="2"
-
 $ hcp create nodepool agent \
-  --cluster-name $CLUSTER_NAME \
-  --name $NODEPOOL_NAME \
-  --node-count $WORKER_COUNT \
-  --agentLabelSelector '{"matchLabels": {"size": "medium"}}' <1>
+  --cluster-name <hosted_cluster_name> \// <1>
+  --name <nodepool_name> \// <2>
+  --node-count <worker_node_count> \// <3>
+  --agentLabelSelector '{"matchLabels": {"size": "medium"}}' <4>
 ----
 +
-<1> The `--agentLabelSelector` is optional. The node pool uses agents with the `"size" : "medium"` label.
+<1> Replace `<hosted_cluster_name>` with your hosted cluster name.
+<2> Replace `<nodepool_name>` with the name of your node pool, for example, `<hosted_cluster_name>-extra-cpu`.
+<3> Replace `<worker_node_count>` with the worker node count, for example, `2`.
+<4> The `--agentLabelSelector` flag is optional. The node pool uses agents with the `"size" : "medium"` label.
 
 . Check the status of the node pool by listing `nodepool` resources in the `clusters` namespace:
 +
 [source,terminal]
 ----
-$oc get nodepools --namespace clusters
+$ oc get nodepools --namespace clusters
 ----
 
 . Extract the `admin-kubeconfig` secret by entering the following command:
 +
 [source,terminal]
 ----
-$ oc extract -n <hosted-control-plane-namespace> secret/admin-kubeconfig --to=./hostedcluster-secrets --confirm
+$ oc extract -n <hosted_control_plane_namespace> secret/admin-kubeconfig --to=./hostedcluster-secrets --confirm
 ----
 +
 .Example output
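Because the `--agentLabelSelector` value must be valid JSON, a quick local check can catch shell-quoting mistakes before the `hcp` command runs. This is only a sanity-check sketch: `python3` is used here purely as a convenient JSON parser, and is not part of the documented procedure; the selector value is the one from the diff above.

```shell
# The label selector passed to --agentLabelSelector, parsed locally.
SELECTOR='{"matchLabels": {"size": "medium"}}'
echo "$SELECTOR" | python3 -c 'import json, sys; print(json.load(sys.stdin)["matchLabels"]["size"])'
# prints: medium
```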

modules/hcp-bm-autoscale.adoc

Lines changed: 3 additions & 3 deletions
@@ -15,7 +15,7 @@ When you need more capacity in your hosted cluster and spare agents are availabl
 +
 [source,terminal]
 ----
-$ oc -n <hosted-cluster-namespace> patch nodepool <hosted-cluster-name> --type=json -p '[{"op": "remove", "path": "/spec/replicas"},{"op":"add", "path": "/spec/autoScaling", "value": { "max": 5, "min": 2 }}]'
+$ oc -n <hosted_cluster_namespace> patch nodepool <hosted_cluster_name> --type=json -p '[{"op": "remove", "path": "/spec/replicas"},{"op":"add", "path": "/spec/autoScaling", "value": { "max": 5, "min": 2 }}]'
 ----
 +
 [NOTE]
@@ -71,7 +71,7 @@ $ oc apply -f workload-config.yaml
 +
 [source,terminal]
 ----
-$ oc extract -n <hosted-cluster-namespace> secret/<hosted-cluster-name>-admin-kubeconfig --to=./hostedcluster-secrets --confirm
+$ oc extract -n <hosted_cluster_namespace> secret/<hosted_cluster_name>-admin-kubeconfig --to=./hostedcluster-secrets --confirm
 ----
 +
 .Example output
@@ -90,7 +90,7 @@ $ oc --kubeconfig ./hostedcluster-secrets get nodes
 +
 [source,terminal]
 ----
-$ oc --kubeconfig ./hostedcluster-secrets -n default delete deployment reversewords
+$ oc --kubeconfig ./hostedcluster-secrets -n <namespace> delete deployment <deployment_name>
 ----
 
 . Wait for several minutes to pass without requiring the additional capacity. On the Agent platform, the agent is decommissioned and can be reused. You can confirm that the node was removed by entering the following command:
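The JSON patch in the `oc patch` command above does two things: it removes the fixed `.spec.replicas` count and adds `.spec.autoScaling` bounds. The patch can be inspected standalone before use; this sketch uses `python3` only as a JSON parser (an assumption, not part of the documented command), with the min 2 / max 5 bounds taken from the diff.

```shell
# The two JSON patch operations applied to the NodePool: drop
# .spec.replicas, then add .spec.autoScaling with min/max bounds.
PATCH='[{"op": "remove", "path": "/spec/replicas"},{"op": "add", "path": "/spec/autoScaling", "value": {"max": 5, "min": 2}}]'
echo "$PATCH" | python3 -c 'import json, sys; p = json.load(sys.stdin); print(p[0]["op"], p[1]["value"]["min"], p[1]["value"]["max"])'
# prints: remove 2 5
```

Removing `replicas` matters because a `NodePool` cannot have both a fixed replica count and autoscaling enabled at the same time.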

modules/hcp-bm-scale-np.adoc

Lines changed: 11 additions & 11 deletions
@@ -19,7 +19,7 @@ You can scale up the `NodePool` object by adding nodes to your hosted cluster. W
 +
 [source,terminal]
 ----
-$ oc -n <hosted-cluster-namespace> scale nodepool <nodepool-name> --replicas 2
+$ oc -n <hosted_cluster_namespace> scale nodepool <nodepool_name> --replicas 2
 ----
 +
 The Cluster API agent provider randomly picks two agents that are then assigned to the hosted cluster. Those agents go through different states and finally join the hosted cluster as {product-title} nodes. The agents pass through states in the following order:
@@ -35,7 +35,7 @@ The Cluster API agent provider randomly picks two agents that are then assigned
 +
 [source,terminal]
 ----
-$ oc -n <hosted-control-plane-namespace> get agent
+$ oc -n <hosted_control_plane_namespace> get agent
 ----
 +
 .Example output
@@ -51,7 +51,7 @@ da503cf1-a347-44f2-875c-4960ddb04091 hypercluster1 true auto-assign
 +
 [source,terminal]
 ----
-$ oc -n <hosted-control-plane-namespace> get agent -o jsonpath='{range .items[*]}BMH: {@.metadata.labels.agent-install\.openshift\.io/bmh} Agent: {@.metadata.name} State: {@.status.debugInfo.state}{"\n"}{end}'
+$ oc -n <hosted_control_plane_namespace> get agent -o jsonpath='{range .items[*]}BMH: {@.metadata.labels.agent-install\.openshift\.io/bmh} Agent: {@.metadata.name} State: {@.status.debugInfo.state}{"\n"}{end}'
 ----
 +
 .Example output
@@ -66,14 +66,14 @@ BMH: ocp-worker-1 Agent: da503cf1-a347-44f2-875c-4960ddb04091 State: insufficien
 +
 [source,terminal]
 ----
-$ oc extract -n <hosted-cluster-namespace> secret/<hosted-cluster-name>-admin-kubeconfig --to=- > kubeconfig-<hosted-cluster-name>
+$ oc extract -n <hosted_cluster_namespace> secret/<hosted_cluster_name>-admin-kubeconfig --to=- > kubeconfig-<hosted_cluster_name>
 ----
 
 . After the agents reach the `added-to-existing-cluster` state, verify that you can see the {product-title} nodes in the hosted cluster by entering the following command:
 +
 [source,terminal]
 ----
-$ oc --kubeconfig kubeconfig-<hosted-cluster-name> get nodes
+$ oc --kubeconfig kubeconfig-<hosted_cluster_name> get nodes
 ----
 +
 .Example output
@@ -90,7 +90,7 @@ Cluster Operators start to reconcile by adding workloads to the nodes.
 +
 [source,terminal]
 ----
-$ oc -n <hosted-control-plane-namespace> get machines
+$ oc -n <hosted_control_plane_namespace> get machines
 ----
 +
 .Example output
@@ -107,7 +107,7 @@ The `clusterversion` reconcile process eventually reaches a point where only Ing
 +
 [source,terminal]
 ----
-$ oc --kubeconfig kubeconfig-<hosted-cluster-name> get clusterversion,co
+$ oc --kubeconfig kubeconfig-<hosted_cluster_name> get clusterversion,co
 ----
 +
 .Example output
@@ -116,8 +116,8 @@ $ oc --kubeconfig kubeconfig-<hosted-cluster-name> get clusterversion,co
 NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
 clusterversion.config.openshift.io/version False True 40m Unable to apply 4.x.z: the cluster operator console has not yet successfully rolled out
 
-NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
-clusteroperator.config.openshift.io/console 4.12z False False False 11m RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.hypercluster1.domain.com): Get "https://console-openshift-console.apps.hypercluster1.domain.com": dial tcp 10.19.3.29:443: connect: connection refused
-clusteroperator.config.openshift.io/csi-snapshot-controller 4.12z True False False 10m
-clusteroperator.config.openshift.io/dns 4.12z True False False 9m16s
+NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
+clusteroperator.config.openshift.io/console 4.12z False False False 11m RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.hypercluster1.domain.com): Get "https://console-openshift-console.apps.hypercluster1.domain.com": dial tcp 10.19.3.29:443: connect: connection refused
+clusteroperator.config.openshift.io/csi-snapshot-controller 4.12z True False False 10m
+clusteroperator.config.openshift.io/dns 4.12z True False False 9m16s
 ----

modules/hcp-ibmz-lpar-agents.adoc

Lines changed: 7 additions & 7 deletions
@@ -42,7 +42,7 @@ The `.ins` file includes installation data and is on the FTP server. You can acc
 In {product-title} 4.16, the `.ins` file and `initrd.img.addrsize` are not automatically generated as part of boot-artifacts from the installation program. You must manually generate these files.
 ====
 
-.. Run the following commands to get the size of the `kernel` and `initrd`:
+.. Run the following commands to get the size of the `kernel` and `initrd`:
 +
 [source,yaml]
 ----
@@ -71,25 +71,25 @@ INITRD_ADDR_SIZE_OFFSET=0x00010408
 OFFSET_HEX=$(printf '0x%08x\n' $offset)
 ----
 
-.. Convert the address and size to binary format by running the following commands:
+.. Convert the address and size to binary format by running the following command:
 +
 [source,terminal]
 ----
-printf "$(printf '%016x\n' $initrd_size)" | xxd -r -p > temp_size.bin
+$ printf "$(printf '%016x\n' $initrd_size)" | xxd -r -p > temp_size.bin
 ----
 
 .. Merge the address and size binaries by running the following command:
 +
 [source,terminal]
 ----
-cat temp_address.bin temp_size.bin > "$INITRD_IMG_NAME.addrsize"
+$ cat temp_address.bin temp_size.bin > "$INITRD_IMG_NAME.addrsize"
 ----
 
 .. Clean up temporary files by running the following command:
 +
 [source,terminal]
 ----
-rm -rf temp_address.bin temp_size.bin
+$ rm -rf temp_address.bin temp_size.bin
 ----
 
 .. Create the `.ins` file. The file is based on the paths of the `kernel.img`, `initrd.img`, `initrd.img.addrsize`, and `cmdline` files and the memory locations where the data is to be copied.
@@ -102,8 +102,8 @@ $INITRD_IMG_NAME.addrsize $INITRD_ADDR_SIZE_OFFSET
 $CMDLINE_PATH $KERNEL_CMDLINE_OFFSET
 ----
 
-. Transfer the `initrd`, `kernel`, `generic.ins`, and `initrd.img.addrsize` parameter files to the file server. For more information about how to transfer the files with FTP and boot, see _Installing in an LPAR_.
+. Transfer the `initrd`, `kernel`, `generic.ins`, and `initrd.img.addrsize` parameter files to the file server. For more information about how to transfer the files with FTP and boot, see _Installing in an LPAR_.
 
 . Start the machine.
 
-. Repeat the procedure for all other machines in the cluster.
+. Repeat the procedure for all other machines in the cluster.
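The size-to-binary conversion in this module can be checked without a real `initrd.img`. In this sketch the size is a stand-in value (a real run would obtain it from the file itself); the point is that padding to 16 hex digits is what guarantees an 8-byte binary value, matching the `%016x`-then-`xxd -r -p` pipeline in the diff above.

```shell
# Stand-in initrd size in bytes; not taken from a real image.
initrd_size=1048576

# Pad to 16 hex digits so the binary form is exactly 8 bytes.
hex=$(printf '%016x' "$initrd_size")
echo "$hex"
# prints: 0000000000100000

printf '%s' "$hex" | xxd -r -p > temp_size.bin
wc -c < temp_size.bin    # 8 bytes, matching the address binary it is merged with
rm -f temp_size.bin
```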
