
Commit f63ab8a

Merge pull request #90335 from mgencur/short_lines
[OCPBUGS-53162]: Long command-line prompts not visible
2 parents: d717ee0 + 0c4ce88

62 files changed: 314 additions, 144 deletions
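
The whole commit applies one pattern: long single-line prompts inside `[source,terminal]` blocks are split with trailing backslashes so the rendered code block no longer runs past its visible width. A minimal before/after sketch of the pattern, taken from the first command touched below (the shell removes each unquoted backslash-newline pair, so the split command is parsed exactly like the original one-liner):

# Before: one long prompt that can overflow the rendered code block
$ oc --kubeconfig="<hosted_cluster_kubeconfig>" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator

# After: the same command split across two lines with a trailing backslash
$ oc --kubeconfig="<hosted_cluster_kubeconfig>" get tuned.tuned.openshift.io \
-n openshift-cluster-node-tuning-operator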


modules/advanced-node-tuning-hosted-cluster.adoc

Lines changed: 6 additions & 3 deletions
@@ -114,7 +114,8 @@ After the nodes are available, the containerized TuneD daemon calculates the req
 +
 [source,terminal]
 ----
-$ oc --kubeconfig="<hosted_cluster_kubeconfig>" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator
+$ oc --kubeconfig="<hosted_cluster_kubeconfig>" get tuned.tuned.openshift.io \
+-n openshift-cluster-node-tuning-operator
 ----
 +
 .Example output
@@ -130,7 +131,8 @@ rendered 123m
 +
 [source,terminal]
 ----
-$ oc --kubeconfig="<hosted_cluster_kubeconfig>" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator
+$ oc --kubeconfig="<hosted_cluster_kubeconfig>" get profile.tuned.openshift.io \
+-n openshift-cluster-node-tuning-operator
 ----
 +
 .Example output
@@ -149,7 +151,8 @@ Both of the worker nodes in the new `NodePool` have the `openshift-node-hugepage
 +
 [source,terminal]
 ----
-$ oc --kubeconfig="<hosted_cluster_kubeconfig>" debug node/nodepool-1-worker-1 -- chroot /host cat /proc/cmdline
+$ oc --kubeconfig="<hosted_cluster_kubeconfig>" \
+debug node/nodepool-1-worker-1 -- chroot /host cat /proc/cmdline
 ----
 +
 .Example output

modules/backup-etcd-hosted-cluster.adoc

Lines changed: 22 additions & 8 deletions
@@ -19,28 +19,38 @@ This procedure requires API downtime.
 +
 [source,terminal]
 ----
-$ oc patch -n clusters hostedclusters/<hosted_cluster_name> -p '{"spec":{"pausedUntil":"true"}}' --type=merge
+$ oc patch -n clusters hostedclusters/<hosted_cluster_name> \
+-p '{"spec":{"pausedUntil":"true"}}' --type=merge
 ----

 . Stop all etcd-writer deployments by entering the following command:
 +
 [source,terminal]
 ----
-$ oc scale deployment -n <hosted_cluster_namespace> --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver
+$ oc scale deployment -n <hosted_cluster_namespace> --replicas=0 \
+kube-apiserver openshift-apiserver openshift-oauth-apiserver
 ----

 . To take an etcd snapshot, use the `exec` command in each etcd container by entering the following command:
 +
 [source,terminal]
 ----
-$ oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/etcd-ca/ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=localhost:2379 snapshot save /var/lib/data/snapshot.db
+$ oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- \
+env ETCDCTL_API=3 /usr/bin/etcdctl \
+--cacert /etc/etcd/tls/etcd-ca/ca.crt \
+--cert /etc/etcd/tls/client/etcd-client.crt \
+--key /etc/etcd/tls/client/etcd-client.key \
+--endpoints=localhost:2379 \
+snapshot save /var/lib/data/snapshot.db
 ----

 . To check the snapshot status, use the `exec` command in each etcd container by running the following command:
 +
 [source,terminal]
 ----
-$ oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/data/snapshot.db
+$ oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- \
+env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status \
+/var/lib/data/snapshot.db
 ----

 . Copy the snapshot data to a location where you can retrieve it later, such as an S3 bucket. See the following example.
@@ -78,15 +88,17 @@ oc exec -it etcd-0 -n ${HOSTED_CLUSTER_NAMESPACE} -- curl -X PUT -T "/var/lib/da
 +
 [source,terminal]
 ----
-$ oc get hostedcluster <hosted_cluster_name> -o=jsonpath='{.spec.secretEncryption.aescbc}'
+$ oc get hostedcluster <hosted_cluster_name> \
+-o=jsonpath='{.spec.secretEncryption.aescbc}'
 {"activeKey":{"name":"<hosted_cluster_name>-etcd-encryption-key"}}
 ----

 .. Save the secret encryption key by entering the following command:
 +
 [source,terminal]
 ----
-$ oc get secret <hosted_cluster_name>-etcd-encryption-key -o=jsonpath='{.data.key}'
+$ oc get secret <hosted_cluster_name>-etcd-encryption-key \
+-o=jsonpath='{.data.key}'
 ----
 +
 You can decrypt this key when restoring a snapshot on a new cluster.
@@ -95,14 +107,16 @@ You can decrypt this key when restoring a snapshot on a new cluster.
 +
 [source,terminal]
 ----
-$ oc scale deployment -n <control_plane_namespace> --replicas=3 kube-apiserver openshift-apiserver openshift-oauth-apiserver
+$ oc scale deployment -n <control_plane_namespace> --replicas=3 \
+kube-apiserver openshift-apiserver openshift-oauth-apiserver
 ----

 . Resume the reconciliation of the hosted cluster by entering the following command:
 +
 [source,terminal]
 ----
-$ oc patch -n <hosted_cluster_namespace> -p '[\{"op": "remove", "path": "/spec/pausedUntil"}]' --type=json
+$ oc patch -n <hosted_cluster_namespace> \
+-p '[\{"op": "remove", "path": "/spec/pausedUntil"}]' --type=json
 ----

 .Next steps

modules/destroy-hc-ibm-z-cli.adoc

Lines changed: 6 additions & 3 deletions
@@ -14,7 +14,8 @@ To destroy a hosted cluster and its managed cluster on `x86` bare metal with {ib
 +
 [source,terminal]
 ----
-$ oc -n <hosted_cluster_namespace> scale nodepool <nodepool_name> --replicas 0
+$ oc -n <hosted_cluster_namespace> scale nodepool <nodepool_name> \
+--replicas 0
 ----
 +
 After the `NodePool` object is scaled to `0`, the compute nodes are detached from the hosted cluster. In {product-title} version 4.17, this function is applicable only for {ibm-z-title} with KVM. For z/VM and LPAR, you must delete the compute nodes manually.
@@ -26,7 +27,8 @@ If you want to re-attach compute nodes to the cluster, you can scale up the `Nod
 If the compute nodes are not detached from the hosted cluster or are stuck in the `Notready` state, delete the compute nodes manually by running the following command:
 [source,terminal]
 ----
-$ oc --kubeconfig <hosted_cluster_name>.kubeconfig delete node <compute_node_name>
+$ oc --kubeconfig <hosted_cluster_name>.kubeconfig delete \
+node <compute_node_name>
 ----
 ====

@@ -55,5 +57,6 @@ You can delete the virtual machines that you created as agents after you delete
 +
 [source,terminal]
 ----
-$ hcp destroy cluster agent --name <hosted_cluster_name> --namespace <hosted_cluster_namespace>
+$ hcp destroy cluster agent --name <hosted_cluster_name> \
+--namespace <hosted_cluster_namespace>
 ----

modules/dr-hosted-cluster-within-aws-region-backup.adoc

Lines changed: 14 additions & 7 deletions
@@ -14,24 +14,30 @@ To recover your hosted cluster in your target management cluster, you first need
 +
 [source,terminal]
 ----
-$ oc create configmap mgmt-parent-cluster -n default --from-literal=from=${MGMT_CLUSTER_NAME}
+$ oc create configmap mgmt-parent-cluster -n default \
+--from-literal=from=${MGMT_CLUSTER_NAME}
 ----

 . Shut down the reconciliation in the hosted cluster and in the node pools by entering these commands:
 +
 [source,terminal]
 ----
 $ PAUSED_UNTIL="true"
-$ oc patch -n ${HC_CLUSTER_NS} hostedclusters/${HC_CLUSTER_NAME} -p '{"spec":{"pausedUntil":"'${PAUSED_UNTIL}'"}}' --type=merge
-$ oc scale deployment -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator
+$ oc patch -n ${HC_CLUSTER_NS} hostedclusters/${HC_CLUSTER_NAME} \
+-p '{"spec":{"pausedUntil":"'${PAUSED_UNTIL}'"}}' --type=merge
+$ oc scale deployment -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} --replicas=0 \
+kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator
 ----
 +
 [source,terminal]
 ----
 $ PAUSED_UNTIL="true"
-$ oc patch -n ${HC_CLUSTER_NS} hostedclusters/${HC_CLUSTER_NAME} -p '{"spec":{"pausedUntil":"'${PAUSED_UNTIL}'"}}' --type=merge
-$ oc patch -n ${HC_CLUSTER_NS} nodepools/${NODEPOOLS} -p '{"spec":{"pausedUntil":"'${PAUSED_UNTIL}'"}}' --type=merge
-$ oc scale deployment -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator
+$ oc patch -n ${HC_CLUSTER_NS} hostedclusters/${HC_CLUSTER_NAME} \
+-p '{"spec":{"pausedUntil":"'${PAUSED_UNTIL}'"}}' --type=merge
+$ oc patch -n ${HC_CLUSTER_NS} nodepools/${NODEPOOLS} \
+-p '{"spec":{"pausedUntil":"'${PAUSED_UNTIL}'"}}' --type=merge
+$ oc scale deployment -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} --replicas=0 \
+kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator
 ----

 . Back up etcd and upload the data to an S3 bucket by running this bash script:
@@ -89,7 +95,8 @@ For more information about backing up etcd, see "Backing up and restoring etcd o
 +
 [source,terminal]
 ----
-$ mkdir -p ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS} ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}
+$ mkdir -p ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS} \
+${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}
 $ chmod 700 ${BACKUP_DIR}/namespaces/

 # HostedCluster

modules/hcp-access-hc-aws-hcpcli.adoc

Lines changed: 2 additions & 1 deletion
@@ -14,7 +14,8 @@ You can access the hosted cluster by using the `hcp` command-line interface (CLI
 +
 [source,terminal]
 ----
-$ hcp create kubeconfig --namespace <hosted_cluster_namespace> --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig
+$ hcp create kubeconfig --namespace <hosted_cluster_namespace> \
+--name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig
 ----

 . After you save the `kubeconfig` file, you can access the hosted cluster by entering the following command:

modules/hcp-access-priv-mgmt-aws.adoc

Lines changed: 6 additions & 2 deletions
@@ -14,7 +14,10 @@ You can access your private management cluster by using the command-line interfa
 +
 [source,terminal]
 ----
-$ aws ec2 describe-instances --filter="Name=tag:kubernetes.io/cluster/<infra_id>,Values=owned" | jq '.Reservations[] | .Instances[] | select(.PublicDnsName=="") | .PrivateIpAddress'
+$ aws ec2 describe-instances \
+--filter="Name=tag:kubernetes.io/cluster/<infra_id>,Values=owned" \
+| jq '.Reservations[] | .Instances[] | select(.PublicDnsName=="") \
+| .PrivateIpAddress'
 ----

 . Create a `kubeconfig` file for the hosted cluster that you can copy to a node by entering the following command:
@@ -28,7 +31,8 @@ $ hcp create kubeconfig > <hosted_cluster_kubeconfig>
 +
 [source,terminal]
 ----
-$ ssh -o ProxyCommand="ssh ec2-user@<bastion_ip> -W %h:%p" core@<node_ip>
+$ ssh -o ProxyCommand="ssh ec2-user@<bastion_ip> \
+-W %h:%p" core@<node_ip>
 ----

 . From the SSH shell, copy the `kubeconfig` file contents to a file on the node by entering the following command:
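
One caveat about the splitting pattern, offered as an observation rather than part of the commit: an unquoted or double-quoted backslash-newline is removed by the shell (as in the `ProxyCommand` split above), but inside single quotes the backslash and newline stay literal, so splitting inside a single-quoted `jq` filter changes the string that `jq` receives. A minimal bash illustration:

# Inside double quotes, backslash-newline is a continuation: the strings match
$ [ "a \
b" = "a b" ] && echo joined

# Inside single quotes, the backslash and newline are kept literally
$ echo 'a \
b'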

modules/hcp-access-pub-hc-aws-cli.adoc

Lines changed: 2 additions & 1 deletion
@@ -14,7 +14,8 @@ You can access the hosted cluster by using the `hcp` command-line interface (CLI
 +
 [source,terminal]
 ----
-$ hcp create kubeconfig --namespace <hosted_cluster_namespace> --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig
+$ hcp create kubeconfig --namespace <hosted_cluster_namespace> \
+--name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig
 ----

 . After you save the `kubeconfig` file, access the hosted cluster by entering the following command:

modules/hcp-aws-create-dns-hosted-zone.adoc

Lines changed: 3 additions & 1 deletion
@@ -39,7 +39,9 @@ $ dig +short test.user-dest-public.aws.kerberos.com
 +
 [source,terminal]
 ----
-$ hcp create cluster aws --name=<hosted_cluster_name> --endpoint-access=PublicAndPrivate --external-dns-domain=<public_hosted_zone> ... <1>
+$ hcp create cluster aws --name=<hosted_cluster_name> \
+--endpoint-access=PublicAndPrivate \
+--external-dns-domain=<public_hosted_zone> ... <1>
 ----
 +
 <1> Replace `<public_hosted_zone>` with the public hosted zone that you created.

modules/hcp-aws-create-public-zone.adoc

Lines changed: 3 additions & 1 deletion
@@ -14,7 +14,9 @@ To access applications in your hosted clusters, you must configure the routable
 +
 [source,terminal]
 ----
-$ aws route53 create-hosted-zone --name <basedomain> --caller-reference $(whoami)-$(date --rfc-3339=date) <1>
+$ aws route53 create-hosted-zone \
+--name <basedomain> \// <1>
+--caller-reference $(whoami)-$(date --rfc-3339=date)
 ----
 +
 <1> Replace `<basedomain>` with your base domain, for example, `www.example.com`.

modules/hcp-aws-create-role-sts-creds.adoc

Lines changed: 10 additions & 10 deletions (whitespace-only changes)
@@ -49,9 +49,9 @@ Use this output as the value for `<arn>` in the next step.
 [source,terminal]
 ----
 $ aws iam create-role \
---role-name <name> \// <1>
---assume-role-policy-document file://<file_name>.json \// <2>
---query "Role.Arn"
+--role-name <name> \// <1>
+--assume-role-policy-document file://<file_name>.json \// <2>
+--query "Role.Arn"
 ----
 <1> Replace `<name>` with the role name, for example, `hcp-cli-role`.
 <2> Replace `<file_name>` with the name of the JSON file you created in the previous step.
@@ -221,11 +221,11 @@ $ aws sts get-session-token --output json > sts-creds.json
 [source,json]
 ----
 {
-"Credentials": {
-"AccessKeyId": "ASIA1443CE0GN2ATHWJU",
-"SecretAccessKey": "XFLN7cZ5AP0d66KhyI4gd8Mu0UCQEDN9cfelW1”,
-"SessionToken": "IQoJb3JpZ2luX2VjEEAaCXVzLWVhc3QtMiJHMEUCIDyipkM7oPKBHiGeI0pMnXst1gDLfs/TvfskXseKCbshAiEAnl1l/Html7Iq9AEIqf////KQburfkq4A3TuppHMr/9j1TgCj1z83SO261bHqlJUazKoy7vBFR/a6LHt55iMBqtKPEsIWjBgj/jSdRJI3j4Gyk1//luKDytcfF/tb9YrxDTPLrACS1lqAxSIFZ82I/jDhbDs=",
-"Expiration": "2025-05-16T04:19:32+00:00"
-}
-}
+"Credentials": {
+"AccessKeyId": "ASIA1443CE0GN2ATHWJU",
+"SecretAccessKey": "XFLN7cZ5AP0d66KhyI4gd8Mu0UCQEDN9cfelW1”,
+"SessionToken": "IQoJb3JpZ2luX2VjEEAaCXVzLWVhc3QtMiJHMEUCIDyipkM7oPKBHiGeI0pMnXst1gDLfs/TvfskXseKCbshAiEAnl1l/Html7Iq9AEIqf////KQburfkq4A3TuppHMr/9j1TgCj1z83SO261bHqlJUazKoy7vBFR/a6LHt55iMBqtKPEsIWjBgj/jSdRJI3j4Gyk1//luKDytcfF/tb9YrxDTPLrACS1lqAxSIFZ82I/jDhbDs=",
+"Expiration": "2025-05-16T04:19:32+00:00"
+}
+}
 ----
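
A rough way to spot remaining candidates for this treatment is to search the module sources for overlong lines. This is a hypothetical helper, not part of the commit, and the 100-character threshold is an assumption:

# Hypothetical check: list .adoc lines longer than 100 characters under modules/
$ grep -rnE '.{101,}' --include='*.adoc' modules/ | head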
