
Commit c57568c

Merge pull request #81324 from lahinson/osdocs-11001-hcp-ibmz
[OSDOCS-11001]: Moving IBM Z docs for HCP
2 parents 66bab8b + 35fe3b8 commit c57568c

12 files changed: +589 -9 lines changed

_topic_maps/_topic_map.yml

Lines changed: 0 additions & 2 deletions
@@ -2414,8 +2414,6 @@ Topics:
   File: hcp-manage-virt
 - Name: Managing hosted control planes on non-bare metal agent machines
   File: hcp-manage-non-bm
-- Name: Managing hosted control planes on IBM Z
-  File: hcp-manage-ibmz
 - Name: Managing hosted control planes on IBM Power
   File: hcp-manage-ibmpower
 - Name: Preparing to deploy hosted control planes in a disconnected environment

hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc

Lines changed: 65 additions & 0 deletions
@@ -5,3 +5,68 @@ include::_attributes/common-attributes.adoc[]
:context: hcp-deploy-ibmz

toc::[]

You can deploy {hcp} by configuring a cluster to function as a management cluster. The management cluster is the {product-title} cluster where the control planes are hosted. The management cluster is also known as the _hosting_ cluster.

[NOTE]
====
The _management_ cluster is not the _managed_ cluster. A managed cluster is a cluster that the hub cluster manages.
====

You can convert a managed cluster to a management cluster by using the `hypershift` add-on to deploy the HyperShift Operator on that cluster. Then, you can start to create the hosted cluster.

The {mce-short} version 2.5 supports only the default `local-cluster`, which is a managed hub cluster, and the hub cluster as the management cluster.
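
For example, you can enable the add-on on the default `local-cluster` by creating a `ManagedClusterAddOn` resource. The following example is a minimal sketch; the add-on name and install namespace reflect common {mce-short} defaults and might differ in your environment:

[source,yaml]
----
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: hypershift-addon # the add-on that deploys the HyperShift Operator
  namespace: local-cluster # the managed cluster to convert
spec:
  installNamespace: open-cluster-management-agent-addon
----
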
:FeatureName: {hcp-capital} on {ibm-z-title}
include::snippets/technology-preview.adoc[]

To provision {hcp} on bare metal, you can use the Agent platform. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For more information, see _Enabling the central infrastructure management service_.

Each {ibm-z-title} system host must be started with the PXE images that the central infrastructure management service provides. After each host starts, it runs an Agent process to discover the details of the host and completes the installation. An Agent custom resource represents each host.

When you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace.

include::modules/hcp-ibmz-prereqs.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/clusters/cluster_mce_overview#advanced-config-engine[Advanced configuration]
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/clusters/cluster_mce_overview#enable-cim[Enabling the central infrastructure management service]
// * Installing the hosted control plane command line interface
* xref:../../hosted_control_planes/hcp-prepare/hcp-enable-disable.adoc[Enabling or disabling the {hcp} feature]

include::modules/hcp-ibmz-infra-reqs.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* xref:../../hosted_control_planes/hcp-prepare/hcp-enable-disable.adoc[Enabling or disabling the {hcp} feature]

include::modules/hcp-ibmz-dns.adoc[leveloffset=+1]
include::modules/hcp-bm-hc.adoc[leveloffset=+1]
include::modules/hcp-ibmz-infraenv.adoc[leveloffset=+1]

[id="hcp-ibmz-add-agents"]
== Adding {ibm-z-title} agents to the InfraEnv resource

To attach compute nodes to a hosted control plane, create agents that help you to scale the node pool. Adding agents in an {ibm-z-title} environment requires additional steps, which are described in detail in this section.

Unless stated otherwise, these procedures apply to both z/VM and RHEL KVM installations on {ibm-z-title} and {ibm-linuxone-title}.

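After the agents are created and approved by following these procedures, you can scale the node pool so that the new compute nodes join the hosted cluster. The following command is a minimal sketch; it assumes the default `clusters` namespace and a node pool that is named after the hosted cluster:

[source,terminal]
----
$ oc -n clusters scale nodepool <hosted_cluster_name> --replicas 2
----
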
include::modules/hcp-ibmz-kvm-agents.adoc[leveloffset=+2]
include::modules/hcp-ibmz-lpar-agents.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources

* link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/performing_a_standard_rhel_8_installation/installing-in-an-lpar_installing-rhel[Installing in an LPAR]

include::modules/hcp-ibmz-zvm-agents.adoc[leveloffset=+2]

include::modules/hcp-ibmz-scale-np.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* xref:../../installing/installing_ibm_z/installing-ibm-z.adoc#installation-operators-config[Initial Operator configuration]

hosted_control_planes/hcp-manage/hcp-manage-ibmz.adoc

Lines changed: 0 additions & 7 deletions
This file was deleted.

modules/hcp-bm-hc.adoc

Lines changed: 1 addition & 0 deletions
@@ -1,6 +1,7 @@
 // Module included in the following assemblies:
 //
 // * hosted_control_planes/hcp-deploy/hcp-deploy-bm.adoc
+// * hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc
 
 :_mod-docs-content-type: PROCEDURE
 [id="hcp-bm-hc_{context}"]

modules/hcp-ibmz-dns.adoc

Lines changed: 48 additions & 0 deletions
@@ -0,0 +1,48 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc

:_mod-docs-content-type: CONCEPT
[id="hcp-ibmz-dns_{context}"]
= DNS configuration for {hcp} on {ibm-z-title}

The API server for the hosted cluster is exposed as a `NodePort` service. A DNS entry must exist for `api.<hosted_cluster_name>.<base_domain>` that points to the destination where the API server is reachable.

The DNS entry can be as simple as a record that points to one of the nodes in the managed cluster that is running the hosted control plane.

The entry can also point to a load balancer that is deployed to redirect incoming traffic to the Ingress pods.

See the following example of a DNS configuration:

[source,terminal]
----
$ cat /var/named/<example.krnl.es.zone>
----

.Example output
[source,terminal]
----
$TTL 900
@ IN SOA bastion.example.krnl.es.com. hostmaster.example.krnl.es.com. (
      2019062002
      1D 1H 1W 3H )
  IN NS bastion.example.krnl.es.com.
;
;
api     IN A 1xx.2x.2xx.1xx <1>
api-int IN A 1xx.2x.2xx.1xx
;
;
*.apps  IN A 1xx.2x.2xx.1xx
;
;EOF
----
<1> The record refers to the IP address of the API load balancer that handles ingress and egress traffic for hosted control planes.

For {ibm-title} z/VM, add IP addresses that correspond to the IP addresses of the agents:

[source,terminal]
----
compute-0 IN A 1xx.2x.2xx.1yy
compute-1 IN A 1xx.2x.2xx.1yy
----
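
Before you create the hosted cluster, you can verify that the records resolve from a machine that can reach your name server. The following check is a minimal sketch; the record name and name server address are placeholders:

[source,terminal]
----
$ dig +short api.<hosted_cluster_name>.<base_domain> @<nameserver_ip>
----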

modules/hcp-ibmz-infra-reqs.adoc

Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc

:_mod-docs-content-type: CONCEPT
[id="hcp-ibmz-infra-reqs_{context}"]
= {ibm-z-title} infrastructure requirements

The Agent platform does not create any infrastructure, but it requires the following infrastructure resources:

* Agents: An _Agent_ represents a host that is booted with a discovery image or PXE image and is ready to be provisioned as an {product-title} node.

* DNS: The API and Ingress endpoints must be routable.

The {hcp} feature is enabled by default. If you disabled the feature and want to manually enable it, or if you need to disable the feature, see _Enabling or disabling the {hcp} feature_.

modules/hcp-ibmz-infraenv.adoc

Lines changed: 43 additions & 0 deletions
@@ -0,0 +1,43 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc

:_mod-docs-content-type: PROCEDURE
[id="hcp-ibmz-infraenv_{context}"]
= Creating an InfraEnv resource for {hcp} on {ibm-z-title}

An `InfraEnv` is an environment where hosts that are booted with PXE images can join as agents. In this case, the agents are created in the same namespace as your hosted control plane.

.Procedure

. Create a YAML file to contain the configuration. See the following example:
+
[source,yaml]
----
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_control_plane_namespace>
spec:
  cpuArchitecture: s390x
  pullSecretRef:
    name: pull-secret
  sshAuthorizedKey: <ssh_public_key>
----

. Save the file as `infraenv-config.yaml`.

. Apply the configuration by entering the following command:
+
[source,terminal]
----
$ oc apply -f infraenv-config.yaml
----

. To fetch the URL to download the PXE images, such as `initrd.img`, `kernel.img`, or `rootfs.img`, which allow {ibm-z-title} machines to join as agents, enter the following command:
+
[source,terminal]
----
$ oc -n <hosted_control_plane_namespace> get InfraEnv <hosted_cluster_name> -o json
----
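
Depending on your {mce-short} version, the download URLs appear in the `status` section of the resource. The following query is a minimal sketch that assumes the artifacts are published under `status.bootArtifacts`; adjust the path to match the JSON output of the previous command:

[source,terminal]
----
$ oc -n <hosted_control_plane_namespace> get InfraEnv <hosted_cluster_name> \
  -o jsonpath='{.status.bootArtifacts}'
----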

modules/hcp-ibmz-kvm-agents.adoc

Lines changed: 60 additions & 0 deletions
@@ -0,0 +1,60 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc

:_mod-docs-content-type: PROCEDURE
[id="hcp-ibmz-kvm-agents_{context}"]
= Adding {ibm-z-title} KVM as agents

For {ibm-z-title} with KVM, run the following command to start your {ibm-z-title} environment with the downloaded PXE images from the `InfraEnv` resource. After the Agents are created, the host communicates with the Assisted Service and registers in the same namespace as the `InfraEnv` resource on the management cluster.

.Procedure

. Run the following command:
+
[source,terminal]
----
virt-install \
   --name "<vm_name>" \ <1>
   --autostart \
   --ram=16384 \
   --cpu host \
   --vcpus=4 \
   --location "<path_to_kernel_initrd_image>,kernel=kernel.img,initrd=initrd.img" \ <2>
   --disk <qcow_image_path> \ <3>
   --network network:macvtap-net,mac=<mac_address> \ <4>
   --graphics none \
   --noautoconsole \
   --wait=-1 \
   --extra-args "rd.neednet=1 nameserver=<nameserver> coreos.live.rootfs_url=http://<http_server>/rootfs.img random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=console=tty1 console=ttyS1,115200n8" <5>
----
+
<1> Specify the name of the virtual machine.
<2> Specify the location of the `kernel_initrd_image` file.
<3> Specify the disk image path.
<4> Specify the MAC address.
<5> Specify the name server of the agents and the HTTP server that hosts the `rootfs.img` file.

. For ISO boot, download the ISO from the `InfraEnv` resource and boot the nodes by running the following command:
+
[source,terminal]
----
virt-install \
   --name "<vm_name>" \ <1>
   --autostart \
   --memory=16384 \
   --cpu host \
   --vcpus=4 \
   --network network:macvtap-net,mac=<mac_address> \ <2>
   --cdrom "<path_to_image.iso>" \ <3>
   --disk <qcow_image_path> \
   --graphics none \
   --noautoconsole \
   --os-variant <os_version> \ <4>
   --wait=-1
----
+
<1> Specify the name of the virtual machine.
<2> Specify the MAC address.
<3> Specify the location of the `image.iso` file.
<4> Specify the operating system version that you are using.
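
After a host boots, you can confirm that it registered by listing the `Agent` resources in the hosted control plane namespace. The following check is a minimal sketch:

[source,terminal]
----
$ oc -n <hosted_control_plane_namespace> get agents
----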

modules/hcp-ibmz-lpar-agents.adoc

Lines changed: 109 additions & 0 deletions
@@ -0,0 +1,109 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc

:_mod-docs-content-type: PROCEDURE
[id="hcp-ibmz-lpar-agents_{context}"]
= Adding {ibm-z-title} LPAR as agents

You can add a logical partition (LPAR) on {ibm-z-title} or {ibm-linuxone-title} as a compute node to a hosted control plane.

.Procedure

. Create a boot parameter file for the agents:
+
.Example parameter file
[source,terminal]
----
rd.neednet=1 cio_ignore=all,!condev \
console=ttysclp0 \
ignition.firstboot ignition.platform.id=metal \
coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ <1>
coreos.inst.persistent-kargs=console=ttysclp0 \
ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ <2>
rd.znet=qeth,<network_adaptor_range>,layer2=1 \
rd.<disk_type>=<adapter> \ <3>
zfcp.allow_lun_scan=0 \
ai.ip_cfg_override=1 \ <4>
random.trust_cpu=on rd.luks.options=discard
----
+
<1> For the `coreos.live.rootfs_url` artifact, specify the matching `rootfs` artifact for the `kernel` and `initramfs` that you are starting. Only HTTP and HTTPS protocols are supported.
<2> For the `ip` parameter, manually assign the IP address, as described in _Installing a cluster with z/VM on {ibm-z-title} and {ibm-linuxone-title}_.
<3> For installations on DASD-type disks, use `rd.dasd` to specify the DASD where {op-system-first} is to be installed. For installations on FCP-type disks, use `rd.zfcp=<adapter>,<wwpn>,<lun>` to specify the FCP disk where {op-system} is to be installed.
<4> Specify this parameter when you use an Open Systems Adapter (OSA) or HiperSockets.
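+
For illustration, the `rd.<disk_type>` entry might look like one of the following lines; the device bus IDs, WWPN, and LUN are placeholders:
+
[source,terminal]
----
rd.dasd=0.0.3490
rd.zfcp=0.0.8000,0x500507630400d1e3,0x4000404600000000
----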

. Generate the `.ins` and `initrd.img.addrsize` files.
+
The `.ins` file includes installation data and resides on the FTP server, where you can access it from the HMC system. The file contains details such as the mapping of the location of the installation data on the disk or FTP server and the memory locations where the data is to be copied.
+
[NOTE]
====
In {product-title} 4.16, the `.ins` file and `initrd.img.addrsize` are not automatically generated as part of the boot artifacts from the installation program. You must manually generate these files.
====

.. Run the following commands to get the size of the `kernel` and `initrd`:
+
[source,terminal]
----
KERNEL_IMG_PATH='./kernel.img'
INITRD_IMG_PATH='./initrd.img'
CMDLINE_PATH='./generic.prm'
kernel_size=$(stat -c%s $KERNEL_IMG_PATH)
initrd_size=$(stat -c%s $INITRD_IMG_PATH)
----

.. Round the `kernel` size up to the next MiB boundary. This value is the starting address of `initrd.img`:
+
[source,terminal]
----
offset=$(( (kernel_size + 1048575) / 1048576 * 1048576 ))
----
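+
For example, a hypothetical `kernel.img` of 9000000 bytes yields `offset=9437184`, that is, 9 MiB:
+
[source,terminal]
----
echo $(( (9000000 + 1048575) / 1048576 * 1048576 )) # prints 9437184
----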

.. Create the kernel binary patch file that contains the `initrd` address and size by running the following commands:
+
[source,terminal]
----
INITRD_IMG_NAME=$(echo $INITRD_IMG_PATH | rev | cut -d '/' -f 1 | rev)
KERNEL_OFFSET=0x00000000
KERNEL_CMDLINE_OFFSET=0x00010480
INITRD_ADDR_SIZE_OFFSET=0x00010408
OFFSET_HEX=$(printf '0x%08x\n' $offset)
----

.. Convert the address and size to binary format by running the following commands:
+
[source,terminal]
----
printf "$(printf '%016x\n' $offset)" | xxd -r -p > temp_address.bin
printf "$(printf '%016x\n' $initrd_size)" | xxd -r -p > temp_size.bin
----

.. Merge the address and size binaries by running the following command:
+
[source,terminal]
----
cat temp_address.bin temp_size.bin > "$INITRD_IMG_NAME.addrsize"
----

.. Clean up temporary files by running the following command:
+
[source,terminal]
----
rm -rf temp_address.bin temp_size.bin
----

.. Create the `.ins` file. The file is based on the paths of the `kernel.img`, `initrd.img`, `initrd.img.addrsize`, and `cmdline` files and the memory locations where the data is to be copied.
+
[source,terminal]
----
$KERNEL_IMG_PATH $KERNEL_OFFSET
$INITRD_IMG_PATH $OFFSET_HEX
$INITRD_IMG_NAME.addrsize $INITRD_ADDR_SIZE_OFFSET
$CMDLINE_PATH $KERNEL_CMDLINE_OFFSET
----
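+
Assuming the example paths from the previous sub-steps and the hypothetical 9 MiB offset, the generated file might read as follows:
+
[source,terminal]
----
./kernel.img 0x00000000
./initrd.img 0x00900000
initrd.img.addrsize 0x00010408
./generic.prm 0x00010480
----
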
. Transfer the `initrd`, `kernel`, `generic.ins`, and `initrd.img.addrsize` parameter files to the file server. For more information about how to transfer the files with FTP and boot, see _Installing in an LPAR_.

. Start the machine.

. Repeat the procedure for all other machines in the cluster.
