// Module included in the following assemblies:
//
// *

:_mod-docs-content-type: PROCEDURE

[id="installation-workflow_{context}"]
= Installation workflow

After an environment is prepared according to the documented prerequisites, the installation process is the same as on other installer-provisioned infrastructure platforms. Note that for bare-metal environments the installation program is named `openshift-baremetal-install`.
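
Before starting, you can confirm that you are using the bare-metal installation program and check its version; a minimal sketch, assuming the binary was extracted to the current directory:

[source,terminal]
----
$ ./openshift-baremetal-install version
----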

.Prerequisites
* The environment is prepared according to the documented prerequisites for installer-provisioned installation.

.Procedure
The installation program supports interactive mode, but it is recommended to prepare an `install-config.yaml` file in advance that contains all of the details of the bare-metal hosts to be provisioned.

. The administrator fills in all of the details of the cluster in the `install-config.yaml` file.

. The administrator copies the `install-config.yaml` file to the provisioning host.
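+
A condensed sketch of an `install-config.yaml` file for a bare-metal cluster; all names, addresses, and credentials are illustrative placeholders, and the exact schema can vary by version:
+
[source,yaml]
----
apiVersion: v1
baseDomain: example.com
metadata:
  name: example-cluster
networking:
  machineNetwork:
  - cidr: 192.168.1.0/24
compute:
- name: worker
  replicas: 2
controlPlane:
  name: master
  replicas: 3
platform:
  baremetal:
    apiVIPs:
    - 192.168.1.5
    ingressVIPs:
    - 192.168.1.10
    hosts:
    - name: openshift-master-0
      role: master
      bmc:
        address: ipmi://192.168.1.100
        username: admin
        password: <password>
      bootMACAddress: 52:54:00:00:00:01
pullSecret: '<pull_secret>'
sshKey: '<ssh_key>'
----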

. The administrator follows the steps to generate the manifests.
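+
For example, assuming the installation assets live in a `~/clusterconfigs` working directory:
+
[source,terminal]
----
$ ./openshift-baremetal-install --dir ~/clusterconfigs create manifests
----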

. The administrator verifies that all prerequisites are in place.

. The installation program starts the bootstrap VM.
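+
The bootstrap VM is created when the deployment starts; a sketch of the command, assuming the same `~/clusterconfigs` directory:
+
[source,terminal]
----
$ ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster
----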

. The bootstrap VM starts the Ironic pod.

. The Ironic pod contains the following containers, which you can list on the bootstrap VM as shown after this list:
.. `dnsmasq`: A DHCP server responsible for handing over IP addresses to the provisioning interface of the various nodes on the provisioning network.
.. `httpd`: An HTTP server used to ship the images to the nodes.
.. `cluster-bootstrap`
.. `image-customization`
.. `ironic`
.. `ironic-inspector`
.. `ironic-ramdisk-logs`
.. `coreos-downloader`
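+
A minimal way to see these containers, assuming SSH access to the bootstrap VM; the exact container names can vary by version:
+
[source,terminal]
----
$ ssh core@<bootstrap_vm_ip>
[core@localhost ~]$ sudo podman ps --format "{{.Names}}"
----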

. All nodes in the cluster are enrolled in the installation program, that is, in Ironic.

. Nodes enter the validation phase:
.. Ironic verifies that the nodes are accessible.
.. The credentials to access the iLO are checked for each node using the machine network; see the `ipmitool` sketch after this list.
.. Network access to the iLO from the primary nodes using the machine network is also checked. The primary nodes must be able to access the iLO interface of every other node in the cluster (worker, storage, and infra nodes).
.. If any of these checks fail, the whole installation fails.
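+
You can run an equivalent reachability check manually before starting the installation; a sketch, assuming IPMI-capable BMCs and placeholder credentials:
+
[source,terminal]
----
$ ipmitool -I lanplus -H <bmc_ip> -U <username> -P <password> power status
----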

. When the validation phase succeeds for all the nodes, the nodes move to the `manageable` state.
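+
To follow these state transitions, you can tail the installer log from the working directory; a sketch, assuming the `~/clusterconfigs` directory and a debug log level:
+
[source,terminal]
----
$ tail -f ~/clusterconfigs/.openshift_install.log
----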

. After the nodes are in the `manageable` state, the `inspection` phase starts. The inspection phase ensures that the hardware meets the minimum requirements for a successful deployment of OpenShift Container Platform.

. When PXE boot is enabled, that is, when the `install-config.yaml` file contains details for the provisioning network, the installation program on the bootstrap VM first pushes a live image containing the Ironic Python Agent (IPA) to every node, which is loaded into RAM.

.. Each node reboots to start the PXE process.
.. A given node requests an IP address over DHCP, along with the IP address of the TFTP boot server. Both are provided by the `dnsmasq` container running on the bootstrap VM; see the log sketch after this list.
.. The `rootfs` is loaded onto the host over HTTP.
.. The boot loader on a given node loads the kernel and the CoreOS `initramfs`, passing the Ignition and `rootfs` locations as PXE kernel parameters. Ignition then starts the IPA as a normal container inside the RAM disk.
.. The hardware information from each node is sent back to the `ironic-inspector` container on the bootstrap VM.
.. After each host passes inspection, the local IPA on each node waits for further instructions from the bootstrap VM.
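+
To confirm that nodes received DHCP leases and boot files, you can inspect the DHCP container logs on the bootstrap VM; the `dnsmasq` container name is an assumption and can vary by version:
+
[source,terminal]
----
[core@localhost ~]$ sudo podman logs -f dnsmasq
----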

. The nodes enter the `cleaning` state.
.. In this phase, the installation program waits for each node to clean all of its disks.
.. Further configuration can be applied, for example firmware settings or RAID configuration, which is supported only on Fujitsu hardware.

. When the `cleaning` state finishes, the nodes move to the `available` state.

. The installation program moves each node to the `deploying` state.
.. The IPA runs `coreos-installer` to install the RHCOS image on the disk defined by `rootDeviceHints` in the `install-config.yaml` file; see the sketch after this list.
.. The node boots into the newly installed RHCOS.
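+
A minimal sketch of the per-host `rootDeviceHints` stanza in the `install-config.yaml` file; the host and device names are illustrative:
+
[source,yaml]
----
    hosts:
    - name: openshift-master-0
      role: master
      rootDeviceHints:
        deviceName: "/dev/sda"
----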

. After the primary nodes are configured, control moves to the primary nodes and the bootstrap VM is removed.

. The `baremetal-operator` takes over and continues the deployment of the worker, storage, and infra nodes.
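+
You can watch the remaining hosts move through the provisioning states from the cluster, for example:
+
[source,terminal]
----
$ oc get baremetalhosts -n openshift-machine-api -w
----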

. When the installation is done, each node moves to the `active` state.

. The administrator carries out the postinstallation checks.
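+
Typical checks verify that all nodes are ready and that the cluster Operators are available, for example:
+
[source,terminal]
----
$ oc get nodes
$ oc get clusteroperators
$ oc get clusterversion
----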

. The administrator proceeds with Day 2 tasks, that is, the post-deployment phase.