From f93ec02aa7fbcab89b2e080304aae08650c371c6 Mon Sep 17 00:00:00 2001 From: Rutuja Date: Fri, 27 Sep 2024 15:50:03 +0530 Subject: [PATCH 1/2] Edits suggested by Rutuja --- modules/ROOT/pages/index.adoc | 8 +++-- modules/appendix/pages/section1.adoc | 40 +++++++++++----------- modules/appendix/pages/section2.adoc | 10 +++--- modules/appendix/pages/section3.adoc | 18 +++++----- modules/appendix/pages/section4.adoc | 24 ++++++------- modules/chapter1/pages/section1.adoc | 25 +++++++------- modules/chapter1/pages/section2.adoc | 10 +++--- modules/chapter1/pages/section3.adoc | 4 +-- modules/chapter1/pages/section4.adoc | 32 +++++++++--------- modules/chapter2/pages/index.adoc | 2 +- modules/chapter2/pages/section1.adoc | 5 +-- modules/chapter2/pages/section2.adoc | 46 ++++++++++++------------- modules/chapter2/pages/section3.adoc | 28 ++++++++-------- modules/chapter2/pages/section4.adoc | 22 ++++++------ modules/chapter3/pages/index.adoc | 6 ++-- modules/chapter3/pages/section1.adoc | 6 ++-- modules/chapter3/pages/section2.adoc | 50 ++++++++++++++-------------- modules/chapter3/pages/section3.adoc | 26 +++++++-------- modules/chapter3/pages/section4.adoc | 34 +++++++++---------- modules/chapter3/pages/section5.adoc | 20 +++++------ modules/chapter4/pages/index.adoc | 4 +-- modules/chapter4/pages/section1.adoc | 18 +++++----- modules/chapter4/pages/section2.adoc | 42 +++++++++++------------ modules/chapter4/pages/section3.adoc | 20 +++++------ modules/chapter4/pages/section4.adoc | 14 ++++---- modules/chapter4/pages/section5.adoc | 30 ++++++++--------- 26 files changed, 275 insertions(+), 269 deletions(-) diff --git a/modules/ROOT/pages/index.adoc b/modules/ROOT/pages/index.adoc index 4e03f24..f9505df 100644 --- a/modules/ROOT/pages/index.adoc +++ b/modules/ROOT/pages/index.adoc @@ -22,18 +22,20 @@ The PTL team acknowledges the valuable contributions of the following Red Hat as This introductory course has a few, simple hands-on labs. You will use the https://demo.redhat.com/catalog?item=babylon-catalog-prod/equinix-metal.eqx-blank.prod&utm_source=webapp&utm_medium=share-link.ocp4-workshop-rhods-base-aws.prod[Equinix Metal baremetal blank server,window=read-later] catalog item in the Red Hat Demo Platform (RHDP) to run the hands-on exercises in this course. Update the Catalog link - ##FIX THIS## +// Can you fix the above link? When ordering this catalog item in RHDP: . Select *Practice/Enablement* for the *Activity* field. . Select the nearest or any region in the *Region* field. -. Select *n3.xlarge.x86* as the flavor in *Type* field. +. Select *n3.xlarge.x86* as the flavor in the *Type* field. === Red Hat Partners Partners should be able to access the new https://partner.demo.redhat.com[Red Hat Demo Platform for Partners,window=read-later] by logging in with their RHN account credentials. For more information on this you can refer https://content.redhat.com/us/en/product/cross-portfolio-initiatives/rhdp.html#tabs-333fa7ebb9-item-b6fc845e73-tab[about Red Hat Demo Platform (RHDP),window=read-later] Catalog link - ##FIX THIS## +// Can you fix the above link? For partner support - https://connect.redhat.com/en/support[Help and support,window=read-later] @@ -44,8 +46,10 @@ For this course, you should have: * Red Hat Certified Systems Administrator certification, or equivalent knowledge of Linux system administration is recommended for all roles. 
* https://rol.redhat.com/rol/app/courses/do280-4.14[Red Hat OpenShift Administration II: Configuring a Production Cluster (DO280),window=read-later], or equivalent knowledge on configuring a production Red Hat OpenShift cluster. * https://rol.redhat.com/rol/app/technical-overview/do016-4.14[Red Hat OpenShift Virtualization (OCP Virt) Technical Overview (DO016),window=read-later] +// Above course showed - temporarily unavailable for me. Can you please check? * https://rol.redhat.com/rol/app/courses/do316-4.14[Managing Virtual Machines with Red Hat OpenShift Virtualization (DO316),window=read-later], or equivalent knowledge on how to create virtual machines on Red Hat OpenShift cluster using Red Hat OpenShift Virtualization. -* Knowledge on installing Red Hat Advanced Cluster Management (RHACM) operator +// Avove course showed - temporarily unavailable for me. Can you please check? +* Knowledge of installing Red Hat Advanced Cluster Management (RHACM) operator == Objectives diff --git a/modules/appendix/pages/section1.adoc b/modules/appendix/pages/section1.adoc index 67ba2cd..e02e501 100644 --- a/modules/appendix/pages/section1.adoc +++ b/modules/appendix/pages/section1.adoc @@ -1,7 +1,7 @@ = Initial Setup on Hypervisor without Automation -Before proceeding the actual MCAP deployment, let's first ensure the initial setup and configuration is done on the hypervisor. -You will need to perform few tasks before actual deployment of openshift clusters. +Before proceeding with the actual MCAP deployment, let's first ensure the initial setup and configuration are done on the hypervisor. +You will need to perform a few tasks before the actual deployment of openshift clusters. image::MCAP_setup_1.png[] @@ -20,10 +20,10 @@ Login as `root` on the hypervisor. == Setup and Configuration without Automation -=== Create SSH key for the root user +=== Create an SSH key for the root user The public key from the following command will be used while deploying _Hub_ and _Infrastructure_ clusters. -You will be asked for public key while building the discovery iso for the host. +You will be asked for a public key while building the discovery iso for the host. [source,bash,role=execute] ---- @@ -32,8 +32,8 @@ ssh-keygen -t rsa -f /root/.ssh/id_rsa -N '' === Install the required packages -The following required packages needed for DHCP server, HTTP server, DNS server, VNC server and creating virtual machines on hypervisor. -It also include additional tools packages. +The following required packages are needed for DHCP server, HTTP server, DNS server, VNC server, and creating virtual machines on the hypervisor. +It also includes additional tools and packages. [source,bash,role=execute] ---- @@ -44,9 +44,9 @@ libguestfs-tools-c cockpit cockpit-machines unzip tigervnc tigervnc-server firef === Increase the _Swap_ space -_Swap_ space is extension of physical RAM. +_Swap_ space is an extension of physical RAM. It offers virtual memory in case of physical RAM is fully used. -MCAP needs considerable amount of memory and it is recommended to have proportionate amount of _Swap_ space configured in an environment. +MCAP needs a considerable amount of memory and it is recommended to have a proportionate amount of _Swap_ space configured in an environment. . Disable the existing _Swap_ first. + @@ -55,7 +55,7 @@ MCAP needs considerable amount of memory and it is recommended to have proportio swapoff -a ---- + -Check the _Swap_ space using `free` command. +Check the _Swap_ space using the `free` command. 
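+
For example, you can print the current memory and _Swap_ usage in a human-readable form with the `-h` flag:
+
[source,bash,role=execute]
----
free -h
----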
+ .Sample output ---- @@ -93,7 +93,7 @@ Here `nvme1n1` disk can be used to create the swap space. The NVMe disk sequencing used as an OS partition will change, whenever you provision the catalog. The unused NVMe disk may vary in your environment output. -. Use unused 250GB nvme disk as swap partition. +. Use unused 250GB nvme disk as a swap partition. + [source,bash,role=execute] ---- @@ -111,7 +111,7 @@ The above UUID may differ in your environment. . Add swap entry in `/etc/fstab` to make it persistent throughout the reboot. + -Ensure to replace the UUID in following command with UUID from previous step. +Ensure to replace the UUID in following command with the UUID from the previous step. + [source,bash,role=execute] ---- @@ -147,7 +147,7 @@ Swap: 238Gi 0B 238Gi === Create LV for VM storage pool -The LV created in this section will be used as storage pool for virtual machine disks and backend shared OpenShift DataFoundation using Red Hat Ceph storage for _Tenant_ cluster. +The LV created in this section will be used as a storage pool for virtual machine disks and backend-shared OpenShift DataFoundation using Red Hat Ceph storage for the _Tenant_ cluster. . Find the 3.5TB nvme disks. + @@ -229,14 +229,14 @@ The above UUID may differ in your environment. . Mount the 7TB LV on `/var/lib/libvirt/images`. + -Ensure to replace the UUID in following command with UUID from previous step. +Ensure to replace the UUID in the following command with UUID from the previous step. + [source,bash,role=execute] ---- echo "UUID=195dc91e-58be-4671-bbf5-b4fdf70945e2 /var/lib/libvirt/images ext4 errors=remount-ro 0 1" >> /etc/fstab ---- + -Run `mount` command to mount the LV on `/var/lib/libvirt/images`. +Run the `mount` command to mount the LV on `/var/lib/libvirt/images`. + [source,bash,role=execute] ---- @@ -273,7 +273,7 @@ After enabling and starting the libvirt services, `virbr0` bridge will be create You can verify it by running the `ip addr` command. After enabling and starting the cockpit services, it creates cockpit web console access. -You can login to cockpit web console with `lab-user's` credentials. +You can log in to the cockpit web console with the `lab-user's` credentials. [source,bash,role=execute] ---- @@ -300,7 +300,7 @@ You can use the cockpit web console (https://:9090/) to moni === Configure DHCP -It is recommended to have DHCP server. +It is recommended to have the DHCP server. In this section, you will be configuring the DHCP server. . Create the `/etc/dhcp/dhcpd.conf` file. @@ -393,7 +393,7 @@ systemctl start dhcpd === Configure DNS -To have name resolution, DNS server is needed. +To have name resolution, the DNS server is needed. In this section, you will be configuring the DNS server. . Create the `/etc/named.conf` file. @@ -692,9 +692,9 @@ dig sno1.lab.example.com === Configure HTTP The HTTP server is needed to serve the ignition configuration files. -These ignition configuration files will be pulled from HTTP server during the openshift node installation. +These ignition configuration files will be pulled from the HTTP server during the openshift node installation. In this section, you will be configuring the HTTP server. -There are multiple ways to configure the HTTP server but here directory from user's home directory holds the files. +There are multiple ways to configure the HTTP server but here directory from the user's home directory holds the files. . Create the `/etc/httpd/conf.d/userdir.conf` file. 
+ @@ -818,7 +818,7 @@ rm /home/lab-user/public_html/cmd [NOTE] "HTTP/1.1 200 OK" indicates http server is working. -=== Create Storage Pool for KVMs +=== Create a Storage Pool for KVMs All five KVMs need the storage pool for storing the VM disks. In this section, you will be creating the storage pool. diff --git a/modules/appendix/pages/section2.adoc b/modules/appendix/pages/section2.adoc index e0c0452..3c56257 100644 --- a/modules/appendix/pages/section2.adoc +++ b/modules/appendix/pages/section2.adoc @@ -8,7 +8,7 @@ image::MCAP_setup_2.png[] . Download the `rhel-9.4-x86_64-kvm.qcow2` image from the https://access.redhat.com/downloads/content/rhel[Red Hat Customer Portal,window=read-later] to your laptop/desktop. -. Use secured copy (scp) to copy the `rhel-9.4-x86_64-kvm.qcow2` image from your laptop/desktop to hypervisor and then place it in `/root` directory. +. Use secured copy (scp) to copy the `rhel-9.4-x86_64-kvm.qcow2` image from your laptop/desktop to the hypervisor and then place it in `/root` directory. + .Sample output ---- @@ -63,7 +63,7 @@ cluster_size: 65536 ...output omitted... ---- -. Resize the image with additional size of 30G. +. Resize the image with an additional size of 30G. + [source,bash,role=execute] ---- @@ -72,7 +72,7 @@ qemu-img resize rhel-9.4-x86_64-kvm.qcow2 +30G + This increases the virtual size of the disk. + -Ensure virtual size is increased by 30GB. +Ensure the virtual size is increased by 30GB. + .Sample output ---- @@ -121,7 +121,7 @@ https://access.redhat.com/solutions/57263[How to extend a XFS filesytem using th virt-customize -a rhel-9.4-x86_64-kvm.qcow2 --root-password password:redhat ---- + -You can use this password for logging into VM via console. +You can use this password for logging into VM via the console. . Disable the `cloud-init` service in qcow2 image. + @@ -207,7 +207,7 @@ Domain 'storage' started ---- + -Verify `storage` VM is in `running` state. +Verify `storage` VM is in a `running` state. + .Sample output ---- diff --git a/modules/appendix/pages/section3.adoc b/modules/appendix/pages/section3.adoc index 39cd58f..304c423 100644 --- a/modules/appendix/pages/section3.adoc +++ b/modules/appendix/pages/section3.adoc @@ -1,12 +1,12 @@ = Hub VM Deployment without Automation -In this section, you will be creating one KVM with name `hub`. +In this section, you will be creating one KVM with the name `hub`. image::MCAP_setup_3.png[] == Prerequisites -Copy the `rhel-9.4-x86_64-kvm.qcow2` image from `/root` and place it in /var/lib/libvirt/images directory with name as `rhel9-guest.qcow2`. +Copy the `rhel-9.4-x86_64-kvm.qcow2` image from `/root` and place it in /var/lib/libvirt/images directory with the name as `rhel9-guest.qcow2`. .Sample output ---- @@ -48,7 +48,7 @@ cluster_size: 65536 ...output omitted... ---- -. Resize the image with additional size of 120G. +. Resize the image with an additional size of 120G. + [source,bash,role=execute] ---- @@ -57,7 +57,7 @@ qemu-img resize rhel9-guest.qcow2 +120G + This increases the virtual size of the disk. + -Ensure virtual size is increased by 120GB. +Ensure the virtual size is increased by 120GB. + .Sample output ---- @@ -107,7 +107,7 @@ https://access.redhat.com/solutions/57263[How to extend a XFS filesytem using th virt-customize -a rhel9-guest.qcow2 --root-password password:redhat ---- + -You can use this password for logging into VM via console. +You can use this password for logging into the VM via the console. . Disable the `cloud-init` service in qcow2 image. 
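+
One way to do this, shown here only as an illustrative sketch (the exact `virt-customize` options used in this course may differ), is:
+
[source,bash,role=execute]
----
# Disable cloud-init inside the image so the guest is not reconfigured on first boot
virt-customize -a rhel9-guest.qcow2 --run-command 'systemctl disable cloud-init.service'
----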
+ @@ -130,7 +130,7 @@ virt-customize -a rhel9-guest.qcow2 --ssh-inject root:file:/root/.ssh/id_rsa.pub virt-customize -a rhel9-guest.qcow2 --selinux-relabel ---- -. Create the image for `hub` VM using the _rhel9.X_ qcow2 image. +. Create the image for the `hub` VM using the _rhel9.X_ qcow2 image. + [source,bash,role=execute] ---- @@ -144,7 +144,7 @@ Formatting '/var/lib/libvirt/images/hub.qcow2', fmt=qcow2 cluster_size=65536 ext . Create the `hub` VM with three 2TB disks. Disk path should be storage pool path i.e. `/var/lib/libvirt/images/`. -mac address for the `hub` VM should be same as from the dhcp configuration. +mac address for the `hub` VM should be the same as from the dhcp configuration. + [source,bash,role=execute] ---- @@ -190,7 +190,7 @@ Domain 'hub' started ---- + -Verify `hub` VM is in `running` state. +Verify `hub` VM is in a `running` state. + .Sample output ---- @@ -204,7 +204,7 @@ virsh list --all . Verify `hub` VM is booted successfully. + -Take the console of the `hub` VM and login as _root_ user with _redhat_ as password. +Take the console of the `hub` VM and log in as _root_ user with _redhat_ as the password. + [source,bash,role=execute] ---- diff --git a/modules/appendix/pages/section4.adoc b/modules/appendix/pages/section4.adoc index bf44dfd..c0bab3e 100644 --- a/modules/appendix/pages/section4.adoc +++ b/modules/appendix/pages/section4.adoc @@ -1,12 +1,12 @@ = Infrastructure VM Deployment without Automation -In this section, you will be creating three KVMs with name `sno1`, `sno2` and `sno3`. +In this section, you will be creating three KVMs with the name `sno1`, `sno2`, and `sno3`. image::MCAP_setup_4.png[] == Prerequisites -Copy the `rhel-9.4-x86_64-kvm.qcow2` image from `/root` and place it in `/var/lib/libvirt/images` directory with name as `rhel9-guest-sno.qcow2`. +Copy the `rhel-9.4-x86_64-kvm.qcow2` image from `/root` and place it in `/var/lib/libvirt/images` directory with the name `rhel9-guest-sno.qcow2`. .Sample output ---- @@ -48,7 +48,7 @@ cluster_size: 65536 ...output omitted... ---- -. Resize the image with additional size of 120G. +. Resize the image with an additional size of 120G. + [source,bash,role=execute] ---- @@ -57,7 +57,7 @@ qemu-img resize rhel9-guest-sno.qcow2 +120G + This increases the virtual size of the disk. + -Ensure virtual size is increased by 120GB. +Ensure the virtual size is increased by 120GB. + .Sample output ---- @@ -107,7 +107,7 @@ https://access.redhat.com/solutions/57263[How to extend a XFS filesytem using th virt-customize -a rhel9-guest-sno.qcow2 --root-password password:redhat ---- + -You can use this password for logging into VM via console. +You can use this password for logging into the VM via the console. . Disable the `cloud-init` service in qcow2 image. + @@ -165,7 +165,7 @@ Formatting '/var/lib/libvirt/images/sno2.qcow2', fmt=qcow2 cluster_size=65536 ex Formatting '/var/lib/libvirt/images/sno3.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=139586437120 backing_file=/var/lib/libvirt/images/rhel9-guest-sno.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16 ---- -. Create the `sno1`, `sno2` and `sno3` VMs. +. Create the `sno1`, `sno2`, and `sno3` VMs. Disk path should be storage pool path i.e. `/var/lib/libvirt/images/`. mac address for the `sno1`, `sno2` and `sno3` VMs should be same as from the dhcp configuration. 
+ @@ -229,7 +229,7 @@ You can restart your domain by running: virsh --connect qemu:///system start sno3 ---- + -Verify `sno1`, `sno2` and `sno3` VMs are created and in `shut off` state. +Verify that `sno1`, `sno2` and `sno3` VMs are created and are in `shut off` state. + .Sample output ---- @@ -244,7 +244,7 @@ virsh list --all - sno3 shut off ---- -. Start the `sno1`, `sno2` and `sno3` VMs. +. Start the `sno1`, `sno2`, and `sno3` VMs. + [source,bash,role=execute] ---- @@ -261,7 +261,7 @@ Domain 'sno3' started ---- + -Verify `sno1`, `sno2` and `sno3` VMs are in `running` state. +Verify `sno1`, `sno2`, and `sno3` VMs are in `running` state. + .Sample output ---- @@ -276,9 +276,9 @@ virsh list --all 25 sno3 running ---- -. Verify `sno1`, `sno2` and `sno3` VMs are booted successfully. +. Verify `sno1`, `sno2` and, `sno3` VMs are booted successfully. + -Take the console of the `sno1`, `sno2` and `sno3` VMs and login as _root_ user with _redhat_ as password. +Take the console of the `sno1`, `sno2`, and `sno3` VMs and login as _root_ user with _redhat_ as password. + [source,bash,role=execute] ---- @@ -297,4 +297,4 @@ Password: [root@sno1 ~]# ---- + -Similarly verify `sno2` and `sno3` VMs are booted successfully. \ No newline at end of file +Similarly, verify that `sno2` and `sno3` VMs are booted successfully. \ No newline at end of file diff --git a/modules/chapter1/pages/section1.adoc b/modules/chapter1/pages/section1.adoc index c3614ab..9fc7906 100644 --- a/modules/chapter1/pages/section1.adoc +++ b/modules/chapter1/pages/section1.adoc @@ -58,17 +58,18 @@ https://linux-kvm.org/page/Main_Page[Linux-KVM,window=read-later] === Utility and Services -The bare metal is acting as hypervisor, http server, dhcp server and dns server. +The bare metal is acting as a hypervisor, http server, dhcp server, and dns server. +// Should the above names - HTTP, DHCP be capitalized? Configure all these services on the bare metal. === Networking -In this example there are three Infrastructure clusters. +In this example, there are three Infrastructure clusters. This setup uses KVMs as the base infrastructure. All external communication between your clusters will happen via a https://developers.redhat.com/blog/2018/10/22/introduction-to-linux-interfaces-for-virtual-networking#bridge[virtual bridge,window=read-later] on the bare metal. Install the https://docs.openshift.com/container-platform/4.16/networking/k8s_nmstate/k8s-nmstate-about-the-k8s-nmstate-operator.html[Kubernetes NMState Operator,window=read-later] on the _Infrastructure_ clusters. -NMstate operator allows users to configure various network interface types, DNS and routing on cluster nodes. +NMstate operator allows users to configure various network interface types, DNS, and routing on cluster nodes. Two main object types drive the configuration. * NodeNetworkConfigurationPolicy (Policy) @@ -82,18 +83,18 @@ Non-volatile Memory Express (NVMe) disks are attached to this hypervisor. In this lab setup, use https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_logical_volumes/index[LVM,window=read-later] to configure storage. [NOTE] -Non-volatile Memory Express (NVMe) is an interface that allows host software utility to communicate with solid state drives. +Non-volatile Memory Express (NVMe) is an interface that allows host software utility to communicate with solid-state drives. -On the hypervisor, first create 7TB (3.5TB + 3.5TB disks) logical volume (LV) and mount on storage pool directory path. 
+On the hypervisor, first create a 7TB (3.5TB + 3.5TB disks) logical volume (LV) and mount it on the storage pool directory path. This stores VMs images. There is a separate storage virtual machine (VM) for shared storage for _Tenant_ cluster. -Create three disks each of 2TB from the 7TB LV and attach it to the storage VM. -These disks are needed in Ceph deployment on storage VM. +Create three disks each of 2TB from the 7TB LV and attach them to the storage VM. +These disks are needed in Ceph deployment on the storage VM. Next, install https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/red_hat_openshift_data_foundation_architecture/openshift_data_foundation_operators[OpenShift Data Foundation (ODF) operator,window=read-later] on the _Infrastructure_ clusters. Create an OpenShift Data Foundation cluster for https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_in_external_mode/deploy-openshift-data-foundation-using-red-hat-ceph-storage#creating-an-openshift-data-foundation-cluster-service-for-external-storage_ceph-external[external Ceph storage system,window=read-later]. -This is the backend shared storage for _Tenant_ cluster. +This is the backend shared storage for the _Tenant_ cluster. === Hub Cluster @@ -101,18 +102,18 @@ The main role of _Hub_ cluster is to deploy _Infrastructure_ clusters and _Tenan Deploy _Hub_ cluster as https://docs.openshift.com/container-platform/4.16/installing/installing_sno/install-sno-installing-sno.html[Single Node OpenShift (SNO) cluster,window=read-later]. -Install following operators on _Hub_ cluster which acts as hub cluster. +Install the following operators on the _Hub_ cluster which acts as the hub cluster. * Multi cluster engine * Red Hat Advanced Cluster Management (RHACM) * Logical Volume Manager Storage (LVMS) -Ensure https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html-single/clusters/index#enable-cim[Provisioning and Central Infrastructure Management (CIM),window=read-later] services are deployed on _Hub_ cluster. +Ensure https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html-single/clusters/index#enable-cim[Provisioning and Central Infrastructure Management (CIM),window=read-later] services are deployed on the _Hub_ cluster. === Infrastructure Cluster The main role of _Infrastructure_ clusters is to deploy Nested OpenShift VMs using OpenShift Virtualization. -Each _Infrastructure_ cluster has one virtual machine that acts as OpenShift node in _Tenant_ cluster. +Each _Infrastructure_ cluster has one virtual machine that acts as an OpenShift node in the _Tenant_ cluster. There are three _Infrastructure_ clusters. Deploy _Infrastructure_ clusters as _Single Node OpenShift (SNO)_ cluster from _Hub_ cluster using RHACM. @@ -124,7 +125,7 @@ Upload discovery ISO to all _Infrastructure_ clusters to boot the _Tenant_ clust === Tenant Cluster There is only one _Tenant_ cluster. -Deploy _Tenant_ cluster as https://docs.openshift.com/container-platform/4.5/release_notes/ocp-4-5-release-notes.html#ocp-4-5-three-node-bare-metal-deployments[_Three-Node OpenShift Compact_ cluster,window=read-later] from _Hub_ cluster using RHACM. 
+Deploy _Tenant_ cluster as https://docs.openshift.com/container-platform/4.5/release_notes/ocp-4-5-release-notes.html#ocp-4-5-three-node-bare-metal-deployments[_Three-Node OpenShift Compact_ cluster,window=read-later] from the _Hub_ cluster using RHACM. These three nodes are the virtual machines running on _Infrastructure_ clusters. All applications and workloads are running on _Tenant_ cluster nodes. \ No newline at end of file diff --git a/modules/chapter1/pages/section2.adoc b/modules/chapter1/pages/section2.adoc index f2162c9..93869d5 100644 --- a/modules/chapter1/pages/section2.adoc +++ b/modules/chapter1/pages/section2.adoc @@ -1,10 +1,10 @@ = Initial Setup on Hypervisor [NOTE] -You can refer to how to configure manually i.e. without automation at the end appendix chapter. +You can refer to the appendix at the end for manual configuration instructions without automation. -Before proceeding with the actual MCAP deployment, let's first ensure the initial setup and configuration is done on the hypervisor. -You will need to perform few tasks before the actual deployment of OpenShift clusters. +Before proceeding with the actual MCAP deployment, let's first ensure that the initial setup and configuration are done on the hypervisor. +You will need to perform a few tasks before the actual deployment of OpenShift clusters. Few of the tasks are automated with the help of ansible playbooks. image::MCAP_setup_1.png[] @@ -13,7 +13,7 @@ image::MCAP_setup_1.png[] . Download the `rhel-9.4-x86_64-kvm.qcow2` image from the https://access.redhat.com/downloads/content/rhel[Red Hat Customer Portal,window=read-later] to your laptop/desktop. -. Use secured copy (scp) to copy the `rhel-9.4-x86_64-kvm.qcow2` image from your laptop/desktop to hypervisor and then place it in `/root` directory. +. Use secured copy (scp) to copy the `rhel-9.4-x86_64-kvm.qcow2` image from your laptop/desktop to the hypervisor and then place it in `/root` directory. + .Sample output: ---- @@ -57,7 +57,7 @@ ansible-galaxy collection install -r requirements.yml == Setup and Configuration -You can either use the *_setup_hypervisor.yaml_* playbook to install packages, configure swap, LV, DHCP, HTTP and DNS or else run commands manually on the hypervisor. +You can either use the *_setup_hypervisor.yaml_* playbook to install packages, configure swap, LV, DHCP, HTTP, and DNS, or else run commands manually on the hypervisor. Ensure you are in *_ansible_* directory of the repo. diff --git a/modules/chapter1/pages/section3.adoc b/modules/chapter1/pages/section3.adoc index 0b65d9f..606ace0 100644 --- a/modules/chapter1/pages/section3.adoc +++ b/modules/chapter1/pages/section3.adoc @@ -1,7 +1,7 @@ = Storage VM Deployment [NOTE] -You can refer how to configure manually i.e. without automation in at the end appendix chapter. +You can refer to the appendix at the end for manual configuration instructions without automation. In this section, you will be creating one KVM with name `storage`. @@ -9,7 +9,7 @@ image::MCAP_setup_2.png[] == Prerequisites -Ensure you have already executed *_setup_hypervisor.yaml_* playbook from `Initial Setup on Hypervisor` page. +Ensure you have already executed *_setup_hypervisor.yaml_* playbook from `Initial Setup on the Hypervisor` page. 
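
As a quick sanity check, you can confirm that the services configured by the playbook are active (service names assumed from the setup steps described in the appendix):

[source,bash,role=execute]
----
systemctl is-active libvirtd dhcpd named httpd
----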
== Storage VM Deployment diff --git a/modules/chapter1/pages/section4.adoc b/modules/chapter1/pages/section4.adoc index 6667ebc..e5c47c8 100644 --- a/modules/chapter1/pages/section4.adoc +++ b/modules/chapter1/pages/section4.adoc @@ -1,7 +1,7 @@ = Ceph Storage Deployment -In this section, you will be deploying Ceph on `storage` VM. -This `storage` VM will be a single node with three disks Ceph cluster. +In this section, you will be deploying Ceph on a `storage` VM. +This `storage` VM will be a single node with three disks in the Ceph cluster. image::MCAP_setup_3.png[] @@ -11,7 +11,7 @@ image::MCAP_setup_3.png[] == Ceph Storage Deployment Prerequisites -. Take the console of the `storage` VM and login as _root_ user with _redhat_ as password. +. Take the console of the `storage` VM and log in as _root_ user with _redhat_ as the password. + [source,bash,role=execute] ---- @@ -29,7 +29,7 @@ Password: [root@storage ~]# ---- -. Register the `storage` VM with valid subscription. +. Register the `storage` VM with a valid subscription. You will need to provide your customer portal (access.redhat.com) credentials. + [source,bash,role=execute] @@ -37,7 +37,7 @@ You will need to provide your customer portal (access.redhat.com) credentials. subscription-manager register ---- + -Disable the all repos. +Disable all the repos. + [source,bash,role=execute] ---- @@ -51,14 +51,14 @@ Enable only required and Ceph repos. subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms --enable=rhel-9-for-x86_64-appstream-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms ---- + -Check the repo list and ensure all required repos are enabled. +Check the repo list and ensure that all the required repos are enabled. + [source,bash,role=execute] ---- dnf repolist ---- -. Update all packages on the `storage` VM to latest version. +. Update all packages on the `storage` VM to the latest version. + [source,bash,role=execute] ---- @@ -80,7 +80,7 @@ reboot ---- . Permit `root` login on `storage` VM. -This will allow the login to `storage` VM as `root` user using `ssh` connection. +This will allow the login to a `storage` VM as a `root` user using an `ssh` connection. + [source,bash,role=execute] ---- @@ -123,7 +123,7 @@ The `ssh-add` command prompts the user for a private key password and adds it to Once you add a password to `ssh-agent`, you will not be prompted for it when using SSH or scp to connect to hosts with your public key. . Create the `/etc/auth.json` file. -This file will be used in Ceph cluster deployment for getting access to container image catalog. +This file will be used in Ceph cluster deployment for getting access to the container image catalog. + [source,bash,role=execute] ---- @@ -141,9 +141,9 @@ Replace "yourusername" with your username and "yourpassword" with your password == Ceph Configuration and Deployment -. Create the Ceph spec file `initial-config.yaml`, which is used as initial configuration for Ceph cluster deployment. +. Create the Ceph spec file `initial-config.yaml`, which is used as the initial configuration for Ceph cluster deployment. There is only a single Ceph cluster node i.e. `storage` VM, used to deploy Ceph cluster. -You need to provide `storage` VM details in the spec file such as IP address, hostname, host and three disks attached to `storage` VM. +You need to provide `storage` VM details in the spec file such as IP address, hostname, host, and three disks attached to the `storage` VM. + [source,bash,role=execute] ---- @@ -181,8 +181,8 @@ data_devices: EOF ---- -. 
Deploy the Ceph storage cluster with following command. -You will need to pass the spec file as `initial-config.yaml`, mon IP as `storage` VM's IP and registry json file as `/etc/auth.json`. +. Deploy the Ceph storage cluster with the following command. +You will need to pass the spec file as `initial-config.yaml`, mon IP as `storage` VM's IP, and the registry json file as `/etc/auth.json`. To deploy a Ceph cluster running on a single host, use the `--single-host-defaults` flag when bootstrapping. + [source,bash,role=execute] @@ -227,10 +227,10 @@ registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:75bd8969ab3f86f2203a1ceb187876f4 ---- + [NOTE] -You may have to wait for approximately 5 to 10 minutes for all the background processes needed for installation to complete and the cluster to be in `HEALTH_OK` state. +You may have to wait for approximately 5 to 10 minutes for all the background processes needed for installation to complete and for the cluster to be in the `HEALTH_OK` state. You may track the progress with watch `ceph -s` command. -. You may also run `ceph health` command to verify cluster status. +. You may also run the `ceph health` command to verify cluster status. + .Sample output: ---- @@ -238,7 +238,7 @@ You may track the progress with watch `ceph -s` command. HEALTH_OK ---- -. In case of failure, you can use following command to destroy the Ceph storage cluster. +. In case of failure, you can use the following command to destroy the Ceph storage cluster. + [source,bash,role=execute] ---- diff --git a/modules/chapter2/pages/index.adoc b/modules/chapter2/pages/index.adoc index 056d1ce..efe078f 100644 --- a/modules/chapter2/pages/index.adoc +++ b/modules/chapter2/pages/index.adoc @@ -5,5 +5,5 @@ This chapter covers deployment of the _Hub_ cluster. Chapter goals: * Deploy Single Node OpenShift (SNO) cluster as KVM on bare metal using assisted method. -* Install RHACM, LVMS, multicluster engine operators on _Hub_ cluster. +* Install RHACM, LVMS, and multicluster engine operators on the _Hub_ cluster. * Create Provisioning and AgentServiceConfig for enabling CIM (Central Infrastructure Management) service. \ No newline at end of file diff --git a/modules/chapter2/pages/section1.adoc b/modules/chapter2/pages/section1.adoc index e83aac9..f010538 100644 --- a/modules/chapter2/pages/section1.adoc +++ b/modules/chapter2/pages/section1.adoc @@ -1,7 +1,8 @@ = Hub VM Deployment [NOTE] -You can refer to how to configure manually i.e. without automation at the end appendix chapter. +You can refer to the appendix at the end for manual configuration instructions without automation. + In this section, you will be creating one KVM with the name `hub`. @@ -9,7 +10,7 @@ image::MCAP_setup.png[] == Prerequisites -Ensure you have already executed *_setup_hypervisor.yaml_* playbook from `Initial Setup on Hypervisor` page. +Ensure you have already executed *_setup_hypervisor.yaml_* playbook from `Initial Setup on the Hypervisor` page. == Hub VM Deployment diff --git a/modules/chapter2/pages/section2.adoc b/modules/chapter2/pages/section2.adoc index 8b29ab6..438f81d 100644 --- a/modules/chapter2/pages/section2.adoc +++ b/modules/chapter2/pages/section2.adoc @@ -1,7 +1,7 @@ = Assisted Clusters - Hub Cluster :experimental: -In this section, you will be deploying _Hub_ cluster using `hub` VM. +In this section, you will be deploying the _Hub_ cluster using the `hub` VM. This _Hub_ cluster will be Single Node OpenShift (SNO) cluster. 
image::MCAP_setup_1.png[] @@ -18,11 +18,11 @@ Click btn:[Create cluster] to create the Assisted Cluster. + image::console_redhat_screen.png[] -. Select `Bare Metal (x86_64)` as cluster type from `Datacenter` tab. +. Select `Bare Metal (x86_64)` as the cluster type from the `Datacenter` tab. + image::console_redhat_cluster_type.png[] -. Select `Interactive` web based mode. +. Select `Interactive` web-based mode. + image::console_redhat_baremetal_interactive.png[] @@ -58,7 +58,7 @@ image::console_redhat_host_discovery.png[] + Select menu:Provisioning type[Full image file - Download a self-contained ISO]. + -SSH public key: provide the `root` user's public rsa key from hypervisor. +SSH public key: provide the `root` user's public rsa key from the hypervisor. + Click btn:[Generate Discovery ISO]. + @@ -67,7 +67,7 @@ image::console_redhat_host_add.png[] . Download the discovery ISO. + To download the discovery ISO on your laptop or desktop, click btn:[Download Discovery ISO]. -In this case, you need to copy it to hypervisor manually. +In this case, you need to copy it to the hypervisor manually. + or + @@ -76,17 +76,17 @@ Use `Command to download the ISO` option and run the given `wget` command to dow image::console_redhat_discovery_iso_download.png[] + [NOTE] -You will need `virt-manager` to attach this discovery ISO to `hub` VM. -To access the GUI of hypervisor, you will need console access. +You will need `virt-manager` to attach this discovery ISO to the `hub` VM. +To access the GUI of the hypervisor, you will need console access. -. Access the vnc console of hypervisor. +. Access the vnc console of the hypervisor. + [NOTE] `tigervnc-server` is installed on the hypervisor. -Using vnc viewer, you can access the console of hypervisor. +Using vnc viewer, you can access the console of the hypervisor. .. Set password for vncserver. -You will use this password to access vnc console of hypervisor from your laptop/desktop. +You will use this password to access vnc console of the hypervisor from your laptop/desktop. + [source,bash,role=execute] ---- @@ -125,7 +125,7 @@ Log file is /root/.vnc/hypervisor:2.log ---- + [NOTE] -As per above output, you will mention `:2` as connection in VNC viewer. +As per the above output, you will mention `:2` as connection in VNC viewer. .. Download and install VNC viewer from https://www.realvnc.com/en/connect/download/viewer/ as per your laptop or desktop operating system. @@ -161,15 +161,15 @@ This command provides those missing menu bars. + image::vnc_menu_bar.png[] + -Running the command in background i.e. with `&` allows you to run other commands on same terminal later. +Running the command in the background i.e. with `&` allows you to run other commands on the same terminal later. .. Move the menu bar and place and resize the browser and terminal window when convenient. + image::vnc_menu_bar_1.png[] -. Attach the downloaded discovery iso to `hub` VM. +. Attach the downloaded discovery iso to the `hub` VM. -.. Run the `virt-manager &` command on terminal to launch virtual machine manager. +.. Run the `virt-manager &` command on the terminal to launch the virtual machine manager. + image::vnc_virt_manager.png[] + @@ -201,20 +201,20 @@ Boot the VM, and ensure it is booted with RHEL CoreOS (Live). + image::hub_vm_5.png[] -. Go back to https://console.redhat.com/openshift/cluster-list[console.redhat.com] to resume assisted installation of _Hub_ cluster. -Notice the host is getting discovered and status is `Ready`. +. 
Go back to https://console.redhat.com/openshift/cluster-list[console.redhat.com] to resume the assisted installation of the _Hub_ cluster. +Notice that the host is getting discovered and it's status is `Ready`. + Click btn:[Next]. + image::console_redhat_host_discovery_ready.png[] + -It may take few minutes to update status as `Ready`. +It may take a few minutes to update the status as `Ready`. -. In storage section, once status is `Ready`, click btn:[Next]. +. In the storage section, once the status is `Ready`, click btn:[Next]. + image::console_redhat_storage.png[] -. In networking section, once status is `Ready`, click btn:[Next]. +. In the networking section, once the status is `Ready`, click btn:[Next]. + image::console_redhat_networking.png[] @@ -228,11 +228,11 @@ image::console_redhat_review_create.png[] + image::console_redhat_cluster_installation_start.png[] -. After 7-10 minutes, it waits on pending user action. +. After 7-10 minutes, it waits for pending user action. + image::console_redhat_pending_user_actions.png[] + -This means you need to disconnect the discovery ISO from the `hub` VM and boot the `hub` VM from disk. +This means you need to disconnect the discovery ISO from the `hub` VM and boot the `hub` VM from the disk. .. You can notice the user config is applied from the `hub` VM's console. + @@ -250,10 +250,10 @@ image::hub_vm_8.png[] + image::console_redhat_install_proceed.png[] -. You will notice at `80%` the installation goes into finalizing state. +. You will notice at `80%` the installation goes into a finalizing state. + image::console_redhat_cluster_install_finalizing.png[] -. Installation completes in approximately 15 minutes. +. Installation is completed in approximately 15 minutes. + image::console_redhat_install_complete.png[] \ No newline at end of file diff --git a/modules/chapter2/pages/section3.adoc b/modules/chapter2/pages/section3.adoc index 0da0eac..d33bf75 100644 --- a/modules/chapter2/pages/section3.adoc +++ b/modules/chapter2/pages/section3.adoc @@ -1,15 +1,15 @@ = Access the Hub Cluster :experimental: -In this section, you will be accessing _Hub_ cluster. +In this section, you will be accessing the _Hub_ cluster. image::MCAP_setup_1.png[] == Prerequisites -. Ensure _Hub_ cluster is deployed successfully. +. Ensure that the _Hub_ cluster is deployed successfully. -. Ensure `/root/hub/` directory and file structure created. +. Ensure `/root/hub/` directory and file structure are created. + .Sample output: ---- @@ -22,13 +22,13 @@ dr-xr-x---. 13 root root 4096 Aug 22 15:18 .. -rw-r--r--. 1 root root 12127 Aug 22 15:20 kubeconfig ---- -. Get the `kubeconfig` file, the password for `kubeadmin` user, and the web console URL from console.redhat.com. +. Get the `kubeconfig` file, the password for the `kubeadmin` user, and the web console URL from console.redhat.com. + image::console_redhat_install_download.png[] -.. Download the `kubeconfig` file to your laptop or desktop and then copy to `/root/hub` directory on hypervisor. +.. Download the `kubeconfig` file to your laptop or desktop and then copy to `/root/hub` directory on the hypervisor. -.. Copy the password for `kubeadmin` user and paste it in `/root/hub/kubeadmin-password` file. +.. Copy the password for the `kubeadmin` user and paste it in `/root/hub/kubeadmin-password` file. .. Copy the web console url and paste it in `/root/hub/console-url` file. @@ -80,7 +80,7 @@ cp /root/hub/kubeconfig /root/.kube/config kubepass=$(cat /root/hub/kubeadmin-password) ---- -. 
Login to _Hub_ cluster with `oc login` command. +. Login to the _Hub_ cluster with `oc login` command. + [source,bash,role=execute] ---- @@ -109,15 +109,15 @@ NAME STATUS ROLES AGE VERSION hub.lab.example.com Ready control-plane,master,worker 10h v1.29.7+6abe8a1 ---- -== Access the _Hub_ Cluster from Web Console +== Access the _Hub_ Cluster from the Web Console -. Access the Firefox browser on console of hypervisor using VNC viewer. +. Access the Firefox browser on the console of the hypervisor using VNC viewer. + image::vnc_hub_cluster_access.png[] . Get the web console url from `/root/hub/console-url` file. + -Select the url and paste in firefox browser tab. +Select the url and paste it in the firefox browser tab. + Click btn:[Advanced...] to proceed. + @@ -130,16 +130,16 @@ image::vnc_hub_cluster_access_2.png[] [NOTE] You may need to accept the risk twice. -. Login as `kubadmin` user. +. Login as a `kubadmin` user. + -Get the `kubadmin` user's passwrod from `/root/hub/kubeadmin-password` file. +Get the `kubadmin` user's password from `/root/hub/kubeadmin-password` file. + image::vnc_hub_cluster_access_3.png[] -. Once you logged in as `kubadmin` user, this is what the first screen looks like. +. Once you log in as a `kubadmin` user, this is what the first screen looks like. + image::vnc_hub_cluster_access_4.png[] -. Verify your `local-cluster` i.e _Hub_ cluster is in `Ready` state. +. Verify your `local-cluster` i.e. _Hub_ cluster is in `Ready` state. + image::vnc_hub_cluster_access_5.png[] \ No newline at end of file diff --git a/modules/chapter2/pages/section4.adoc b/modules/chapter2/pages/section4.adoc index 756b589..35bc2ab 100644 --- a/modules/chapter2/pages/section4.adoc +++ b/modules/chapter2/pages/section4.adoc @@ -104,25 +104,25 @@ metadata: . Access the `local-cluster`. + -Click to menu:All Clusters[local-cluster] +Click on menu:All Clusters[local-cluster] + image::local_cluster_access.png[] . Access the operator hub. + -From left navigation pane, click menu:Operators[OperatorHub]. +From the left navigation pane, click menu:Operators[OperatorHub]. + image::operator_hub.png[] -. In search window, search _rhacm_ and select the `Advanced Cluster Management for Kubernetes`. +. In the search window, search _rhacm_ and select the `Advanced Cluster Management for Kubernetes`. + image::rhacm_search.png[] -. Click btn:[Install] to open install options. +. Click btn:[Install] to open the install options. + image::rhacm_install.png[] -. Keep all options as is, with no change in selected options and click btn:[Install] to install the operator. +. Keep all options as is, with no change in selected options and, click btn:[Install] to install the operator. + image::rhacm_install_1.png[] @@ -130,28 +130,28 @@ image::rhacm_install_1.png[] + image::rhacm_install_2.png[] -. Keep all options as is, with no change in selected options and click btn:[Create] to create the resource. +. Keep all options as is, with no change in selected options and, click btn:[Create] to create the resource. + image::rhacm_install_3.png[] -. Notice it goes into `Installing` phase. +. Notice it goes into the `Installing` phase. + image::rhacm_install_4.png[] -. After 2-3 minutes, notice the `Refresh web console` message on window. +. After 2-3 minutes, notice the `Refresh web console` message on the window. + image::rhacm_install_5.png[] -. After 3-4 minutes, refresh the page and _MultiClusterHub_ is in `Running` phase. +. After 3-4 minutes, refresh the page and _MultiClusterHub_ is in the `Running` phase. 
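+
You can also confirm the same state from the CLI, assuming the default `open-cluster-management` namespace was kept during the installation:
+
[source,bash,role=execute]
----
oc get multiclusterhub -n open-cluster-management
----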
+ image::rhacm_install_6.png[] -== Enable the Central Infrastructure Management service +== Enable the Central Infrastructure Management Service https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/clusters/managing-your-clusters#enable-cim[The Central Infrastructure Management (CIM),window=read-later] service is provided with the `mce-short` and deploys OpenShift Container Platform clusters. CIM is deployed when you enable the _MultiClusterHub Operator_ on the hub cluster, but must be enabled. -This will help to generate discovery ISO which will be used for deploying _Infrastructure_ clusters from _Hub_ cluster using RHACM. +This will help to generate discovery ISO which will be used for deploying _Infrastructure_ clusters from the _Hub_ cluster using RHACM. Ensure `AgentServiceConfig` exists and running. diff --git a/modules/chapter3/pages/index.adoc b/modules/chapter3/pages/index.adoc index ca690c0..714b163 100644 --- a/modules/chapter3/pages/index.adoc +++ b/modules/chapter3/pages/index.adoc @@ -1,9 +1,9 @@ = Infrastructure Cluster Deployment -This chapter covers deployment of the _Infrastructure_ clusters. +This chapter covers the deployment of the _Infrastructure_ clusters. Chapter goals: -* Deploy Single Node OpenShift (SNO) clusters as KVMs on bare metal using RHACM from _Hub_ cluster. -* Install OpenShift Virtualization, OpenShift DataFoundation and NMState operators on _Infrastructure_ clusters. +* Deploy Single Node OpenShift (SNO) clusters as KVMs on bare metal using RHACM from the _Hub_ cluster. +* Install OpenShift Virtualization, OpenShift DataFoundation, and NMState operators on _Infrastructure_ clusters. * Configure necessary network settings on _Infrastructure_ clusters. diff --git a/modules/chapter3/pages/section1.adoc b/modules/chapter3/pages/section1.adoc index 4284f71..1834b26 100644 --- a/modules/chapter3/pages/section1.adoc +++ b/modules/chapter3/pages/section1.adoc @@ -1,15 +1,15 @@ = Infrastructure VMs Deployment [NOTE] -You can refer to how to configure manually i.e. without automation at the end appendix chapter. +You can refer to the appendix at the end for manual configuration instructions without automation. -In this section, you will be creating three KVMs with name `sno1`, `sno2` and `sno3`. +In this section, you will be creating three KVMs with the name `sno1`, `sno2`, and `sno3`. image::MCAP_setup.png[] == Prerequisites -Ensure you have already executed *_setup_hypervisor.yaml_* playbook from `Initial Setup on Hypervisor` page. +Ensure you have already executed *_setup_hypervisor.yaml_* playbook from `Initial Setup on the Hypervisor` page. == SNO VMs Deployment diff --git a/modules/chapter3/pages/section2.adoc b/modules/chapter3/pages/section2.adoc index 0566b29..cf743f8 100644 --- a/modules/chapter3/pages/section2.adoc +++ b/modules/chapter3/pages/section2.adoc @@ -1,20 +1,20 @@ = Assisted Clusters - Infrastructure Clusters :experimental: -In this section, you will be deploying _Infrastructure_ clusters using RHACM from _Hub_ cluster. +In this section, you will be deploying _Infrastructure_ clusters using RHACM from the _Hub_ cluster. These _Infrastructure_ clusters will be Single Node OpenShift (SNO) clusters. image::MCAP_setup_1.png[] == Prerequisites -. `sno1`, `sno2` and `sno3` VMs are up and running. +. `sno1`, `sno2`, and `sno3` VMs are up and running. -. 
Download or copy your https://console.redhat.com/openshift/install/pull-secret[pull-secret,window=read-later] and put it in `pull_secret.txt` file on hypervisor. +. Download or copy your https://console.redhat.com/openshift/install/pull-secret[pull-secret,window=read-later] and put it in `pull_secret.txt` file on the hypervisor. -. Set the OpenShift `clusterimageset` to 4.16.8 on stable channel. +. Set the OpenShift `clusterimageset` to 4.16.8 on a stable channel. + -This will allow you to deploy the _Infrastructure_ and _Tenant_ clusters with same OpenShift version. +This will allow you to deploy the _Infrastructure_ and _Tenant_ clusters with the same OpenShift version. .. Get the images with _visible_ label as _true_ from `clusterimageset`. + @@ -73,13 +73,13 @@ clusterimageset.hive.openshift.io/img4.16.8-multi-appsub patched == Deploy _Infrastructure_ clusters as SNO clusters -. Login to web console of _Hub_ cluster. +. Login to the web console of the _Hub_ cluster. + -Switch to the `All Clusters` from `local-cluster`. +Switch to the `All Clusters` from the `local-cluster`. + image::hub_console_switch.png[] -. Create cluster using btn:[Create cluster]. +. Create a cluster using btn:[Create cluster]. + image::hub_console_create_cluster.png[] @@ -113,7 +113,7 @@ image::hub_console_sno1_details.png[] + image::hub_console_sno1_pull_secret.png[] -.. Provide the pull secret in `Pull secret` field. +.. Provide the pull secret in the `Pull secret` field. + Click btn:[Next] + @@ -141,25 +141,25 @@ image::hub_console_sno1_add_host.png[] + image::hub_console_sno1_add_host_discovery_iso.png[] -.. Here you need to provide the public ssh key of `root` user. +.. Here you need to provide the public ssh key of the `root` user. + image::hub_console_sno1_public_key.png[] -.. Get the public ssh key of `root` user. +.. Get the public ssh key of the `root` user. + image::hub_console_sno1_public_key_1.png[] -.. Provide the public ssh key of `root` user in `SSH public key` field. +.. Provide the public ssh key of the `root` user in the `SSH public key` field. + Click btn:[Generate Discovery ISO] to generate discovery ISO. + image::hub_console_sno1_generate_discovery_iso.png[] -.. Click btn:[Download Discovery ISO] to download discovery ISO on hypervisor. +.. Click btn:[Download Discovery ISO] to download discovery ISO on the hypervisor. + image::hub_console_sno1_download_discovery_iso.png[] -.. This will open link in new tab. +.. This will open the link in the new tab. + Click btn:[Advanced...] to proceed. + @@ -200,7 +200,7 @@ Select the discovery ISO and click btn:[Finish]. + image::sno1_attach_iso_finish.png[] + -Update the `Boot device order` to boot system with discovery ISO. +Update the `Boot device order` to boot the system with discovery ISO. + Click btn:[Apply]. + @@ -210,7 +210,7 @@ Boot the `sno1` VM and ensure it is booted with RHEL CoreOS (Live). + image::sno1_rhcos_boot.png[] -.. In hub console, notice `sno1` VM as host is discovered and select `Approve host`. +.. In the hub console, notice `sno1` VM as host is discovered and select `Approve host`. + image::hub_console_sno1_approve_host.png[] @@ -228,7 +228,7 @@ image::hub_console_sno1_host_ready.png[] + image::hub_console_sno1_networking_ready.png[] + -After few minutes, the message goes away and notice the host is in `Ready` status. +After a few minutes, the message goes away and notice that the host is in the `Ready` status. 
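+
The same host state can be spot-checked from the _Hub_ cluster CLI, where the discovered hosts appear as `Agent` resources created by the Central Infrastructure Management service:
+
[source,bash,role=execute]
----
oc get agent -A
----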
+ image::hub_console_sno1_networking_ready_1.png[] @@ -240,17 +240,17 @@ image::hub_console_sno1_review_create.png[] + image::hub_console_sno1_install_progress.png[] -. After 7-10 minutes, it waits on pending user action. +. After 7-10 minutes, it waits for pending user action. + image::hub_console_sno1_pending_user_actions.png[] + -This means you need to disconnect the discovery ISO from the `sno1` VM and boot the `sno1` VM from disk. +This means you need to disconnect the discovery ISO from the `sno1` VM and boot the `sno1` VM from the disk. .. Shutdown the `sno1` VM. + image::sno1_shutdown_1.png[] -.. Update the boot order to boot `sno1` VM from disk. +.. Update the boot order to boot `sno1` VM from the disk. + image::sno1_boot_order_1.png[] @@ -262,7 +262,7 @@ image::hub_console_sno1_install_proceed.png[] + image::hub_console_sno1_install_complete.png[] -. If you notice any failure in importing the cluster to _Hub_ cluster then wait for 35 to 40 minutes. +. If you notice any failure in importing the cluster to the _Hub_ cluster then wait for 35 to 40 minutes. + image::hub_console_sno1_import_fail.png[] @@ -282,13 +282,13 @@ Click btn:[Import] to import cluster. + image::hub_console_sno1_import_1.png[] -. Notice the `sno1` is added to cluster list in `default` cluster set. +. Notice that `sno1` is added to the cluster list in the `default` cluster set. + image::hub_console_sno1_ready.png[] + -This concludes the successful deployment of OpenShift cluster and added to hub cluster using RHACM. +This concludes the successful deployment of the OpenShift cluster and added to the hub cluster using RHACM. -== Install remaining _Infrastructure_ clusters as SNO clusters +== Install remaining _Infrastructure_ clusters as SNO Clusters -. You can deploy remaining `sno2` and `sno3` clusters by following steps from the previous section for `sno1` cluster deployment. +. You can deploy the remaining `sno2` and `sno3` clusters by following the steps from the previous section for `sno1` cluster deployment. . Each cluster deployment will take 35 to 40 minutes to complete. \ No newline at end of file diff --git a/modules/chapter3/pages/section3.adoc b/modules/chapter3/pages/section3.adoc index 5c72585..0976c35 100644 --- a/modules/chapter3/pages/section3.adoc +++ b/modules/chapter3/pages/section3.adoc @@ -7,9 +7,9 @@ image::MCAP_setup_1.png[] == Prerequisites -. Ensure that _Infrastructure_ clusters (`sno1`, `sno2` and `sno3`) are deployed successfully. +. Ensure that _Infrastructure_ clusters (`sno1`, `sno2`, and `sno3`) are deployed successfully. -. Ensure `/root/sno1/` directory and file structure created. +. Ensure `/root/sno1/` directory and file structure are created. + .Sample output: ---- @@ -21,18 +21,18 @@ dr-xr-x---. 13 root root 4096 Aug 22 15:18 .. -rw-r--r--. 1 root root 12127 Aug 22 15:20 kubeconfig ---- -. Get the `kubeconfig` file and password for `kubeadmin` user from the _Hub_ cluster console. +. Get the `kubeconfig` file and password for the `kubeadmin` user from the _Hub_ cluster console. + image::hub_console_sno1_install_download.png[] -.. Download the `kubeconfig` file to hypervisor, and then copy to `/root/sno1` directory on hypervisor. +.. Download the `kubeconfig` file to the hypervisor, and then copy to `/root/sno1` directory on the hypervisor. + .Sample output: ---- [root@hypervisor ~]# mv /root/Downloads/sno1-kubeconfig.yaml /root/sno1/kubeconfig ---- -.. Copy the password for `kubeadmin` user, and paste it in new tab of a Firefox browser. +.. 
Copy the password for the `kubeadmin` user, and paste it in a new tab of a Firefox browser. + image::hub_console_sno1_copy_password.png[] + @@ -52,14 +52,14 @@ Follow the same steps for `sno2` and `sno3` clusters. cp /root/sno1/kubeconfig /root/.kube/config ---- -. Set the `kubepass` variable as `kubeadmin` user's password. +. Set the `kubepass` variable as the `kubeadmin` user's password. + [source,bash,role=execute] ---- kubepass=$(cat /root/sno1/kubeadmin-password) ---- -. Login to _sno1_ cluster with `oc login` command. +. Login to the _sno1_ cluster with the `oc login` command. + [source,bash,role=execute] ---- @@ -93,11 +93,11 @@ Follow the same steps for `sno2` and `sno3` clusters. == Access the _sno1_ Cluster from Web Console -. Get the web console url from _Hub_ cluster console. +. Get the web console url from the _Hub_ cluster console. + image::hub_console_sno1_install_download.png[] + -. Click on the link from `Web Console URL`. +. Click on the link from the `Web Console URL`. + Click btn:[Advanced...] to proceed. + @@ -110,21 +110,21 @@ image::vnc_sno1_cluster_access_2.png[] [NOTE] You may need to accept the risk twice. -. Login as `kubadmin` user. +. Login as a `kubadmin` user. + Get the `kubadmin` user's password from from _Hub_ cluster console. + image::hub_console_sno1_install_download.png[] + -Copy the `kubadmin` user's password from from _Hub_ cluster console and paste it in `Password` field. +Copy the `kubadmin` user's password from from _Hub_ cluster console and paste it in the `Password` field. + image::vnc_sno1_cluster_access_3.png[] -. Once you have logged in as `kubadmin` user, this is how the first screen should look: +. Once you have logged in as a `kubadmin` user, this is how the first screen should look: + image::vnc_sno1_cluster_access_4.png[] -. Verify _sno1_ cluster is in `Ready` state in _Hub_ cluster console. +. Verify _sno1_ cluster is in a `Ready` state in the _Hub_ cluster console. + image::vnc_sno1_cluster_access_5.png[] diff --git a/modules/chapter3/pages/section4.adoc b/modules/chapter3/pages/section4.adoc index 26612fa..51a4ff2 100644 --- a/modules/chapter3/pages/section4.adoc +++ b/modules/chapter3/pages/section4.adoc @@ -1,13 +1,13 @@ = Install and Configure Operators :experimental: -In this section, you will be installing additional operators needed for deploying _Tenant_ cluster. +In this section, you will be installing additional operators needed for deploying the _Tenant_ cluster. image::MCAP_setup_2.png[] == Prerequisites -. Verify that _sno1_ cluster is deployed successfully. +. Verify that the _sno1_ cluster is deployed successfully. . Access the _sno1_ cluster via CLI and web console. @@ -28,7 +28,7 @@ version 4.16.8 True False 23h Cluster version is 4.16.8 == Install OpenShift Virtualization Operator -. Access the operator hub from the web console of _sno1_ cluster. +. Access the operator hub from the web console of the _sno1_ cluster. + From the left navigation pane, click menu:Operators[OperatorHub]. + @@ -42,7 +42,7 @@ image::sno1_console_operator_hub_1.png[] + image::sno1_console_ocpvirt_install.png[] -. Keep all options as is, with no change in selected options and then click btn:[Install] to install the operator. +. Keep all options as is, with no change in selected options, and then click btn:[Install] to install the operator. + image::sno1_console_ocpvirt_install_1.png[] @@ -58,7 +58,7 @@ image::sno1_console_ocpvirt_install_3.png[] + image::sno1_console_ocpvirt_install_4.png[] -. 
After 3-4 minutes, notice the `Refresh web console` message on window. +. After 3-4 minutes, notice the `Refresh web console` message on the window. + image::sno1_console_ocpvirt_install_5.png[] @@ -68,7 +68,7 @@ image::sno1_console_ocpvirt_install_6.png[] == Install OpenShift Data Foundation Operator -. Access the operator hub from the web console of _sno1_ cluster. +. Access the operator hub from the web console of the _sno1_ cluster. + From the left navigation pane, click menu:Operators[OperatorHub]. + @@ -78,7 +78,7 @@ image::sno1_console_operator_hub.png[] + image::sno1_console_odf_install.png[] -. Click btn:[Install] to open install options. +. Click btn:[Install] to open the install options. + image::sno1_console_odf_install_1.png[] @@ -86,7 +86,7 @@ image::sno1_console_odf_install_1.png[] + image::sno1_console_odf_install_2.png[] -. After 3-4 minutes, notice the `Refresh web console` message on window. +. After 3-4 minutes, notice the `Refresh web console` message on the window. + First, refresh the web console and then click btn:[Create StorageSystem] to create the resource. + @@ -123,7 +123,7 @@ You will need to run this script on the `storage` VM. scp /root/Downloads/ceph-external-cluster-details-exporter.py root@storage:. ---- + -Password for `root` user is `redhat`. +Password for the `root` user is `redhat`. + .Sample output: ---- @@ -157,9 +157,9 @@ root@storage's password: [root@storage ~]# python ceph-external-cluster-details-exporter.py --rbd-data-pool-name default.rgw.control > output.json ---- + -Copy the `output.json` file from `storage` VM to hypervisor. +Copy the `output.json` file from the `storage` VM to the hypervisor. + -Run following command on hypervisor. +Run the following command on the hypervisor. + [source,bash,role=execute] ---- @@ -206,7 +206,7 @@ image::sno1_console_odf_install_13.png[] == Install NMState Operator -. Access the operator hub from the web console of _sno1_ cluster. +. Access the operator hub from the web console of the _sno1_ cluster. + From the left navigation pane, click menu:Operators[OperatorHub]. + @@ -216,7 +216,7 @@ image::sno1_console_operator_hub.png[] + image::sno1_console_operator_hub_nmstate.png[] -. Click btn:[Install] to open install options. +. Click btn:[Install] to open the install options. + image::sno1_console_nmstate_install.png[] @@ -228,7 +228,7 @@ image::sno1_console_nmstate_install_1.png[] + image::sno1_console_nmstate_install_2.png[] -. In `NMState` tab and click btn:[Create NMState] to create the resource. +. In `NMState` tab, click btn:[Create NMState] to create the resource. + image::sno1_console_nmstate_install_3.png[] @@ -240,11 +240,11 @@ image::sno1_console_nmstate_install_4.png[] + image::sno1_console_nmstate_install_5.png[] -. After 3-4 minutes, notice the `Refresh web console` message on window. +. After 3-4 minutes, notice the `Refresh web console` message on the window. + image::sno1_console_nmstate_install_6.png[] == Install and Configure Operators on `sno2` and `sno3` Clusters -. Follow the same prerequisites from previous section for `sno2` and `sno3` clusters. -. Follow same steps from previous section for installing and configuring operators on the `sno2` and `sno3` clusters. \ No newline at end of file +. Follow the same prerequisites from the previous section for `sno2` and `sno3` clusters. +. Follow the same steps from the previous section for installing and configuring operators on the `sno2` and `sno3` clusters. 
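As a quick cross-check once the operators are installed on all three clusters, you can list the ClusterServiceVersions from the hypervisor. The following is only a rough sketch: it assumes the `/root/sno1/kubeconfig` layout is repeated for `sno2` and `sno3`, and it uses the usual default namespaces for these operators (`openshift-cnv`, `openshift-storage`, and `openshift-nmstate`), which are not spelled out in this course.

[source,bash,role=execute]
----
# Sketch: verify the operator installations on each SNO cluster.
# Assumes kubeconfig files at /root/sno1/kubeconfig, /root/sno2/kubeconfig,
# and /root/sno3/kubeconfig, mirroring the sno1 layout used earlier.
for cluster in sno1 sno2 sno3; do
  echo "=== ${cluster} ==="
  for ns in openshift-cnv openshift-storage openshift-nmstate; do
    oc --kubeconfig "/root/${cluster}/kubeconfig" get csv -n "${ns}"
  done
done
----

Each ClusterServiceVersion should eventually report the `Succeeded` phase.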
\ No newline at end of file
diff --git a/modules/chapter3/pages/section5.adoc b/modules/chapter3/pages/section5.adoc
index ebc40a4..c4ae68d 100644
--- a/modules/chapter3/pages/section5.adoc
+++ b/modules/chapter3/pages/section5.adoc
@@ -1,22 +1,22 @@
= Network Settings
:experimental:
-In this section, you will be configuring additional network settings needed for deploying _Tenant_ cluster.
+In this section, you will be configuring additional network settings needed for deploying the _Tenant_ cluster.
After you install the Kubernetes NMState Operator, you can configure a Linux bridge network for live migration or external access to virtual machines (VMs).
[IMPORTANT]
====
-In this POC setup, there is only one bridge, `virbr0` and each SNO VM has single NIC.
-This NIC is used to deploy _Infrastructure_ SNO cluster on SNO VMs.
-As NIC is already used for deploying _Infrastructure_ SNO cluster, NMState configuration will not work using the same NIC.
+In this POC setup, there is only one bridge, `virbr0`, and each SNO VM has a single NIC.
+This NIC is used to deploy the _Infrastructure_ SNO cluster on SNO VMs.
+As the NIC is already used for deploying the _Infrastructure_ SNO cluster, the NMState configuration will not work using the same NIC.
For example, two NICs are attached to SNO VMs (connected to the same bridge `virbr0`) at the time of creating SNO VMs.
-In this case, deployment of _Infrastructure_ cluster from _Hub_ cluster using RHACM will not proceed.
+In this case, deployment of the _Infrastructure_ cluster from the _Hub_ cluster using RHACM will not proceed.
The reason behind this is that the two NICs are on the same network.
Ideally, there should be separate networks for _Infrastructure_ and _Tenant_ clusters.
-It is possible that in future releases or updates of this course, the separate network configurations may be included.
+It is possible that in future releases or updates of this course, separate network configurations may be included.
There is a workaround to address this in this POC setup: add the second NIC to the SNO VMs after the deployment of the SNO cluster.
====
@@ -25,11 +25,11 @@ image::MCAP_setup_2.png[]
== Prerequisites
-. Verify that _sno1_ cluster is deployed successfully.
+. Verify that the _sno1_ cluster is deployed successfully.
. Access the _sno1_ cluster via CLI and web console.
-. Ensure that `sno1.lab.example.com` node is in `Ready` status and all cluster operators are available.
+. Ensure that the `sno1.lab.example.com` node is in the `Ready` status and all cluster operators are available.
+
.Sample output:
----
@@ -44,7 +44,7 @@ NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.16.8 True False 23h Cluster version is 4.16.8
----
-. Ensure that NMState operator is installed on _sno1_ cluster.
+. Ensure that the NMState operator is installed on the _sno1_ cluster.
== Add Additional Network Interface Card (NIC) to `sno1`, `sno2` and `sno3` VMs
@@ -64,7 +64,7 @@ Click btn:[Finish] to add NIC.
+
image::sno1_add_nic_1.png[]
-. Notice, NIC is added on the fly.
+. Notice that the NIC is added on the fly.
+
image::sno1_add_nic_2.png[]
diff --git a/modules/chapter4/pages/index.adoc b/modules/chapter4/pages/index.adoc
index ecf66e7..0d1ce3a 100644
--- a/modules/chapter4/pages/index.adoc
+++ b/modules/chapter4/pages/index.adoc
@@ -1,10 +1,10 @@
= Tenant Cluster Deployment
-This chapter covers deployment of _Tenant_ cluster.
+This chapter covers the deployment of the _Tenant_ cluster.
Chapter goals:
* Deploy _Three-Node OpenShift Compact Tenant_ cluster as OpenShift Virtualization VMs using RHACM from _Hub_ cluster.
-* Install OpenShift Data Foundation operator on _Tenant_ cluster.
+* Install the OpenShift Data Foundation operator on the _Tenant_ cluster.
* Deploy and configure applications on _Tenant_ Clusters.
* Test the high availability and resilience of deployed workloads.
\ No newline at end of file
diff --git a/modules/chapter4/pages/section1.adoc b/modules/chapter4/pages/section1.adoc
index d28905f..4cb7b7d 100644
--- a/modules/chapter4/pages/section1.adoc
+++ b/modules/chapter4/pages/section1.adoc
@@ -24,7 +24,7 @@ sno3 true https://api.sno3.lab.example.com:6443 True
. Download the `virtctl` command line tool from any SNO’s console.
-.. Visit the web console home page of `sno1` cluster.
+.. Visit the web console home page of the `sno1` cluster.
+
image::sno1_console_home.png[]
@@ -32,7 +32,7 @@ image::sno1_console_home.png[]
+
image::sno1_console_cli_tools.png[]
-.. In this page, scroll down to the `virtctl - KubeVirt command line interface` section.
+.. On this page, scroll down to the `virtctl - KubeVirt command line interface` section.
+
Select the `Download virtctl for Linux for x86_64` to open a download link in a new tab.
+
@@ -42,7 +42,7 @@ Click btn:[Advanced...] to proceed.
+
image::sno1_console_virtctl_1.png[]
+
-Click btn:[Accept the Risk and Continue] to proceed and download the `virtctl` command line tool on hypervisor.
+Click btn:[Accept the Risk and Continue] to proceed and download the `virtctl` command line tool to the hypervisor.
+
image::sno1_console_virtctl_2.png[]
@@ -62,7 +62,7 @@ mv virtctl /usr/local/bin/
== Tenant VMs Deployment
-. Access the web console of _sno1_ cluster.
+. Access the web console of the _sno1_ cluster.
+
image::sno1_console_home.png[]
+
From the left navigation pane, click menu:Virtualization[VirtualMachines].
+
image::sno1_console_create_vm.png[]
-. Create a virtual machine from template.
+. Create a virtual machine from the template.
+
Click menu:Create VirtualMachine[From template]
+
@@ -86,7 +86,7 @@ image::sno1_console_create_vm_2.png[]
+
image::sno1_console_create_vm_3.png[]
-. Scroll down in the VM create window and update the disk size from 30GB to 120GB.
+. Scroll down in the VM creation window and update the disk size from 30GB to 120GB.
+
image::sno1_console_create_vm_4.png[]
@@ -122,7 +122,7 @@ image::sno1_console_create_vm_11.png[]
+
Network: Bridge network (in the previous chapter you created the network attachment definition)
+
-Get the mac address for virtual machine from `/etc/dhcp/dhcpd.conf` file.
+Get the MAC address for the virtual machine from the `/etc/dhcp/dhcpd.conf` file.
+
image::sno1_console_create_vm_12.png[]
@@ -130,13 +130,13 @@ image::sno1_console_create_vm_12.png[]
+
image::sno1_console_create_vm_13.png[]
-.. Ensure that all the network interface related details are updated.
+.. Ensure that all the network interface-related details are updated.
+
Click btn:[Create VirtualMachine] to create the VM and start the VM.
+
image::sno1_console_create_vm_14.png[]
-. In VM's overview tab, you can see that the virtual machine is in running state.
+. In the VM's overview tab, you can see that the virtual machine is in the running state.
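Before the console screenshot that follows, you can also confirm the state from the hypervisor CLI. This is a hedged sketch only; the project and VM names are placeholders for whatever you selected in the wizard.

[source,bash,role=execute]
----
# Sketch: check the tenant VM from the CLI. Replace <project> and <vm-name>
# with the project and name chosen in the Create VirtualMachine wizard.
export KUBECONFIG=/root/sno1/kubeconfig
oc get vm,vmi -n <project>               # the VM and its instance should report Running
virtctl console <vm-name> -n <project>   # optional serial console; detach with Ctrl+]
----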
+ image::sno1_console_create_vm_15.png[] diff --git a/modules/chapter4/pages/section2.adoc b/modules/chapter4/pages/section2.adoc index 91c73fd..e6a109a 100644 --- a/modules/chapter4/pages/section2.adoc +++ b/modules/chapter4/pages/section2.adoc @@ -1,20 +1,20 @@ = Assisted Clusters - Tenant Cluster :experimental: -In this section, you will be deploying _Tenant_ cluster using RHACM from _Hub_ cluster. +In this section, you will be deploying the _Tenant_ cluster using RHACM from the _Hub_ cluster. The _Tenant_ cluster will be the _Three-Node OpenShift Compact_ cluster. image::MCAP_setup_1.png[] == Prerequisites -The `tcn1.lab.example.com`, `tcn2.lab.example.com` and `tcn3.lab.example.com` VMs (created using OpenShift Virtualization) are up and running. +The `tcn1.lab.example.com`, `tcn2.lab.example.com`, and `tcn3.lab.example.com` VMs (created using OpenShift Virtualization) are up and running. -== Deploy _Tenant_ cluster as _Three-Node OpenShift Compact_ cluster +== Deploy _Tenant_ cluster as _Three-Node OpenShift Compact_ Cluster -. Login to the web console of _Hub_ cluster. +. Login to the web console of the _Hub_ cluster. + -Ensure that you have switched to `All Clusters` from `local-cluster`. +Ensure that you have switched to `All Clusters` from the `local-cluster`. + image::hub_console_switch.png[] @@ -48,7 +48,7 @@ Select `4.16.8` version of OpenShift from the menu. + image::hub_console_tenant_details_1.png[] -.. Get the pull secret from `pull_secret.txt` and provide the pull secret in `Pull secret` field. +.. Get the pull secret from `pull_secret.txt` and provide the pull secret in the `Pull secret` field. + Click btn:[Next] + @@ -66,33 +66,33 @@ Click btn:[Save] + image::hub_console_tenant_review_save.png[] -. Add `tcn1.lab.example.com`, `tcn2.lab.example.com` and `tcn3.lab.example.com` VMs as host. +. Add `tcn1.lab.example.com`, `tcn2.lab.example.com`, and `tcn3.lab.example.com` VMs as host. .. Select menu:Add hosts[With Discovery ISO]. + image::hub_console_tenant_add_host_discovery_iso.png[] -.. Here you will need to provide the public SSH key of `root` user. +.. Here you will need to provide the public SSH key of the `root` user. + image::hub_console_tenant_public_key.png[] -.. Get the public SSH key of `root` user. +.. Get the public SSH key of the `root` user. + image::hub_console_tenant_public_key_1.png[] -.. Provide the public SSH key of `root` user in `SSH public key` field. +.. Provide the public SSH key of the `root` user in the `SSH public key` field. + Click btn:[Generate Discovery ISO] to generate discovery ISO. + image::hub_console_tenant_generate_discovery_iso.png[] -.. Click btn:[Download Discovery ISO] to download discovery ISO on hypervisor. +.. Click btn:[Download Discovery ISO] to download discovery ISO on the hypervisor. + image::hub_console_tenant_download_discovery_iso.png[] .. Upload ISO from `/root/Download` directory to _Infrastructure_ clusters. + -Login to `sno1` cluster. +Login to the `sno1` cluster. 
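In general form, the login and upload look roughly like the sketch below; the DataVolume name and size are illustrative placeholders, and the ISO file name will match whatever RHACM generated in your `/root/Downloads` directory.

[source,bash,role=execute]
----
# Sketch: upload the generated discovery ISO to the sno1 cluster with virtctl.
# The DataVolume name and size are placeholders; adjust the ISO file name to
# match the file downloaded from the Hub cluster console.
oc login -u kubeadmin -p "$(cat /root/sno1/kubeadmin-password)" https://api.sno1.lab.example.com:6443
virtctl image-upload dv tenant-discovery-iso \
  --size=2Gi \
  --image-path=/root/Downloads/<generated-name>-discovery.iso \
  --insecure
oc logout
----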
+
.Sample output:
----
@@ -133,23 +133,23 @@ Uploading data to https://cdi-uploadproxy-openshift-cnv.apps.sno1.lab.example.co
107.46 MiB / 107.46 MiB [============================================================================================================================================================================] 100.00% 0s
Uploading data completed successfully, waiting for processing to complete, you can hit ctrl-c without interrupting the progress
Processing completed successfully
Uploading /root/Downloads/eea97cca-cda5-47b9-bfdf-51929b4a7067-discovery.iso completed successfully
[root@hypervisor ~]# oc logout
----
+
-Verify that the PVC is created on `sno1` cluster.
+Verify that the PVC is created on the `sno1` cluster.
+
-In `sno1` cluster web console, from the left navigation pane; click menu:Storage[PersistentVolumeClaims].
+In the `sno1` cluster web console, from the left navigation pane, click menu:Storage[PersistentVolumeClaims].
+
image::sno1_console_tenant_iso_pvc.png[]
+
[IMPORTANT]
Upload the discovery ISO to the `sno2` and `sno3` clusters by performing the above steps.
-.. Boot the `tcn1.lab.example.com`, `tcn2.lab.example.com` and `tcn3.lab.example.com` VMs with discovery ISO.
+.. Boot the `tcn1.lab.example.com`, `tcn2.lab.example.com`, and `tcn3.lab.example.com` VMs with the discovery ISO.
+
In the `sno1` cluster web console, from the left navigation pane, click menu:Virtualization[VirtualMachines].
+
@@ -179,7 +179,7 @@ Keep the interface as `VirtIO` and click btn:[Save] to add the disk.
+
image::sno1_console_vm_add_disk_iso_1.png[]
+
-Edit the boot order of the `tcn1.lab.example.com` VM from `Configuration` tab, and select `Details`.
+Edit the boot order of the `tcn1.lab.example.com` VM from the `Configuration` tab, and select `Details`.
+
image::sno1_console_vm_boot_order.png[]
+
@@ -198,7 +198,7 @@ image::sno1_console_vm_boot_rhcos.png[]
[IMPORTANT]
Follow the same steps above for the `tcn2.lab.example.com` and `tcn3.lab.example.com` VMs to boot them with the discovery ISO.
-.. Return to the web console of _Hub_ cluster to proceed cluster installation.
+.. Return to the web console of the _Hub_ cluster to proceed with the cluster installation.
+
Approve the discovered host `tcn1.lab.example.com`.
+
@@ -216,7 +216,7 @@ image::hub_console_tenant_approve_host_ready_1.png[]
. In the networking section, ensure all hosts are ready.
+
-Provide the `API IP` and `Ingress IP` from zone file.
+Provide the `API IP` and `Ingress IP` from the zone file.
+
image::hub_console_tenant_networking.png[]
+
@@ -293,8 +293,8 @@ image::hub_console_tenant_install_progress_7.png[]
+
image::hub_console_tenant_install_complete.png[]
-. Notice that the `tenant` cluster is added to the cluster list in `default` cluster set.
+. Notice that the `tenant` cluster is added to the cluster list in the `default` cluster set.
+
image::hub_console_tenant_ready.png[]
+
-This concludes the successful deployment of OpenShift cluster and added to hub cluster using RHACM.
+This concludes the successful deployment of the OpenShift cluster and its addition to the hub cluster using RHACM.
diff --git a/modules/chapter4/pages/section3.adoc b/modules/chapter4/pages/section3.adoc
index 21dbbc0..8464d28 100644
--- a/modules/chapter4/pages/section3.adoc
+++ b/modules/chapter4/pages/section3.adoc
@@ -21,7 +21,7 @@ dr-xr-x---. 13 root root 4096 Aug 22 15:18 ..
-rw-r--r--. 1 root root 12127 Aug 22 15:20 kubeconfig
----
-. Get the `kubeconfig` file and password for `kubeadmin` user from the _Hub_ cluster console.
+. Get the `kubeconfig` file and password for the `kubeadmin` user from the _Hub_ cluster console.
+
image::hub_console_tenant_install_download.png[]
@@ -32,7 +32,7 @@ image::hub_console_tenant_install_download.png[]
[root@hypervisor ~]# mv /root/Downloads/tenant-kubeconfig.yaml /root/tenant/kubeconfig
----
-.. Copy the password for `kubeadmin` user, and paste it in a new tab of a Firefox browser.
+.. Copy the password for the `kubeadmin` user, and paste it in a new tab of a Firefox browser.
+
image::hub_console_tenant_copy_password.png[]
+
@@ -59,7 +59,7 @@ cp /root/tenant/kubeconfig /root/.kube/config
kubepass=$(cat /root/tenant/kubeadmin-password)
----
-. Login to the _tenant_ cluster with the `oc login` command.
+. Log in to the _tenant_ cluster with the `oc login` command.
+
[source,bash,role=execute]
----
@@ -91,13 +91,13 @@ tenant.lab.example.com Ready control-plane,master,worker 10h v1.29.7+6a
[NOTE]
Follow the same steps for the `sno2` and `sno3` clusters.
-== Access the _tenant_ Cluster from Web Console
+== Access the _tenant_ Cluster from the Web Console
-. Get the web console URL from _Hub_ cluster console.
+. Get the web console URL from the _Hub_ cluster console.
+
image::hub_console_tenant_install_download.png[]
+
-. Click on the link from `Web Console URL`.
+. Click on the link from the `Web Console URL`.
+
Click btn:[Advanced...] to proceed.
+
@@ -110,20 +110,20 @@ image::hub_console_tenant_accept_risk.png[]
[NOTE]
You may need to accept the risk twice.
-. Login as `kubadmin` user.
+. Login as the `kubeadmin` user.
+
Get the `kubeadmin` user's password from the _Hub_ cluster console.
+
image::hub_console_tenant_install_download.png[]
+
-Copy the `kubadmin` user's password from from _Hub_ cluster console and paste it in `Password` field.
+Copy the `kubeadmin` user's password from the _Hub_ cluster console and paste it in the `Password` field.
+
image::tenant_console_access.png[]
-. Once you have logged in as `kubadmin` user, this is how the first screen should look:
+. Once you have logged in as the `kubeadmin` user, this is how the first screen should look:
+
image::tenant_console_access_1.png[]
-. Verify _tenant_ cluster is in `Ready` state in _Hub_ cluster console.
+. Verify that the _tenant_ cluster is in the `Ready` state in the _Hub_ cluster console.
+
image::hub_console_tenant_ready.png[]
diff --git a/modules/chapter4/pages/section4.adoc b/modules/chapter4/pages/section4.adoc
index 1552dc0..f27417f 100644
--- a/modules/chapter4/pages/section4.adoc
+++ b/modules/chapter4/pages/section4.adoc
@@ -1,15 +1,15 @@
= Install and Configure Operators
:experimental:
-In this section, you will be installing OpenShift Data Foundation operator on _Tenant_ cluster.
+In this section, you will be installing the OpenShift Data Foundation operator on the _Tenant_ cluster.
image::MCAP_setup_1.png[]
== Prerequisites
-. Verify that _tenant_ cluster is deployed successfully.
+. Verify that the _tenant_ cluster is deployed successfully.
-. Access the _tenant_ cluster via CLI and web console.
+. Access the _tenant_ cluster via CLI and the web console.
. Ensure that all nodes are in `Ready` status and that all cluster operators are available.
+
.Sample output:
----
@@ -29,7 +29,7 @@ version 4.16.8 True False 23h Cluster version is 4.16.8
== Install OpenShift Data Foundation Operator
-. Access the operator hub from the web console of _tenant_ cluster.
+. Access the operator hub from the web console of the _tenant_ cluster.
+
From the left navigation pane, click menu:Operators[OperatorHub].
+
@@ -39,7 +39,7 @@ image::tenant_console_operator_hub.png[]
+
image::tenant_console_odf_install.png[]
-. Click btn:[Install] to open install options.
+. Click btn:[Install] to open the install options.
+
image::tenant_console_odf_install_1.png[]
@@ -47,7 +47,7 @@ image::tenant_console_odf_install_1.png[]
+
image::tenant_console_odf_install_2.png[]
-. After 3-4 minutes, you should notice a message telling you to `Refresh web console` message on window.
+. After 3-4 minutes, you should notice a `Refresh web console` message in the window.
+
First, refresh the web console and then click btn:[Create StorageSystem] to create the resource.
+
@@ -91,7 +91,7 @@ image::tenant_console_odf_install_10.png[]
+
image::tenant_console_odf_install_11.png[]
-.. Note the conditions as `Available, VendorCsv Ready, Vendor System Present`.
+.. Note that the conditions are `Available`, `VendorCsv Ready`, and `Vendor System Present`.
+
image::tenant_console_odf_install_12.png[]
diff --git a/modules/chapter4/pages/section5.adoc b/modules/chapter4/pages/section5.adoc
index 35a92bd..c93e922 100644
--- a/modules/chapter4/pages/section5.adoc
+++ b/modules/chapter4/pages/section5.adoc
@@ -1,7 +1,7 @@
= Install Sample Application on Tenant Cluster
:experimental:
-In this section, you will be installing basic _Node.js_ application on _Tenant_ cluster.
+In this section, you will be installing a basic _Node.js_ application on the _Tenant_ cluster.
You will also be testing the high availability and resilience of the deployed _Node.js_ application.
image::MCAP_setup_1.png[]
@@ -19,7 +19,7 @@ NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
image-registry 4.16.8 True False False 61s
----
-== Install _Node.js_ application on _Tenant_ cluster
+== Install the _Node.js_ application on the _Tenant_ Cluster
. From the left navigation pane, click menu:Administrator[Developer].
+
image::tenant_console_switch_view.png[]
. You will be redirected to the developer view.
+
-Click on text btn:[create a Project].
+Click on the text btn:[create a Project].
+
image::tenant_console_developer_view.png[]
@@ -37,23 +37,23 @@ Name: `sample`.
+
Display name: `sample`.
+
-Description: `This is sample project`.
+Description: `This is a sample project`.
+
Click btn:[Create] to create the project.
+
image::tenant_console_create_project.png[]
-. In the project view, click on text btn:[Add page] to add the sample application.
+. In the project view, click on the text btn:[Add page] to add the sample application.
+
image::tenant_console_nodejs_app_add.png[]
-. In the add page window, click on text btn:[View all samples].
+. In the add page window, click on the text btn:[View all samples].
+
image::tenant_console_nodejs_app_sample.png[]
. In the search section, search for `basic node`.
+
-Select the `Basic Node.js` application from samples.
+Select the `Basic Node.js` application from the samples.
+
image::tenant_console_nodejs_app_search.png[]
@@ -69,7 +69,7 @@ image::tenant_console_nodejs_app_create_1.png[]
. In resources tab, notice that the application is building.
+
-If you click on the URL from routes section, it will open the application page in new tab.
+If you click on the URL from the routes section, it will open the application page in a new tab.
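You can also follow the build and route from the CLI before switching back to the console views shown below. This is a minimal sketch, assuming the `sample` project created above and the `nodejs-basic` resource names used by this sample.

[source,bash,role=execute]
----
# Sketch: watch the sample application from the CLI (names assume the
# Basic Node.js sample in the "sample" project).
oc project sample
oc get builds                # the build should eventually report Complete
oc get pods -o wide          # shows which tcn node runs the application pod
oc get route nodejs-basic    # the same URL as the console's Routes section
----

The `oc get pods -o wide` output is also what the high availability test in the next section uses to see which node the pod lands on.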
+
image::tenant_console_nodejs_app_build_running.png[]
@@ -85,7 +85,7 @@ image::tenant_console_nodejs_app_accept_risk.png[]
+
image::tenant_console_nodejs_app_page_failure.png[]
-. In a minute, you will notice the application is in running state.
+. In a minute, you will notice the application is in the running state.
+
image::tenant_console_nodejs_app_build_success.png[]
@@ -95,7 +95,7 @@ image::tenant_console_nodejs_app_page_success.png[]
== Test the High Availability and Resilience
-. Find out the where the application pod is running.
+. Find out where the application pod is running.
+
.Sample output:
----
@@ -107,13 +107,13 @@ nodejs-basic-6d55569c9c-gps2d 1/1 Running 0 2m51s 10.130.0.
. In this case, the application pod is running on the `tcn2.lab.example.com` node.
+
-This means that the `tcn2.lab.example.com` node is running on `sno2` _Infrastructure_ cluster.
+This means that the `tcn2.lab.example.com` node is running on the `sno2` _Infrastructure_ cluster.
. To test high availability and resilience, first shut down the `sno2` VM, which will shut down the _Infrastructure_ cluster.
+
image::tenant_console_nodejs_app_shutdwon_vm.png[]
+
-Ensure that `sno2` VM is in shutoff state.
+Ensure that the `sno2` VM is in a shutoff state.
+
image::tenant_console_nodejs_app_shutdwon_vm_1.png[]
@@ -121,7 +121,7 @@ image::tenant_console_nodejs_app_page_failure.png[]
-. The `pod-eviction-timeout` and `node-monitor-grace-period` parameters have the default value of `5m` and `40s` respectively.
+. The `pod-eviction-timeout` and `node-monitor-grace-period` parameters have the default values of `5m` and `40s`, respectively.
This means it takes `5m40s` to trigger the pod eviction process after the last node status update.
+
After 5 minutes (eviction timeout), note that the application is successfully migrated to the `tcn1.lab.example.com` node.
@@ -141,4 +141,4 @@ Eviction timeout - https://access.redhat.com/solutions/5359001[How to modify the
+
image::tenant_console_nodejs_app_page_success.png[]
-. This test shows that even if one infrastructure node is down; application automatically migrate to other infrastructure node.
\ No newline at end of file
+. This test shows that even if one infrastructure node is down, the application automatically migrates to another infrastructure node.
\ No newline at end of file
From e02aba144d61bab9c5ad2ca15e13cdecfbfd35d6 Mon Sep 17 00:00:00 2001
From: Sarvesh Pandit
Date: Fri, 27 Sep 2024 22:41:53 +0530
Subject: [PATCH 2/2] Rutuja's feedback incorporated

---
 modules/ROOT/pages/index.adoc        | 4 ----
 modules/chapter1/pages/section1.adoc | 5 ++---
 modules/chapter3/pages/section2.adoc | 2 +-
 3 files changed, 3 insertions(+), 8 deletions(-)

diff --git a/modules/ROOT/pages/index.adoc b/modules/ROOT/pages/index.adoc
index f9505df..34ba01b 100644
--- a/modules/ROOT/pages/index.adoc
+++ b/modules/ROOT/pages/index.adoc
@@ -22,7 +22,6 @@ The PTL team acknowledges the valuable contributions of the following Red Hat as
This introductory course has a few, simple hands-on labs.
You will use the https://demo.redhat.com/catalog?item=babylon-catalog-prod/equinix-metal.eqx-blank.prod&utm_source=webapp&utm_medium=share-link.ocp4-workshop-rhods-base-aws.prod[Equinix Metal baremetal blank server,window=read-later] catalog item in the Red Hat Demo Platform (RHDP) to run the hands-on exercises in this course.
Update the Catalog link - ##FIX THIS##
-// Can you fix the above link?
When ordering this catalog item in RHDP: @@ -35,7 +34,6 @@ When ordering this catalog item in RHDP: Partners should be able to access the new https://partner.demo.redhat.com[Red Hat Demo Platform for Partners,window=read-later] by logging in with their RHN account credentials. For more information on this you can refer https://content.redhat.com/us/en/product/cross-portfolio-initiatives/rhdp.html#tabs-333fa7ebb9-item-b6fc845e73-tab[about Red Hat Demo Platform (RHDP),window=read-later] Catalog link - ##FIX THIS## -// Can you fix the above link? For partner support - https://connect.redhat.com/en/support[Help and support,window=read-later] @@ -46,9 +44,7 @@ For this course, you should have: * Red Hat Certified Systems Administrator certification, or equivalent knowledge of Linux system administration is recommended for all roles. * https://rol.redhat.com/rol/app/courses/do280-4.14[Red Hat OpenShift Administration II: Configuring a Production Cluster (DO280),window=read-later], or equivalent knowledge on configuring a production Red Hat OpenShift cluster. * https://rol.redhat.com/rol/app/technical-overview/do016-4.14[Red Hat OpenShift Virtualization (OCP Virt) Technical Overview (DO016),window=read-later] -// Above course showed - temporarily unavailable for me. Can you please check? * https://rol.redhat.com/rol/app/courses/do316-4.14[Managing Virtual Machines with Red Hat OpenShift Virtualization (DO316),window=read-later], or equivalent knowledge on how to create virtual machines on Red Hat OpenShift cluster using Red Hat OpenShift Virtualization. -// Avove course showed - temporarily unavailable for me. Can you please check? * Knowledge of installing Red Hat Advanced Cluster Management (RHACM) operator == Objectives diff --git a/modules/chapter1/pages/section1.adoc b/modules/chapter1/pages/section1.adoc index 9fc7906..908e194 100644 --- a/modules/chapter1/pages/section1.adoc +++ b/modules/chapter1/pages/section1.adoc @@ -58,8 +58,7 @@ https://linux-kvm.org/page/Main_Page[Linux-KVM,window=read-later] === Utility and Services -The bare metal is acting as a hypervisor, http server, dhcp server, and dns server. -// Should the above names - HTTP, DHCP be capitalized? +The bare metal is acting as a hypervisor, HTTP server, DHCP server, and DNS server. Configure all these services on the bare metal. === Networking @@ -69,7 +68,7 @@ This setup uses KVMs as the base infrastructure. All external communication between your clusters will happen via a https://developers.redhat.com/blog/2018/10/22/introduction-to-linux-interfaces-for-virtual-networking#bridge[virtual bridge,window=read-later] on the bare metal. Install the https://docs.openshift.com/container-platform/4.16/networking/k8s_nmstate/k8s-nmstate-about-the-k8s-nmstate-operator.html[Kubernetes NMState Operator,window=read-later] on the _Infrastructure_ clusters. -NMstate operator allows users to configure various network interface types, DNS, and routing on cluster nodes. +NMState operator allows users to configure various network interface types, DNS, and routing on cluster nodes. Two main object types drive the configuration. * NodeNetworkConfigurationPolicy (Policy) diff --git a/modules/chapter3/pages/section2.adoc b/modules/chapter3/pages/section2.adoc index cf743f8..d49509d 100644 --- a/modules/chapter3/pages/section2.adoc +++ b/modules/chapter3/pages/section2.adoc @@ -240,7 +240,7 @@ image::hub_console_sno1_review_create.png[] + image::hub_console_sno1_install_progress.png[] -. After 7-10 minutes, it waits for pending user action. +. 
After 7-10 minutes, it waits for _Pending user action_. + image::hub_console_sno1_pending_user_actions.png[] +