diff --git a/modules/chapter4/pages/section1.adoc b/modules/chapter4/pages/section1.adoc index 67333d0..f02956d 100644 --- a/modules/chapter4/pages/section1.adoc +++ b/modules/chapter4/pages/section1.adoc @@ -1,13 +1,13 @@ = Tenant VMs Deployment :experimental: -In this section, you will be creating three VMs using OpenShift Virtualization with name `tcn1.lab.example.com`, `tcn2.lab.example.com` and `tcn3.lab.example.com`. +In this section, you will be creating three VMs using OpenShift Virtualization with the names `tcn1.lab.example.com`, `tcn2.lab.example.com`, and `tcn3.lab.example.com`. image::MCAP_setup.png[] == Prerequisites -. Ensure the all sno clusters i.e. _Infrastructure_ clusters are deployed and available. +. Ensure that all SNO clusters, i.e. the _Infrastructure_ clusters, are deployed and available. + .Sample output: ---- @@ -20,9 +20,9 @@ sno2 true https://api.sno2.lab.example.com:6443 True sno3 true https://api.sno3.lab.example.com:6443 True True 68m ---- -. Ensure OpenShift Virtualization operator is installed on _Infrastructure_ clusters. +. Ensure that the OpenShift Virtualization operator is installed on _Infrastructure_ clusters. -. Download `virtctl` command line tool from any SNO’s console. +. Download the `virtctl` command line tool from any SNO’s console. .. Visit the web console home page of `sno1` cluster. + image::sno1_console_cli_tools.png[] .. In this page, scroll down to the `virtctl - KubeVirt command line interface` section. + -Select the `Download virtctl for Linux for x86_64` to open download link in new tab. +Select the `Download virtctl for Linux for x86_64` to open a download link in a new tab. + image::sno1_console_virtctl.png[] + @@ -53,7 +53,7 @@ image::sno1_console_virtctl_2.png[] tar -xzvf /root/Downloads/virtctl.tar.gz ---- -. Move `virtctl` binary to `/usr/local/bin` directory. +. Move the `virtctl` binary to the `/usr/local/bin` directory.
+ [source,bash,role=execute] ---- @@ -66,19 +66,19 @@ mv virtctl /usr/local/bin/ + image::sno1_console_home.png[] + -From left navigation pane, click menu:Virtualization[VirtualMachines]. +From the left navigation pane, click menu:Virtualization[VirtualMachines]. + image::sno1_console_create_vm.png[] -. Create virtual machine from template. +. Create a virtual machine from a template. + Click menu:Create VirtualMachine[From template] + image::sno1_console_create_vm_1.png[] -. Search `rhel9` in template catalog. +. Search for `rhel9` in the template catalog. + -Select the `rhel9` bootable source template from catalog. +Select the `rhel9` bootable source template from the catalog. + image::sno1_console_create_vm_2.png[] @@ -86,11 +86,11 @@ image::sno1_console_create_vm_2.png[] + image::sno1_console_create_vm_3.png[] -. Scroll down in VM create window and update disk size from 30GB to 120GB. +. Scroll down in the VM create window and update the disk size from 30GB to 120GB. + image::sno1_console_create_vm_4.png[] -. Scroll down in VM create window and edit the CPU and memory. +. Scroll down in the VM create window and edit the CPU and memory. + image::sno1_console_create_vm_5.png[] @@ -100,7 +100,7 @@ Click btn:[Customize VirtualMachine] to customize the virtual machine. + image::sno1_console_create_vm_6.png[] -. In virtual machine's overview tab, edit the virtual machine name. +. In the virtual machine's overview tab, edit the virtual machine name. + image::sno1_console_create_vm_7.png[] @@ -112,7 +112,7 @@ image::sno1_console_create_vm_8.png[] + image::sno1_console_create_vm_9.png[] -. To update the network interface, change the tab to network interfaces tab. +. To update the network interface, switch to the network interfaces tab.
+ image::sno1_console_create_vm_10.png[] @@ -120,31 +120,31 @@ image::sno1_console_create_vm_10.png[] + image::sno1_console_create_vm_11.png[] + -Network: Bridge network (in previous chapter created the network attachment definition) +Network: Bridge network (in the previous chapter you created the network attachment definition) + Get the mac address for virtual machine from `/etc/dhcp/dhcpd.conf` file. + image::sno1_console_create_vm_12.png[] -.. Update the mac address of virtual machine. +.. Update the MAC address of the virtual machine. + image::sno1_console_create_vm_13.png[] -.. Ensure all network interface related details are updated. +.. Ensure that all the network interface-related details are updated. + Click btn:[Create VirtualMachine] to create the VM and start the VM. + image::sno1_console_create_vm_14.png[] -. In VM's overview tab, you can see virtual machine is in running state. +. In the VM's overview tab, you can see that the virtual machine is in the running state. + image::sno1_console_create_vm_15.png[] -. Once VM is booted, ensure IP address and hostname is assigned as per the `/etc/dhcp/dhcpd.conf` file. +. Once the VM is booted, ensure that the IP address and the hostname are assigned as per the `/etc/dhcp/dhcpd.conf` file. + image::sno1_console_create_vm_16.png[] == Deploy remaining _Tenant_ VMs on `sno2` and `sno3` clusters -. You can deploy remaining `tcn2.lab.example.com` and `tcn3.lab.example.com` VMs by following steps from previous section followed for `tcn1.lab.example.com` VM deployment. -. Each VM deployment takes 5 to 10 minutes to complete. \ No newline at end of file +. You can deploy the remaining `tcn2.lab.example.com` and `tcn3.lab.example.com` VMs by following the steps from the previous section that you followed for the `tcn1.lab.example.com` VM deployment. +. Each VM deployment takes 5 to 10 minutes to complete.
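Section1.adoc asks the reader to look up each VM's MAC address in `/etc/dhcp/dhcpd.conf` by eye. As a sketch (the `host tcn1` block below is a hypothetical example, not the lab's actual file), the same lookup can be scripted:

```shell
#!/bin/sh
# Extract a host's MAC address from a dhcpd.conf-style host block, the lookup
# section1.adoc asks the reader to do by hand. The config written here is a
# hypothetical stand-in for /etc/dhcp/dhcpd.conf.
set -eu

conf=$(mktemp)
cat > "$conf" <<'EOF'
host tcn1 {
  hardware ethernet 52:54:00:aa:bb:01;
  fixed-address 192.168.51.10;
}
EOF

# Find the tcn1 block, take its "hardware ethernet" line, strip the
# trailing semicolon, and print the MAC.
awk '/host tcn1/{b=1} b && /hardware ethernet/{gsub(";","",$3); print $3; exit}' "$conf"
```

On the real hypervisor you would point the `awk` invocation at `/etc/dhcp/dhcpd.conf` instead of the temporary file.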
diff --git a/modules/chapter4/pages/section2.adoc b/modules/chapter4/pages/section2.adoc index 2723e71..c5ec11a 100644 --- a/modules/chapter4/pages/section2.adoc +++ b/modules/chapter4/pages/section2.adoc @@ -8,17 +8,17 @@ image::MCAP_setup_1.png[] == Prerequisites -`tcn1.lab.example.com`, `tcn2.lab.example.com` and `tcn3.lab.example.com` VMs (created using OpenShift Virtualization) are up and running. +The `tcn1.lab.example.com`, `tcn2.lab.example.com`, and `tcn3.lab.example.com` VMs (created using OpenShift Virtualization) are up and running. == Deploy _Tenant_ cluster as _Three-Node OpenShift Compact_ cluster -. Login to web console of _Hub_ cluster. +. Login to the web console of the _Hub_ cluster. + -Ensure you have switched to the `All Clusters` from `local-cluster`. +Ensure that you have switched to `All Clusters` from `local-cluster`. + image::hub_console_switch.png[] -. Create cluster using btn:[Create cluster]. +. Create the cluster using btn:[Create cluster]. + image::hub_console_create_cluster.png[] @@ -72,15 +72,15 @@ image::hub_console_tenant_review_save.png[] + image::hub_console_tenant_add_host_discovery_iso.png[] -.. Here you need to provide the public ssh key of `root` user. +.. Here you will need to provide the public SSH key of the `root` user. + image::hub_console_tenant_public_key.png[] -.. Get the public ssh key of `root` user. +.. Get the public SSH key of the `root` user. + image::hub_console_tenant_public_key_1.png[] -.. Provide the public ssh key of `root` user in `SSH public key` field. +.. Provide the public SSH key of the `root` user in the `SSH public key` field. + Click btn:[Generate Discovery ISO] to generate discovery ISO. + @@ -90,11 +90,11 @@ image::hub_console_tenant_generate_discovery_iso.png[] + image::hub_console_tenant_download_discovery_iso.png[] -.. Upload iso from `/root/Download` directory to _Infrastructure_ clusters. +.. Upload the ISO from the `/root/Downloads` directory to the _Infrastructure_ clusters. + Login to `sno1` cluster.
+ -.Sample output +.Sample output: ---- [root@hypervisor ~]# oc get nodes NAME STATUS ROLES AGE VERSION @@ -119,9 +119,9 @@ NAME STATUS ROLES AGE VERSION sno1.lab.example.com Ready control-plane,master,worker 20h v1.29.7+4510e9c ---- + -Upload the discovery iso using `virtctl image-upload` command. +Upload the discovery ISO using the `virtctl image-upload` command. + -.Sample output +.Sample output: ---- [root@hypervisor ~]# ls /root/Downloads/ 3b6f60e8-ad5e-4466-a1ad-add735801ad1-discovery.iso ceph-external-cluster-details-exporter.py virtctl.tar.gz @@ -140,18 +140,18 @@ Uploading /root/Downloads/eea97cca-cda5-47b9-bfdf-51929b4a7067-discovery.iso com [root@hypervisor ~]# oc logout ---- + -Verify the PVC is created on `sno1` cluster. +Verify that the PVC is created on the `sno1` cluster. + -In `sno1` cluster web console, from left navigation pane; click menu:Storage[PersistentVolumeClaims]. +In the `sno1` cluster web console, from the left navigation pane, click menu:Storage[PersistentVolumeClaims]. + image::sno1_console_tenant_iso_pvc.png[] + [IMPORTANT] -Upload the discovery iso to `sno2` and `sno3` clusters by performing the above steps. +Upload the discovery ISO to the `sno2` and `sno3` clusters by performing the above steps. .. Boot the `tcn1.lab.example.com`, `tcn2.lab.example.com` and `tcn3.lab.example.com` VMs with discovery ISO. + -In `sno1` cluster web console, from left navigation pane; click menu:Virtualization[VirtualMachines]. +In the `sno1` cluster web console, from the left navigation pane, click menu:Virtualization[VirtualMachines]. + image::sno1_console_create_vm.png[] + @@ -175,11 +175,11 @@ Select menu:Source[PVC] and then select menu:Select PersistentVolumeClaim[tenant + image::sno1_console_vm_add_disk_iso.png[] + -Keep interface as `VirtIO` and click btn:[Save] to add the disk. +Keep the interface as `VirtIO` and click btn:[Save] to add the disk.
+ image::sno1_console_vm_add_disk_iso_1.png[] + -Edit the boot order of the `tcn1.lab.example.com` VM from `Configuration` tab, select `Details`. +Edit the boot order of the `tcn1.lab.example.com` VM from the `Configuration` tab, and select `Details`. + image::sno1_console_vm_boot_order.png[] + @@ -196,25 +196,25 @@ Ensure the `tcn1.lab.example.com` VM boots with discovery ISO. image::sno1_console_vm_boot_rhcos.png[] + [IMPORTANT] -Follow the same above steps for `tcn2.lab.example.com` and `tcn3.lab.example.com` VMs to boot them with discovery ISO. +Follow the same steps above for the `tcn2.lab.example.com` and `tcn3.lab.example.com` VMs to boot them with the discovery ISO. -.. Back to web console of _Hub_ cluster to proceed cluster installation. +.. Return to the web console of the _Hub_ cluster to proceed with the cluster installation. + Approve the discovered host `tcn1.lab.example.com`. + image::hub_console_tenant_approve_host.png[] + -Ensure the discovered host `tcn1.lab.example.com` is ready. +Ensure that the discovered host `tcn1.lab.example.com` is ready. + image::hub_console_tenant_approve_host_ready.png[] + -Similarly approve remaining hosts `tcn2.lab.example.com` and `tcn3.lab.example.com`. +Approve the remaining hosts `tcn2.lab.example.com` and `tcn3.lab.example.com`. + Click btn:[Next] to proceed. + image::hub_console_tenant_approve_host_ready_1.png[] -. In networking section, ensure all hosts are ready. +. In the networking section, ensure that all hosts are ready. + Provide the `API IP` and `Ingress IP` from zone file. + @@ -224,11 +224,11 @@ Click btn:[Next] to proceed. + image::hub_console_tenant_networking_ready.png[] -. If you notice `All checks passed` for cluster and host validations then click btn:[Install cluster]. +. If you notice `All checks passed` for the cluster and host validations, then click btn:[Install cluster]. + image::hub_console_tenant_review_create.png[] -. Notice the installation has started. +. Notice that the installation has started.
+ image::hub_console_tenant_install_progress.png[] + @@ -242,11 +242,11 @@ image::hub_console_tenant_install_progress_3.png[] + image::hub_console_tenant_pending_user_actions.png[] + -This means you need to disconnect the discovery ISO from the `tcn3.lab.example.com` VM and boot the `tcn3.lab.example.com` VM from disk. +This means that you will need to disconnect the discovery ISO from the `tcn3.lab.example.com` VM and boot the `tcn3.lab.example.com` VM from disk. + image::hub_console_tenant_pending_user_actions_1.png[] + -This means you need to disconnect the discovery ISO from the `tcn2.lab.example.com` VM and boot the `tcn2.lab.example.com` VM from disk. +This means you will also need to disconnect the discovery ISO from the `tcn2.lab.example.com` VM and boot the `tcn2.lab.example.com` VM from disk. .. Shutdown the `tcn2.lab.example.com` VM. + @@ -265,15 +265,15 @@ Ensure the `tcn2.lab.example.com` VM boots from disk. image::sno1_console_vm_boot_tcn2.png[] + [IMPORTANT] -Follow the same above steps to boot the `tcn3.lab.example.com` VM from disk. +Follow the same steps above to boot the `tcn3.lab.example.com` VM from disk. . After 2 minutes, installation proceeds and you will notice the progress. + -After 5 minutes, `tcn2.lab.example.com` and `tcn3.lab.example.com` nodes are installed. +After 5 minutes, the `tcn2.lab.example.com` and `tcn3.lab.example.com` nodes are installed. + image::hub_console_tenant_install_progress_4.png[] -. Installation proceeds and continue with `tcn1.lab.example.com` node. +. Installation proceeds and continues with the `tcn1.lab.example.com` node. + image::hub_console_tenant_install_progress_5.png[] + @@ -283,18 +283,18 @@ image::hub_console_tenant_install_progress_6.png[] + image::hub_console_tenant_pending_user_actions_2.png[] + -This means you need to disconnect the discovery ISO from the `tcn1.lab.example.com` VM and boot the `tcn1.lab.example.com` VM from disk.
+This means you will need to disconnect the discovery ISO from the `tcn1.lab.example.com` VM and boot the `tcn1.lab.example.com` VM from disk. + -Follow the same steps which were followed for `tcn2.lab.example.com` VM to boot the `tcn1.lab.example.com` VM from disk. +Follow the same steps you followed for the `tcn2.lab.example.com` VM to boot the `tcn1.lab.example.com` VM from disk. -. Installation completes in approximately in 20 minutes. +. Installation completes in approximately 20 minutes. + image::hub_console_tenant_install_progress_7.png[] + image::hub_console_tenant_install_complete.png[] ##FIX THIS## -. Notice the `tenant` cluster is added to cluster list in `default` cluster set. +. Notice that the `tenant` cluster is added to the cluster list in the `default` cluster set. + image::hub_console_tenant_ready.png[] + -This concludes successful deployment of OpenShift cluster and added to hub cluster using RHACM. \ No newline at end of file +This concludes the successful deployment of the OpenShift cluster and its addition to the hub cluster using RHACM. diff --git a/modules/chapter4/pages/section3.adoc b/modules/chapter4/pages/section3.adoc index 26ae361..21dbbc0 100644 --- a/modules/chapter4/pages/section3.adoc +++ b/modules/chapter4/pages/section3.adoc @@ -7,9 +7,9 @@ image::MCAP_setup_1.png[] == Prerequisites -. Ensure that _Tenant_ cluster is deployed successfully. +. Ensure that the _Tenant_ cluster is deployed successfully. -. Ensure `/root/tenant/` directory and file structure created. +. Ensure that the `/root/tenant/` directory and file structure are created. + .Sample output: ---- @@ -25,23 +25,23 @@ dr-xr-x---. 13 root root 4096 Aug 22 15:18 .. + image::hub_console_tenant_install_download.png[] -.. Download the `kubeconfig` file to hypervisor, and then copy to `/root/tenant` directory on hypervisor. +.. Download the `kubeconfig` file to the hypervisor, and then copy it to the `/root/tenant` directory on the hypervisor.
+ .Sample output: ---- [root@hypervisor ~]# mv /root/Downloads/tenant-kubeconfig.yaml /root/tenant/kubeconfig ---- -.. Copy the password for `kubeadmin` user, and paste it in new tab of a Firefox browser. +.. Copy the password for the `kubeadmin` user, and paste it in a new tab of a Firefox browser. + image::hub_console_tenant_copy_password.png[] + -Copy the password from the tab of Firefox browser, and paste it in `/root/tenant/kubeadmin-password` file. +Copy the password from the tab of the Firefox browser, and paste it in the `/root/tenant/kubeadmin-password` file. + image::hub_console_tenant_copy_password_1.png[] [IMPORTANT] -Follow the same steps for `sno2` and `sno3` clusters. +Follow the same steps for the `sno2` and `sno3` clusters. == Access the _tenant_ Cluster via CLI @@ -52,14 +52,14 @@ Follow the same steps for `sno2` and `sno3` clusters. cp /root/tenant/kubeconfig /root/.kube/config ---- -. Set the `kubepass` variable as `kubeadmin` user's password. +. Set the `kubepass` variable as the `kubeadmin` user's password. + [source,bash,role=execute] ---- kubepass=$(cat /root/tenant/kubeadmin-password) ---- -. Login to _tenant_ cluster with `oc login` command. +. Login to the _tenant_ cluster with the `oc login` command. + [source,bash,role=execute] ---- @@ -89,11 +89,11 @@ tenant.lab.example.com Ready control-plane,master,worker 10h v1.29.7+6a ---- [NOTE] -Follow the same steps for `sno2` and `sno3` clusters. +Follow the same steps for the `sno2` and `sno3` clusters. == Access the _tenant_ Cluster from Web Console -. Get the web console url from _Hub_ cluster console. +. Get the web console URL from the _Hub_ cluster console. + image::hub_console_tenant_install_download.png[] + @@ -126,4 +126,4 @@ image::tenant_console_access_1.png[] . Verify _tenant_ cluster is in `Ready` state in _Hub_ cluster console. + -image::hub_console_tenant_ready.png[] \ No newline at end of file +image::hub_console_tenant_ready.png[]
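The CLI access steps in section3.adoc (copy the kubeconfig, read the saved password, run `oc login`) can be condensed into one shell sketch. The `/root/tenant` layout and the `api.tenant.lab.example.com:6443` endpoint come from the document; here a throwaway directory with a demo password stands in for `/root/tenant`, and the `oc login` command is printed rather than executed, since it needs the live cluster:

```shell
#!/bin/sh
# Sketch of the tenant-cluster CLI login flow from section3.adoc.
set -eu

# Stand-in for /root/tenant, which on the lab hypervisor holds the
# downloaded kubeconfig and the saved kubeadmin password.
tenant_dir=$(mktemp -d)
printf 'demo-password\n' > "$tenant_dir/kubeadmin-password"

# Step from the doc: read the saved kubeadmin password into a variable.
kubepass=$(cat "$tenant_dir/kubeadmin-password")

# Step from the doc: log in with oc. Printed, not executed, because it
# needs the live api.tenant.lab.example.com endpoint.
printf 'oc login -u kubeadmin -p %s https://api.tenant.lab.example.com:6443\n' "$kubepass"
```

In the lab itself, `cp /root/tenant/kubeconfig /root/.kube/config` comes first, exactly as the document shows, and the final line would run `oc login` directly instead of printing it.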