Annie ch 4 #7

Merged: 5 commits, Sep 12, 2024
42 changes: 21 additions & 21 deletions modules/chapter4/pages/section1.adoc
@@ -1,13 +1,13 @@
= Tenant VMs Deployment
:experimental:

In this section, you will be creating three VMs using OpenShift Virtualization with name `tcn1.lab.example.com`, `tcn2.lab.example.com` and `tcn3.lab.example.com`.
In this section, you will create three VMs using OpenShift Virtualization, named `tcn1.lab.example.com`, `tcn2.lab.example.com`, and `tcn3.lab.example.com`.

image::MCAP_setup.png[]

== Prerequisites

. Ensure the all sno clusters i.e. _Infrastructure_ clusters are deployed and available.
. Ensure that all the SNO clusters, that is, the _Infrastructure_ clusters, are deployed and available.
+
.Sample output:
----
@@ -20,9 +20,9 @@ sno2 true https://api.sno2.lab.example.com:6443 True
sno3 true https://api.sno3.lab.example.com:6443 True True 68m
----
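+
The command that produces this output is collapsed in the excerpt above; a hedged equivalent, assuming you are logged in to the _Hub_ cluster, is:
+
[source,bash,role=execute]
----
# Lists the clusters managed by RHACM and whether they are joined and available.
oc get managedclusters
----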

. Ensure OpenShift Virtualization operator is installed on _Infrastructure_ clusters.
. Ensure that the OpenShift Virtualization operator is installed on all _Infrastructure_ clusters.
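+
A quick CLI check is sketched below, assuming the operator is installed in its default `openshift-cnv` namespace; the console Operators page works equally well.
+
[source,bash,role=execute]
----
# The CSV should report the Succeeded phase on each SNO cluster.
oc get csv -n openshift-cnv
----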

. Download `virtctl` command line tool from any SNO’s console.
. Download the `virtctl` command line tool from any SNO’s console.

.. Visit the web console home page of the `sno1` cluster.
+
@@ -34,7 +34,7 @@ image::sno1_console_cli_tools.png[]

.. On this page, scroll down to the `virtctl - KubeVirt command line interface` section.
+
Select the `Download virtctl for Linux for x86_64` to open download link in new tab.
Select `Download virtctl for Linux for x86_64` to open the download link in a new tab.
+
image::sno1_console_virtctl.png[]
+
@@ -53,7 +53,7 @@ image::sno1_console_virtctl_2.png[]
tar -xzvf /root/Downloads/virtctl.tar.gz
----

. Move `virtctl` binary to `/usr/local/bin` directory.
. Move the `virtctl` binary to the `/usr/local/bin` directory.
+
[source,bash,role=execute]
----
@@ -66,31 +66,31 @@ mv virtctl /usr/local/bin/
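# Optional sanity check (an assumption, not part of the original steps):
# confirm the binary is on the PATH and runs.
virtctl version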
+
image::sno1_console_home.png[]
+
From left navigation pane, click menu:Virtualization[VirtualMachines].
From the left navigation pane, click menu:Virtualization[VirtualMachines].
+
image::sno1_console_create_vm.png[]

. Create virtual machine from template.
. Create a virtual machine from a template.
+
Click menu:Create VirtualMachine[From template]
+
image::sno1_console_create_vm_1.png[]

. Search `rhel9` in template catalog.
. Search for `rhel9` in the template catalog.
+
Select the `rhel9` bootable source template from catalog.
Select the `rhel9` bootable source template from the catalog.
+
image::sno1_console_create_vm_2.png[]

. This is the VM create window.
+
image::sno1_console_create_vm_3.png[]

. Scroll down in VM create window and update disk size from 30GB to 120GB.
. Scroll down in the VM create window and update the disk size from 30 GB to 120 GB.
+
image::sno1_console_create_vm_4.png[]

. Scroll down in VM create window and edit the CPU and memory.
. Scroll down in the VM create window and edit the CPU and memory.
+
image::sno1_console_create_vm_5.png[]

@@ -100,7 +100,7 @@ Click btn:[Customize VirtualMachine] to customize the virtual machine.
+
image::sno1_console_create_vm_6.png[]

. In virtual machine's overview tab, edit the virtual machine name.
. In the virtual machine's overview tab, edit the virtual machine name.
+
image::sno1_console_create_vm_7.png[]

@@ -112,39 +112,39 @@ image::sno1_console_create_vm_8.png[]
+
image::sno1_console_create_vm_9.png[]

. To update the network interface, change the tab to network interfaces tab.
. To update the network interface, switch to the network interfaces tab.
+
image::sno1_console_create_vm_10.png[]

.. Edit the network interface of the virtual machine.
+
image::sno1_console_create_vm_11.png[]
+
Network: Bridge network (in previous chapter created the network attachment definition)
Network: Bridge network (the network attachment definition you created in the previous chapter)
+
Get the MAC address for the virtual machine from the `/etc/dhcp/dhcpd.conf` file.
+
image::sno1_console_create_vm_12.png[]

.. Update the mac address of virtual machine.
.. Update the MAC address of the virtual machine.
+
image::sno1_console_create_vm_13.png[]

.. Ensure all network interface related details are updated.
.. Ensure that all the network interface-related details are updated.
+
Click btn:[Create VirtualMachine] to create and start the VM.
+
image::sno1_console_create_vm_14.png[]

. In VM's overview tab, you can see virtual machine is in running state.
. In the VM's overview tab, you can see that the virtual machine is in the running state.
+
image::sno1_console_create_vm_15.png[]

. Once VM is booted, ensure IP address and hostname is assigned as per the `/etc/dhcp/dhcpd.conf` file.
. Once the VM is booted, ensure that the IP address and the hostname are assigned as per the `/etc/dhcp/dhcpd.conf` file (a sample entry is sketched below).
+
image::sno1_console_create_vm_16.png[]
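+
For reference, a static host entry in `/etc/dhcp/dhcpd.conf` typically looks like the hedged sketch below; the MAC address and IP address are placeholders, not the lab's actual values.
+
----
host tcn1 {
    hardware ethernet 52:54:00:aa:bb:01;       # placeholder MAC address
    fixed-address 192.168.50.11;               # placeholder IP address
    option host-name "tcn1.lab.example.com";
}
----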

== Deploy remaining _Tenant_ VMs on `sno2` and `sno3` clusters

. You can deploy remaining `tcn2.lab.example.com` and `tcn3.lab.example.com` VMs by following steps from previous section followed for `tcn1.lab.example.com` VM deployment.
. Each VM deployment takes 5 to 10 minutes to complete.
. You can deploy the remaining `tcn2.lab.example.com` and `tcn3.lab.example.com` VMs by following the same steps from the previous section that you used for the `tcn1.lab.example.com` VM deployment.
. Each VM deployment takes 5 to 10 minutes to complete.
68 changes: 34 additions & 34 deletions modules/chapter4/pages/section2.adoc
@@ -8,17 +8,17 @@ image::MCAP_setup_1.png[]

== Prerequisites

`tcn1.lab.example.com`, `tcn2.lab.example.com` and `tcn3.lab.example.com` VMs (created using OpenShift Virtualization) are up and running.
The `tcn1.lab.example.com`, `tcn2.lab.example.com`, and `tcn3.lab.example.com` VMs (created using OpenShift Virtualization) are up and running.
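
A hedged CLI check from each _Infrastructure_ cluster is sketched below; the console VirtualMachines page shows the same information.

[source,bash,role=execute]
----
# Running VMs appear as VirtualMachineInstances in the Running phase.
oc get vmi -A
----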

== Deploy _Tenant_ cluster as _Three-Node OpenShift Compact_ cluster

. Login to web console of _Hub_ cluster.
. Log in to the web console of the _Hub_ cluster.
+
Ensure you have switched to the `All Clusters` from `local-cluster`.
Ensure that you have switched to `All Clusters` from `local-cluster`.
+
image::hub_console_switch.png[]

. Create cluster using btn:[Create cluster].
. Create the cluster using btn:[Create cluster].
+
image::hub_console_create_cluster.png[]

@@ -72,15 +72,15 @@ image::hub_console_tenant_review_save.png[]
+
image::hub_console_tenant_add_host_discovery_iso.png[]

.. Here you need to provide the public ssh key of `root` user.
.. Here you will need to provide the public SSH key of the `root` user.
+
image::hub_console_tenant_public_key.png[]

.. Get the public ssh key of `root` user.
.. Get the public SSH key of the `root` user, as shown below.
+
image::hub_console_tenant_public_key_1.png[]
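+
A hedged example of retrieving the key on the hypervisor, assuming an existing RSA key pair (generate one with `ssh-keygen` if none exists):
+
[source,bash,role=execute]
----
# Print the root user's public key so it can be pasted into the SSH public key field.
cat /root/.ssh/id_rsa.pub
----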

.. Provide the public ssh key of `root` user in `SSH public key` field.
.. Provide the public SSH key of the `root` user in the `SSH public key` field.
+
Click btn:[Generate Discovery ISO] to generate the discovery ISO.
+
@@ -90,11 +90,11 @@ image::hub_console_tenant_generate_discovery_iso.png[]
+
image::hub_console_tenant_download_discovery_iso.png[]

.. Upload iso from `/root/Download` directory to _Infrastructure_ clusters.
.. Upload the ISO from the `/root/Downloads` directory to the _Infrastructure_ clusters.
+
Log in to the `sno1` cluster.
+
.Sample output
.Sample output:
----
[root@hypervisor ~]# oc get nodes
NAME STATUS ROLES AGE VERSION
@@ -119,9 +119,9 @@ NAME STATUS ROLES AGE VERSION
sno1.lab.example.com Ready control-plane,master,worker 20h v1.29.7+4510e9c
----
+
Upload the discovery iso using `virtctl image-upload` command.
Upload the discovery ISO using the `virtctl image-upload` command.
+
.Sample output
.Sample output:
----
[root@hypervisor ~]# ls /root/Downloads/
3b6f60e8-ad5e-4466-a1ad-add735801ad1-discovery.iso ceph-external-cluster-details-exporter.py virtctl.tar.gz
@@ -140,18 +140,18 @@ Uploading /root/Downloads/eea97cca-cda5-47b9-bfdf-51929b4a7067-discovery.iso com
[root@hypervisor ~]# oc logout
----
+
Verify the PVC is created on `sno1` cluster.
Verify that the PVC is created on the `sno1` cluster.
+
In `sno1` cluster web console, from left navigation pane; click menu:Storage[PersistentVolumeClaims].
In the `sno1` cluster web console, from the left navigation pane, click menu:Storage[PersistentVolumeClaims].
+
image::sno1_console_tenant_iso_pvc.png[]
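+
Equivalently, a hedged CLI check (log in to the `sno1` cluster with `oc` first):
+
[source,bash,role=execute]
----
# The uploaded discovery ISO should appear as a Bound PVC.
oc get pvc -A
----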
+
[IMPORTANT]
Upload the discovery iso to `sno2` and `sno3` clusters by performing the above steps.
Upload the discovery ISO to the `sno2` and `sno3` clusters by repeating the steps above.

.. Boot the `tcn1.lab.example.com`, `tcn2.lab.example.com`, and `tcn3.lab.example.com` VMs with the discovery ISO.
+
In `sno1` cluster web console, from left navigation pane; click menu:Virtualization[VirtualMachines].
In the `sno1` cluster web console, from the left navigation pane, click menu:Virtualization[VirtualMachines].
+
image::sno1_console_create_vm.png[]
+
@@ -175,11 +175,11 @@ Select menu:Source[PVC] and then select menu:Select PersistentVolumeClaim[tenant
+
image::sno1_console_vm_add_disk_iso.png[]
+
Keep interface as `VirtIO` and click btn:[Save] to add the disk.
Keep the interface as `VirtIO` and click btn:[Save] to add the disk.
+
image::sno1_console_vm_add_disk_iso_1.png[]
+
Edit the boot order of the `tcn1.lab.example.com` VM from `Configuration` tab, select `Details`.
Edit the boot order of the `tcn1.lab.example.com` VM: from the `Configuration` tab, select `Details`.
+
image::sno1_console_vm_boot_order.png[]
+
@@ -196,25 +196,25 @@ Ensure the `tcn1.lab.example.com` VM boots with discovery ISO.
image::sno1_console_vm_boot_rhcos.png[]
+
[IMPORTANT]
Follow the same above steps for `tcn2.lab.example.com` and `tcn3.lab.example.com` VMs to boot them with discovery ISO.
Follow the same steps above for the `tcn2.lab.example.com` and `tcn3.lab.example.com` VMs to boot them with the discovery ISO.

.. Back to web console of _Hub_ cluster to proceed cluster installation.
.. Return to the web console of the _Hub_ cluster to proceed with the cluster installation.
+
Approve the discovered host `tcn1.lab.example.com`.
+
image::hub_console_tenant_approve_host.png[]
+
Ensure the discovered host `tcn1.lab.example.com` is ready.
Ensure that the discovered host `tcn1.lab.example.com` is ready.
+
image::hub_console_tenant_approve_host_ready.png[]
+
Similarly approve remaining hosts `tcn2.lab.example.com` and `tcn3.lab.example.com`.
Approve the remaining hosts `tcn2.lab.example.com` and `tcn3.lab.example.com`.
+
Click btn:[Next] to proceed.
+
image::hub_console_tenant_approve_host_ready_1.png[]

. In networking section, ensure all hosts are ready.
. In the networking section, ensure all hosts are ready.
+
Provide the `API IP` and `Ingress IP` from the zone file (a sample set of records is sketched below).
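+
For orientation only, the corresponding records in a BIND zone file typically look like the hedged sketch below; the cluster name and IP addresses are placeholders, not the lab's actual values.
+
----
api.tenant.lab.example.com.     IN A 192.168.50.30   ; placeholder API VIP
*.apps.tenant.lab.example.com.  IN A 192.168.50.31   ; placeholder Ingress VIP
----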
+
@@ -224,11 +224,11 @@ Click btn:[Next] to proceed.
+
image::hub_console_tenant_networking_ready.png[]

. If you notice `All checks passed` for cluster and host validations then click btn:[Install cluster].
. If you notice `All checks passed` for the cluster and host validations, then click btn:[Install cluster].
+
image::hub_console_tenant_review_create.png[]

. Notice the installation has started.
. Notice that the installation has started.
+
image::hub_console_tenant_install_progress.png[]
+
@@ -242,11 +242,11 @@ image::hub_console_tenant_install_progress_3.png[]
+
image::hub_console_tenant_pending_user_actions.png[]
+
This means you need to disconnect the discovery ISO from the `tcn3.lab.example.com` VM and boot the `tcn3.lab.example.com` VM from disk.
This means that you will need to disconnect the discovery ISO from the `tcn3.lab.example.com` VM and boot the `tcn3.lab.example.com` VM from disk.
+
image::hub_console_tenant_pending_user_actions_1.png[]
+
This means you need to disconnect the discovery ISO from the `tcn2.lab.example.com` VM and boot the `tcn2.lab.example.com` VM from disk.
This means you will also need to disconnect the discovery ISO from the `tcn2.lab.example.com` VM and boot the `tcn2.lab.example.com` VM from disk.

.. Shut down the `tcn2.lab.example.com` VM.
+
@@ -265,15 +265,15 @@ Ensure the `tcn2.lab.example.com` VM boots from disk.
image::sno1_console_vm_boot_tcn2.png[]
+
[IMPORTANT]
Follow the same above steps to boot the `tcn3.lab.example.com` VM from disk.
Follow the same steps above to boot the `tcn3.lab.example.com` VM from disk.
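+
If you prefer the CLI, a hedged equivalent using `virtctl` is sketched below, assuming the VM object is named `tcn2` in the `default` namespace of the owning SNO cluster.
+
[source,bash,role=execute]
----
# Stop the VM, adjust the disks and boot order in the console, then start it again.
virtctl stop tcn2 --namespace default
virtctl start tcn2 --namespace default
----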

. After about 2 minutes, the installation proceeds and you will see the progress.
+
After 5 minutes, `tcn2.lab.example.com` and `tcn3.lab.example.com` nodes are installed.
After 5 minutes, the `tcn2.lab.example.com` and `tcn3.lab.example.com` nodes are installed.
+
image::hub_console_tenant_install_progress_4.png[]

. Installation proceeds and continue with `tcn1.lab.example.com` node.
. The installation proceeds and continues with the `tcn1.lab.example.com` node.
+
image::hub_console_tenant_install_progress_5.png[]
+
@@ -283,18 +283,18 @@ image::hub_console_tenant_install_progress_6.png[]
+
image::hub_console_tenant_pending_user_actions_2.png[]
+
This means you need to disconnect the discovery ISO from the `tcn1.lab.example.com` VM and boot the `tcn1.lab.example.com` VM from disk.
This means you will need to disconnect the discovery ISO from the `tcn1.lab.example.com` VM and boot the `tcn1.lab.example.com` VM from disk.
+
Follow the same steps which were followed for `tcn2.lab.example.com` VM to boot the `tcn1.lab.example.com` VM from disk.
Follow the same steps that you followed for the `tcn2.lab.example.com` VM to boot the `tcn1.lab.example.com` VM from disk.

. Installation completes in approximately in 20 minutes.
. The installation completes in approximately 20 minutes.
+
image::hub_console_tenant_install_progress_7.png[]
+
image::hub_console_tenant_install_complete.png[] ##FIX THIS##

. Notice the `tenant` cluster is added to cluster list in `default` cluster set.
. Notice that the `tenant` cluster is added to the cluster list in the `default` cluster set.
+
image::hub_console_tenant_ready.png[]
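+
A hedged CLI confirmation from the _Hub_ cluster should now list the new cluster as well:
+
[source,bash,role=execute]
----
# The tenant cluster appears alongside the SNO clusters once the import completes.
oc get managedclusters
----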
+
This concludes successful deployment of OpenShift cluster and added to hub cluster using RHACM.
This concludes the successful deployment of the OpenShift cluster and its addition to the hub cluster using RHACM.