
Commit 02ef3c6

Merge pull request #80716 from lahinson/osdocs-11001-move-hcp-virt
[OSDOCS-11001]: Move HCP virt content to OCP
2 parents e9c06a6 + 33223aa commit 02ef3c6

17 files changed: 945 additions and 0 deletions

hosted_control_planes/hcp-deploy/hcp-deploy-virt.adoc

Lines changed: 73 additions & 0 deletions
@@ -6,3 +6,76 @@ include::_attributes/common-attributes.adoc[]

toc::[]

With {hcp} and {VirtProductName}, you can create {product-title} clusters with worker nodes that are hosted by KubeVirt virtual machines. {hcp-capital} on {VirtProductName} provides several benefits:

* Enhances resource usage by packing {hcp} and hosted clusters in the same underlying bare metal infrastructure
* Separates {hcp} and hosted clusters to provide strong isolation
* Reduces cluster provisioning time by eliminating the bare metal node bootstrapping process
* Manages many releases under the same base {product-title} cluster

The {hcp} feature is enabled by default.

You can use the hosted control plane command-line interface, `hcp`, to create an {product-title} hosted cluster. The hosted cluster is automatically imported as a managed cluster. If you want to disable this automatic import feature, see _Disabling the automatic import of hosted clusters into {mce-short}_.
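
The detailed creation procedures follow later in this section; as a quick orientation, a minimal `hcp` invocation might look like the following sketch. The flags match the examples used in the modules on this page, and all values, such as the cluster name `example`, are placeholders that you replace with your own:

[source,terminal]
----
$ hcp create cluster kubevirt \
  --name example \
  --node-pool-replicas 2 \
  --pull-secret /user/name/pullsecret \
  --memory 8Gi \
  --cores 2
----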

include::modules/hcp-virt-reqs.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* xref:../../scalability_and_performance/recommended-performance-scale-practices/recommended-etcd-practices.adoc#recommended-etcd-practices[Recommended etcd practices]
* xref:../../storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc[Persistent storage using LVM Storage]
* To disable the {hcp} feature, or to manually enable it after you disabled it, see _Enabling or disabling the {hcp} feature_.
* To manage hosted clusters by running Ansible Automation Platform jobs, see link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/clusters/cluster_mce_overview#ansible-config-hosted-cluster[Configuring Ansible Automation Platform jobs to run on hosted clusters].
* If you want to disable the automatic import feature, see _Disabling the automatic import of hosted clusters into {mce-short}_.
[id="hcp-virt-create-hc"]
== Creating a hosted cluster with the KubeVirt platform

With {product-title} 4.14 and later, you can create a cluster with KubeVirt, including by using external infrastructure.

include::modules/hcp-virt-create-hc-cli.adoc[leveloffset=+2]
include::modules/hcp-virt-create-hc-ext-infra.adoc[leveloffset=+2]
include::modules/hcp-virt-create-hc-console.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources

* To create credentials that you can reuse when you create a hosted cluster with the console, see link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/clusters/cluster_mce_overview#creating-a-credential-for-an-on-premises-environment[Creating a credential for an on-premises environment].

include::modules/hcp-virt-ingress-dns.adoc[leveloffset=+1]
[id="hcp-virt-ingress-dns-custom"]
== Customizing ingress and DNS behavior

If you do not want to use the default ingress and DNS behavior, you can configure a KubeVirt hosted cluster with a unique base domain at creation time. This option requires manual configuration during creation and involves three main steps: cluster creation, load balancer creation, and wildcard DNS configuration.
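
To make the three steps concrete, assume a cluster named `example` with the illustrative base domain `hypershift.lab`: you create the cluster with that base domain, create a load balancer that routes traffic to the KubeVirt VMs, and then add a wildcard DNS record that resolves the cluster's application domain to the load balancer address. Such a record might look like the following sketch; the domain and the `<load_balancer_ip>` value are assumptions, not output from this procedure:

----
*.apps.example.hypershift.lab. IN A <load_balancer_ip>
----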

include::modules/hcp-virt-hc-base-domain.adoc[leveloffset=+2]
include::modules/hcp-virt-load-balancer.adoc[leveloffset=+2]
include::modules/hcp-virt-wildcard-dns.adoc[leveloffset=+2]

include::modules/hcp-metallb.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* For more information about MetalLB, see xref:../../networking/metallb/metallb-operator-install.adoc#metallb-operator-install[Installing the MetalLB Operator].
[id="hcp-virt-addl-resources"]
== Configuring additional networks, guaranteed CPUs, and VM scheduling for node pools

If you need to configure additional networks for node pools, request guaranteed CPU access for virtual machines (VMs), or manage scheduling of KubeVirt VMs, see the following procedures.

include::modules/hcp-virt-add-networks.adoc[leveloffset=+2]
include::modules/hcp-virt-addl-network.adoc[leveloffset=+3]
include::modules/hcp-virt-guaranteed-cpus.adoc[leveloffset=+2]
include::modules/hcp-virt-sched-vms.adoc[leveloffset=+2]

include::modules/hcp-virt-scale-nodepool.adoc[leveloffset=+1]
include::modules/hcp-virt-add-node.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources

* To scale down the data plane to zero, see link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.15/html/hosted_control_planes/troubleshooting-hosted-control-planes#scale-down-data-plane_hcp-troubleshooting[Scaling down the data plane to zero].

include::modules/hcp-virt-verify-hc.adoc[leveloffset=+1]

modules/hcp-metallb.adoc

Lines changed: 90 additions & 0 deletions
@@ -0,0 +1,90 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-deploy-virt.adoc

:_mod-docs-content-type: PROCEDURE
[id="hcp-metallb_{context}"]
= Optional: Configuring MetalLB

You must install the MetalLB Operator before you configure MetalLB.
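
Optionally, before you configure MetalLB, you can verify that the Operator pods are running. This check is a sketch that assumes the Operator runs in the `metallb-system` namespace, which the resources in this procedure also use:

[source,terminal]
----
$ oc get pods -n metallb-system
----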

.Procedure

Complete the following steps to configure MetalLB on your hosted cluster:

. Create a `MetalLB` resource by saving the following sample YAML content in the `configure-metallb.yaml` file:
+
[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
----

. Apply the YAML content by entering the following command:
+
[source,terminal]
----
$ oc apply -f configure-metallb.yaml
----
+
.Example output
----
metallb.metallb.io/metallb created
----

. Create an `IPAddressPool` resource by saving the following sample YAML content in the `create-ip-address-pool.yaml` file:
+
[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb
  namespace: metallb-system
spec:
  addresses:
  - 192.168.216.32-192.168.216.122 <1>
----
+
<1> Create an address pool with an available range of IP addresses within the node network. Replace the IP address range with an unused pool of available IP addresses in your network.

. Apply the YAML content by entering the following command:
+
[source,terminal]
----
$ oc apply -f create-ip-address-pool.yaml
----
+
.Example output
----
ipaddresspool.metallb.io/metallb created
----

. Create an `L2Advertisement` resource by saving the following sample YAML content in the `l2advertisement.yaml` file:
+
[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - metallb
----

. Apply the YAML content by entering the following command:
+
[source,terminal]
----
$ oc apply -f l2advertisement.yaml
----
+
.Example output
----
l2advertisement.metallb.io/l2advertisement created
----

modules/hcp-virt-add-networks.adoc

Lines changed: 32 additions & 0 deletions
@@ -0,0 +1,32 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-deploy-virt.adoc

:_mod-docs-content-type: PROCEDURE
[id="hcp-virt-add-networks_{context}"]
= Adding multiple networks to a node pool

By default, nodes generated by a node pool are attached to the pod network. You can attach additional networks to the nodes by using Multus and NetworkAttachmentDefinition resources.

.Procedure

To add multiple networks to nodes, use the `--additional-network` argument by running the following command:
[source,terminal]
----
$ hcp create cluster kubevirt \
  --name <hosted_cluster_name> \ <1>
  --node-pool-replicas <worker_node_count> \ <2>
  --pull-secret <path_to_pull_secret> \ <3>
  --memory <memory> \ <4>
  --cores <cpu> \ <5>
  --additional-network name:<namespace/name> \ <6>
  --additional-network name:<namespace/name>
----

<1> Specify the name of your hosted cluster, for instance, `example`.
<2> Specify your worker node count, for example, `2`.
<3> Specify the path to your pull secret, for example, `/user/name/pullsecret`.
<4> Specify the memory value, for example, `8Gi`.
<5> Specify the CPU value, for example, `2`.
<6> Set the value of the `--additional-network` argument to `name:<namespace/name>`. Replace `<namespace/name>` with the namespace and name of your NetworkAttachmentDefinition.
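
For reference, the `name:<namespace/name>` value points at a `NetworkAttachmentDefinition` resource that must already exist. The following is a minimal illustrative sketch, not taken from this procedure; the `my-namespace` and `my-network` names and the bridge CNI configuration are assumptions:

[source,yaml]
----
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: my-network        # referenced as name:my-namespace/my-network
  namespace: my-namespace
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "my-network",
      "type": "bridge",
      "bridge": "br1"
    }
----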

modules/hcp-virt-add-node.adoc

Lines changed: 84 additions & 0 deletions
@@ -0,0 +1,84 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-deploy-virt.adoc

:_mod-docs-content-type: PROCEDURE
[id="hcp-virt-add-node_{context}"]
= Adding node pools

You can create node pools for a hosted cluster by specifying a name, number of replicas, and any additional information, such as memory and CPU requirements.

.Procedure

. To create a node pool, enter the following information. In this example, the node pool has more CPUs assigned to the VMs:
+
[source,terminal]
----
$ export NODEPOOL_NAME=${CLUSTER_NAME}-extra-cpu
$ export WORKER_COUNT="2"
$ export MEM="6Gi"
$ export CPU="4"
$ export DISK="16"

$ hcp create nodepool kubevirt \
  --cluster-name $CLUSTER_NAME \
  --name $NODEPOOL_NAME \
  --node-count $WORKER_COUNT \
  --memory $MEM \
  --cores $CPU \
  --root-volume-size $DISK
----

. Check the status of the node pool by listing `nodepool` resources in the `clusters` namespace:
+
[source,terminal]
----
$ oc get nodepools --namespace clusters
----
+
.Example output
[source,terminal]
----
NAME                CLUSTER   DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
example             example   5               5               False         False        4.x.0
example-extra-cpu   example   2                               False         False                  True              True             Minimum availability requires 2 replicas, current 0 available
----
+
Replace `4.x.0` with the supported {product-title} version that you want to use.

. After some time, you can check the status of the node pool by entering the following command:
+
[source,terminal]
----
$ oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes
----
+
.Example output
[source,terminal]
----
NAME                      STATUS   ROLES    AGE     VERSION
example-9jvnf             Ready    worker   97s     v1.27.4+18eadca
example-n6prw             Ready    worker   116m    v1.27.4+18eadca
example-nc6g4             Ready    worker   117m    v1.27.4+18eadca
example-thp29             Ready    worker   4m17s   v1.27.4+18eadca
example-twxns             Ready    worker   88s     v1.27.4+18eadca
example-extra-cpu-zh9l5   Ready    worker   2m6s    v1.27.4+18eadca
example-extra-cpu-zr8mj   Ready    worker   102s    v1.27.4+18eadca
----

. Verify that the node pool is in the status that you expect by entering this command:
+
[source,terminal]
----
$ oc get nodepools --namespace clusters
----
+
.Example output
[source,terminal]
----
NAME                CLUSTER   DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
example             example   5               5               False         False        4.x.0
example-extra-cpu   example   2               2               False         False        4.x.0
----
+
Replace `4.x.0` with the supported {product-title} version that you want to use.

modules/hcp-virt-addl-network.adoc

Lines changed: 33 additions & 0 deletions
@@ -0,0 +1,33 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-deploy-virt.adoc

:_mod-docs-content-type: PROCEDURE
[id="hcp-virt-addl-network_{context}"]
= Using an additional network as default

You can add your additional network as a default network for the nodes by disabling the default pod network.

.Procedure

* To add an additional network as default to your nodes, run the following command:
+
[source,terminal]
----
$ hcp create cluster kubevirt \
  --name <hosted_cluster_name> \ <1>
  --node-pool-replicas <worker_node_count> \ <2>
  --pull-secret <path_to_pull_secret> \ <3>
  --memory <memory> \ <4>
  --cores <cpu> \ <5>
  --attach-default-network false \ <6>
  --additional-network name:<namespace>/<network_name> <7>
----
+
<1> Specify the name of your hosted cluster, for instance, `example`.
<2> Specify your worker node count, for example, `2`.
<3> Specify the path to your pull secret, for example, `/user/name/pullsecret`.
<4> Specify the memory value, for example, `8Gi`.
<5> Specify the CPU value, for example, `2`.
<6> The `--attach-default-network false` argument disables the default pod network.
<7> Specify the additional network that you want to add to your nodes, for example, `name:my-namespace/my-network`.
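
Before you run the command, you can confirm that the `NetworkAttachmentDefinition` that you plan to reference exists. This check is a sketch and assumes the illustrative `my-namespace` namespace from the example above:

[source,terminal]
----
$ oc get network-attachment-definitions -n my-namespace
----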
