With {hcp} and {VirtProductName}, you can create {product-title} clusters with worker nodes that are hosted by KubeVirt virtual machines. {hcp-capital} on {VirtProductName} provides several benefits:

* Enhances resource usage by packing {hcp} and hosted clusters in the same underlying bare metal infrastructure
* Separates {hcp} and hosted clusters to provide strong isolation
* Reduces cluster provision time by eliminating the bare metal node bootstrapping process
* Manages many releases under the same base {product-title} cluster

The {hcp} feature is enabled by default.

You can use the hosted control plane command-line interface, `hcp`, to create an {product-title} hosted cluster. The hosted cluster is automatically imported as a managed cluster.

[role="_additional-resources"]
.Additional resources
* xref:../../storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc[Persistent storage using LVM Storage]
* To disable the {hcp} feature, or to manually enable it if you previously disabled it, see _Enabling or disabling the {hcp} feature_. A brief command sketch follows this list.
* To manage hosted clusters by running Ansible Automation Platform jobs, see link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/clusters/cluster_mce_overview#ansible-config-hosted-cluster[Configuring Ansible Automation Platform jobs to run on hosted clusters].
* If you want to disable the automatic import feature, see _Disabling the automatic import of hosted clusters into {mce-short}_.
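
The following command is a minimal sketch of that toggle, not taken from this document: it assumes the default `MultiClusterEngine` resource name, `multiclusterengine`, and the `hypershift` component override from the {mce-short} documentation. Set `enabled` to `true` or `false` as needed:

[source,terminal]
----
$ oc patch mce multiclusterengine --type=merge -p \
  '{"spec":{"overrides":{"components":[{"name":"hypershift","enabled": true}]}}}'
----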

[id="hcp-virt-create-hc"]
== Creating a hosted cluster with the KubeVirt platform

With {product-title} 4.14 and later, you can create a hosted cluster with KubeVirt, including the option to create it on external infrastructure.

* To create credentials that you can reuse when you create a hosted cluster with the console, see link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/clusters/cluster_mce_overview#creating-a-credential-for-an-on-premises-environment[Creating a credential for an on-premises environment].

If you do not want to use the default ingress and DNS behavior, you can configure a KubeVirt hosted cluster with a unique base domain at creation time. This option requires manual configuration steps during creation and involves three main steps: cluster creation, load balancer creation, and wildcard DNS configuration.
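
The following command is a minimal sketch of the cluster creation step, assuming the `--base-domain` argument of the `hcp` CLI; the remaining arguments match the creation example later in this section. The load balancer and wildcard DNS configuration follow as separate steps:

[source,terminal]
----
$ hcp create cluster kubevirt \
  --name <hosted_cluster_name> \
  --node-pool-replicas <worker_node_count> \
  --pull-secret <path_to_pull_secret> \
  --memory <memory> \
  --cores <cpu> \
  --base-domain <basedomain>
----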

* For more information about MetalLB, see xref:../../networking/metallb/metallb-operator-install.adoc#metallb-operator-install[Installing the MetalLB Operator].

[id="hcp-virt-addl-resources"]
== Configuring additional networks, guaranteed CPUs, and VM scheduling for node pools

If you need to configure additional networks for node pools, request guaranteed CPU access for virtual machines (VMs), or manage the scheduling of KubeVirt VMs, see the following procedures. A sketch of the guaranteed CPU request follows this list.

* To scale down the data plane to zero, see link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.15/html/hosted_control_planes/troubleshooting-hosted-control-planes#scale-down-data-plane_hcp-troubleshooting[Scaling down the data plane to zero].
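
For the guaranteed CPU request, the following command is a minimal sketch that assumes the `--qos-class` argument of the `hcp` CLI; setting it to `Guaranteed` requests dedicated CPU resources for the VMs that back the node pool:

[source,terminal]
----
$ hcp create cluster kubevirt \
  --name <hosted_cluster_name> \
  --node-pool-replicas <worker_node_count> \
  --pull-secret <path_to_pull_secret> \
  --memory <memory> \
  --cores <cpu> \
  --qos-class Guaranteed
----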

=== Configuring MetalLB

You must install the MetalLB Operator before you configure MetalLB.
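
The following `Subscription` is a minimal sketch of that installation. The subscription name, channel, and catalog source are assumptions, and the `metallb-system` namespace and an `OperatorGroup` must already exist; see _Installing the MetalLB Operator_ for the authoritative procedure:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator-sub    # assumed name
  namespace: metallb-system
spec:
  channel: stable               # assumed channel
  name: metallb-operator
  source: redhat-operators      # assumed catalog source
  sourceNamespace: openshift-marketplace
----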

.Procedure
Complete the following steps to configure MetalLB on your hosted cluster:

. Create a `MetalLB` resource by saving the following sample YAML content in the `configure-metallb.yaml` file:
+
[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
----

. Apply the YAML content by entering the following command:
+
[source,terminal]
----
$ oc apply -f configure-metallb.yaml
----
+
.Example output
----
metallb.metallb.io/metallb created
----

. Create an `IPAddressPool` resource by saving the following sample YAML content in the `create-ip-address-pool.yaml` file:
+
[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb
  namespace: metallb-system
spec:
  addresses:
  - 192.168.216.32-192.168.216.122 <1>
----
<1> Create an address pool with an available range of IP addresses within the node network. Replace the IP address range with an unused pool of available IP addresses in your network.

. Apply the YAML content by entering the following command:
+
[source,terminal]
----
$ oc apply -f create-ip-address-pool.yaml
----
+
.Example output
----
ipaddresspool.metallb.io/metallb created
----

. Create an `L2Advertisement` resource by saving the following sample YAML content in the `l2advertisement.yaml` file:
+
[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - metallb
----

. Apply the YAML content by entering the following command:
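+
[source,terminal]
----
$ oc apply -f l2advertisement.yaml
----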

=== Adding multiple networks

By default, nodes generated by a node pool are attached to the pod network. You can attach additional networks to the nodes by using Multus and `NetworkAttachmentDefinition` resources.

.Procedure
To add multiple networks to nodes, use the `--additional-network` argument by running the following command:

[source,terminal]
----
$ hcp create cluster kubevirt \
  --name <hosted_cluster_name> \ <1>
  --node-pool-replicas <worker_node_count> \ <2>
  --pull-secret <path_to_pull_secret> \ <3>
  --memory <memory> \ <4>
  --cores <cpu> \ <5>
  --additional-network name:<namespace/name> \ <6>
  --additional-network name:<namespace/name>
----
<1> Specify the name of your hosted cluster, for instance, `example`.
<2> Specify your worker node count, for example, `2`.
<3> Specify the path to your pull secret, for example, `/user/name/pullsecret`.
<4> Specify the memory value, for example, `8Gi`.
<5> Specify the CPU value, for example, `2`.
<6> Set the value of the `--additional-network` argument to `name:<namespace/name>`. Replace `<namespace/name>` with the namespace and name of your `NetworkAttachmentDefinition`, as in the sketch that follows these callouts.
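
The following `NetworkAttachmentDefinition` is an illustrative sketch only; the name, namespace, bridge device, and IPAM type are hypothetical values, not values from this procedure:

[source,yaml]
----
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: example-network # hypothetical name
  namespace: default # hypothetical namespace
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "example-network",
      "type": "bridge",
      "bridge": "br1",
      "ipam": { "type": "dhcp" }
    }
----

With this definition, you would set `--additional-network name:default/example-network`.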

=== Adding node pools

You can create node pools for a hosted cluster by specifying a name, a number of replicas, and any additional information, such as memory and CPU requirements.

.Procedure
. To create a node pool, enter the following commands. In this example, the node pool has more CPUs assigned to the VMs:
+
[source,terminal]
----
export NODEPOOL_NAME=${CLUSTER_NAME}-extra-cpu
export WORKER_COUNT="2"
export MEM="6Gi"
export CPU="4"
export DISK="16"

$ hcp create nodepool kubevirt \
  --cluster-name $CLUSTER_NAME \
  --name $NODEPOOL_NAME \
  --node-count $WORKER_COUNT \
  --memory $MEM \
  --cores $CPU \
  --root-volume-size $DISK
----

. Check the status of the node pool by listing `nodepool` resources in the `clusters` namespace:
+
[source,terminal]
----
$ oc get nodepools --namespace clusters
----
+
.Example output
[source,terminal]
----
NAME                CLUSTER  DESIRED NODES  CURRENT NODES  AUTOSCALING  AUTOREPAIR  VERSION  UPDATINGVERSION  UPDATINGCONFIG  MESSAGE
example             example  5              5              False        False       4.x.0
example-extra-cpu   example  2                             False        False                True             True            Minimum availability requires 2 replicas, current 0 available
----
+
Replace `4.x.0` with the supported {product-title} version that you want to use.

. After some time, you can check the status of the node pool by entering the following command:
+
[source,terminal]
----
$ oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes
----