Commit 4cb1af8

Merge pull request #92314 from sr1kar99/2122-two-node-arbiter
TELCODOCS#2122: Configuring a local arbiter node
2 parents 3b8940e + e8e00a2 commit 4cb1af8

4 files changed: +130 −0 lines changed
installing/installing_bare_metal/ipi/ipi-install-installation-workflow.adoc

Lines changed: 12 additions & 0 deletions
@@ -57,6 +57,18 @@ include::modules/nw-osp-configuring-external-load-balancer.adoc[leveloffset=+2]
 // Setting the cluster node hostnames through DHCP
 include::modules/ipi-install-setting-cluster-node-hostnames-dhcp.adoc[leveloffset=+1]
 
+// Configuring a local arbiter node
+include::modules/ipi-install-config-local-arbiter-node.adoc[leveloffset=+1]
+
+.Next steps
+
+* xref:../../../installing/installing_bare_metal/ipi/ipi-install-installing-a-cluster.adoc#ipi-install-installing-a-cluster[Installing a cluster]
+
+[role="_additional-resources"]
+.Additional resources
+
+* xref:../../../nodes/clusters/nodes-cluster-enabling-features.adoc#nodes-cluster-enabling-features-about_nodes-cluster-enabling[Understanding feature gates]
+
 [id="ipi-install-configuration-files"]
 [id="additional-resources_config"]
 == Configuring the install-config.yaml file

modules/ipi-install-additional-install-config-parameters.adoc

Lines changed: 17 additions & 0 deletions

@@ -112,6 +112,23 @@ controlPlane:
 |
 |Replicas sets the number of control plane nodes included as part of the {product-title} cluster.
 
+a|
+----
+arbiter:
+  name: arbiter
+----
+|
+|The {product-title} cluster requires a name for arbiter nodes.
+
+
+a|
+----
+arbiter:
+  replicas: 1
+----
+|
+|The `replicas` parameter sets the number of arbiter nodes for the {product-title} cluster.
+
 a| `provisioningNetworkInterface` | | The name of the network interface on nodes connected to the provisioning network. For {product-title} 4.9 and later releases, use the `bootMACAddress` configuration setting to enable Ironic to identify the IP address of the NIC instead of using the `provisioningNetworkInterface` configuration setting to identify the name of the NIC.
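The two arbiter parameters documented in the table above sit under the same top-level key; a hedged illustrative fragment of how they combine in `install-config.yaml` (the values mirror the table entries, not a complete configuration):

```yaml
# Illustrative fragment only: the arbiter machine pool stanza
# with both documented parameters together.
arbiter:
  name: arbiter   # required name for the arbiter machine pool
  replicas: 1     # number of arbiter nodes; must be exactly 1
```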
modules/ipi-install-config-local-arbiter-node.adoc

Lines changed: 100 additions & 0 deletions

@@ -0,0 +1,100 @@
// Module included in the following assemblies:
//
// *installing/installing_bare_metal/ipi/ipi-install-installation-workflow.adoc

:_mod-docs-content-type: PROCEDURE
[id="ipi-install-config-local-arbiter-node_{context}"]
= Configuring a local arbiter node

You can configure an {product-title} cluster with two control plane nodes and one local arbiter node to retain high availability (HA) while reducing infrastructure costs for your cluster. This configuration is supported only for bare-metal installations.

:FeatureName: Configuring a local arbiter node
include::snippets/technology-preview.adoc[]

A local arbiter node is a lower-cost, co-located machine that participates in control plane quorum decisions. Unlike a standard control plane node, the arbiter node does not run the full set of control plane services. You can use this configuration to maintain HA in your cluster with only two fully provisioned control plane nodes instead of three.

[IMPORTANT]
====
You can configure a local arbiter node only. Remote arbiter nodes are not supported.
====

To deploy a cluster with two control plane nodes and one local arbiter node, you must define the following nodes in the `install-config.yaml` file:

* 2 control plane nodes
* 1 arbiter node

You must enable the `TechPreviewNoUpgrade` feature set in the `FeatureGate` custom resource (CR) to enable the arbiter node feature.
For more information about feature gates, see "Understanding feature gates".
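During installation, the feature set is carried by the `featureSet` field of `install-config.yaml`, as the example in this procedure shows. On a running cluster, the same setting lives in the cluster-scoped `FeatureGate` CR; a hedged sketch of that resource (the CR is always named `cluster`):

```yaml
# Sketch of the cluster-scoped FeatureGate CR with the
# TechPreviewNoUpgrade feature set enabled. Note that this feature
# set cannot be disabled once applied and blocks cluster upgrades.
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade
```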

The arbiter node must meet the following minimum system requirements:

* 2 threads
* 8 GB of RAM
* 120 GB of SSD or equivalent storage

The arbiter node must be located in a network environment with an end-to-end latency of less than 500 milliseconds, including disk I/O. In high-latency environments, you might need to apply the `etcd` slow profile.
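The `etcd` slow profile mentioned above is applied on a deployed cluster through the etcd operator resource; a hedged sketch, assuming the `operator.openshift.io/v1` `Etcd` resource named `cluster` and its `controlPlaneHardwareSpeed` field:

```yaml
# Sketch: relaxing etcd heartbeat and election timeouts for
# higher-latency environments by selecting the "Slower" profile.
apiVersion: operator.openshift.io/v1
kind: Etcd
metadata:
  name: cluster
spec:
  controlPlaneHardwareSpeed: Slower
```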

The control plane nodes must meet the following minimum system requirements:

* 4 threads
* 16 GB of RAM
* 120 GB of SSD or equivalent storage

Additionally, the control plane nodes must have enough storage for the workload.

.Prerequisites

* You have downloaded the {oc-first} and the installation program.
* You have logged in to the {oc-first}.

.Procedure

. Edit the `install-config.yaml` file to define the arbiter node alongside the control plane nodes.
+
.Example `install-config.yaml` configuration for deploying an arbiter node
[source,yaml]
----
apiVersion: v1
baseDomain: devcluster.openshift.com
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 0
arbiter: <1>
  architecture: amd64
  hyperthreading: Enabled
  replicas: 1 <2>
  name: arbiter <3>
  platform:
    baremetal: {}
controlPlane: <4>
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform:
    baremetal: {}
  replicas: 2 <5>
featureSet: TechPreviewNoUpgrade
platform:
  baremetal:
  # ...
    hosts:
    - name: cluster-master-0
      role: master
    # ...
    - name: cluster-master-1
      role: master
    # ...
    - name: cluster-arbiter-0
      role: arbiter
    # ...
----
<1> Defines the arbiter machine pool. You must configure this field to deploy a cluster with an arbiter node.
<2> Set the `replicas` field to `1` for the arbiter pool. You cannot set this field to a value greater than `1`.
<3> Specifies a name for the arbiter machine pool.
<4> Defines the control plane machine pool.
<5> When an arbiter machine pool is defined, two control plane replicas are valid.

. Save the modified `install-config.yaml` file.
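The constraints in the callouts above (exactly one arbiter replica, two control plane replicas, and the `TechPreviewNoUpgrade` feature set) can be sanity-checked before running the installer. A minimal illustrative sketch; the `validate_arbiter_config` helper is hypothetical and not part of the OpenShift installer, and the dict keys mirror the `install-config.yaml` fields shown above:

```python
# Hypothetical pre-flight check for the arbiter-related constraints
# described in this procedure. Illustrative only.
def validate_arbiter_config(cfg: dict) -> list:
    """Return a list of constraint violations (empty list means valid)."""
    errors = []
    arbiter = cfg.get("arbiter")
    if arbiter is None:
        return errors  # no arbiter pool: the standard 3-node rules apply
    if arbiter.get("replicas") != 1:
        errors.append("arbiter.replicas must be exactly 1")
    if cfg.get("controlPlane", {}).get("replicas") != 2:
        errors.append("controlPlane.replicas must be 2 when an arbiter pool is defined")
    if cfg.get("featureSet") != "TechPreviewNoUpgrade":
        errors.append("featureSet: TechPreviewNoUpgrade is required for the arbiter feature")
    return errors

valid_cfg = {
    "arbiter": {"name": "arbiter", "replicas": 1},
    "controlPlane": {"name": "master", "replicas": 2},
    "featureSet": "TechPreviewNoUpgrade",
}
print(validate_arbiter_config(valid_cfg))  # → []
```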

modules/nodes-cluster-enabling-features-about.adoc

Lines changed: 1 addition & 0 deletions
@@ -28,6 +28,7 @@ The following Technology Preview features are enabled by this feature set:
 ** Pod security admission enforcement. Enables the restricted enforcement mode for pod security admission. Instead of only logging a warning, pods are rejected if they violate pod security standards. (`OpenShiftPodSecurityAdmission`)
 ** StatefulSet pod availability upgrading limits. Enables users to define the maximum number of StatefulSet pods that can be unavailable during updates, which reduces application downtime. (`MaxUnavailableStatefulSet`)
 ** Import mode behavior of image streams. Enables a new API for controlling the import mode behavior of image streams. (`imageStreamImportMode`)
+** Configuring a local arbiter node. You can configure an {product-title} cluster with two control plane nodes and one local arbiter node to retain high availability (HA) while reducing infrastructure costs. This configuration is supported only for bare-metal installations.
 ** The `OVNObservability` resource allows you to verify expected network behavior. Supports the following network APIs: `NetworkPolicy`, `AdminNetworkPolicy`, `BaselineNetworkPolicy`, `UserDefinedNetwork` isolation, multicast ACLs, and egress firewalls. When enabled, you can view network events in the terminal.
 ** `gcpLabelsTags`
 ** `vSphereStaticIPs`
