
Commit c001d8d

Merge branch 'main' into ajp-io-patch-1
2 parents b24e242 + f0e9236 commit c001d8d

File tree: 3 files changed (+79, -95 lines)


docs/enterprise/embedded-manage-nodes.mdx

Lines changed: 34 additions & 26 deletions
@@ -16,6 +16,8 @@ Multi-node clusters with Embedded Cluster have the following limitations:

 * More than one controller node should not be joined at the same time. When joining a controller node, a warning is printed that explains that the user should not attempt to join another node until the controller node joins successfully.

+* Setting node roles with the Embedded Cluster Config [roles](/reference/embedded-config#roles) key is Beta.
+
 ## Add Nodes to a Cluster (Beta) {#add-nodes}

 You can add nodes to create a multi-node cluster in online (internet-connected) and air-gapped (limited or no outbound internet access) environments. The Admin Console provides the join command that you use to join nodes to the cluster.
@@ -50,11 +52,7 @@ To add nodes to a cluster:

 * If the Embedded Cluster Config [roles](/reference/embedded-config#roles) key is not configured, all new nodes joined to the cluster are assigned the `controller` role by default. The `controller` role designates nodes that run the Kubernetes control plane. Controller nodes can also run other workloads, such as application or Replicated KOTS workloads.

-* Roles are not updated or changed after a node is added. If you need to change a node's role, reset the node and add it again with the new role.
-
-* For multi-node clusters with high availability (HA), at least three `controller` nodes are required. You can assign both the `controller` role and one or more `custom` roles to the same node. For more information about creating HA clusters with Embedded Cluster, see [Enable High Availability for Multi-Node Clusters (Alpha)](#ha) below.
-
-* To add non-controller or _worker_ nodes that do not run the Kubernetes control plane, select one or more `custom` roles for the node and deselect the `controller` role.
+* The role cannot be changed after a node is added. If you need to change a node's role, reset the node and add it again with the new role.

 1. Do one of the following to make the Embedded Cluster installation assets available on the machine that you will join to the cluster:
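As a rough sketch of the reset-and-rejoin flow referenced in the bullet above (not part of this commit; the join address and token reuse the placeholder values used elsewhere in this doc, and the exact commands should be checked against the installed Embedded Cluster version):

```bash
# On the node whose role needs to change, reset it so it leaves the cluster.
sudo ./APP_SLUG reset

# Then rejoin it with a fresh join command from the Admin Console,
# selecting the desired role(s) for the node when generating the command.
sudo ./APP_SLUG join 10.128.0.80:30000 tI13KUWITdIerfdMcWTA4Hpf
```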

@@ -83,14 +81,16 @@ To add nodes to a cluster:

 1. Repeat these steps for each node you want to add.

-## Enable High Availability for Multi-Node Clusters (Alpha) {#ha}
-
-Multi-node clusters are not highly available by default. The first node of the cluster is special and holds important data for Kubernetes and KOTS, such that the loss of this node would be catastrophic for the cluster. Enabling high availability (HA) requires that at least three controller nodes are present in the cluster. Users can enable HA when joining the third node.
+## High Availability for Multi-Node Clusters (Alpha) {#ha}

 :::important
 High availability for Embedded Cluster is an Alpha feature. This feature is subject to change, including breaking changes. For more information about this feature, reach out to Alex Parker at [alexp@replicated.com](mailto:alexp@replicated.com).
 :::

+Multi-node clusters are not highly available by default. The first node of the cluster holds important data for Kubernetes and KOTS, such that the loss of this node would be catastrophic for the cluster. Enabling high availability requires that at least three controller nodes are present in the cluster.
+
+Users are automatically prompted to enable HA when joining the third controller node to a cluster. Alternatively, users can enable HA with the `enable-ha` command after adding three or more controller nodes.
+
 ### HA Architecture

 <HaArchitecture/>
@@ -101,19 +101,11 @@ For more information about the Embedded Cluster built-in extensions, see [Built-

 Enabling high availability has the following requirements:

-* High availability is supported with Embedded Cluster 1.4.1 or later.
-
-* High availability is supported only for clusters where at least three nodes with the `controller` role are present.
+* High availability is supported with Embedded Cluster 1.4.1 and later.

-### Limitations
+* The [`enable-ha`](#enable-ha-existing) command is available with Embedded Cluster 2.3.0 and later.

-Enabling high availability has the following limitations:
-
-* High availability for Embedded Cluster is an Alpha feature. This feature is subject to change, including breaking changes. For more information about this feature, reach out to Alex Parker at [alexp@replicated.com](mailto:alexp@replicated.com).
-
-* The `--enable-ha` flag serves as a feature flag during the Alpha phase. In the future, the prompt about migrating to high availability will display automatically if the cluster is not yet HA and you are adding the third or more controller node.
-
-* HA multi-node clusters use rqlite to store support bundles up to 100 MB in size. Bundles over 100 MB can cause rqlite to crash and restart.
+* High availability is supported only for clusters where at least three nodes with the `controller` role are present.

 ### Best Practices for High Availability

@@ -125,23 +117,39 @@ Consider the following best practices and recommendations for creating HA cluste

 * You can have any number of _worker_ nodes in HA clusters. Worker nodes do not run the Kubernetes control plane, but can run workloads such as application or Replicated KOTS workloads.

-### Create a Multi-Node HA Cluster
+### Create a Multi-Node Cluster with High Availability {#create-ha}
+
+You can enable high availability for a multi-node cluster when joining the third controller node. Alternatively, you can enable HA for an existing cluster with three or more controller nodes. For more information, see [Enable High Availability For an Existing Cluster](#enable-ha-existing) below.

 To create a multi-node HA cluster:

 1. Set up a cluster with at least two controller nodes. You can do an online (internet-connected) or air gap installation. For more information, see [Online Installation with Embedded Cluster](/enterprise/installing-embedded) or [Air Gap Installation with Embedded Cluster](/enterprise/installing-embedded-air-gap).

 1. SSH onto a third node that you want to join to the cluster as a controller.

-1. Run the join command provided in the Admin Console **Cluster Management** tab and pass the `--enable-ha` flag. For example:
+1. On the third node, run the join command provided in the Admin Console **Cluster Management** tab.
+
+   **Example:**

    ```bash
-   sudo ./APP_SLUG join --enable-ha 10.128.0.80:30000 tI13KUWITdIerfdMcWTA4Hpf
+   sudo ./APP_SLUG join 10.128.0.80:30000 tI13KUWITdIerfdMcWTA4Hpf
    ```
+   Where `APP_SLUG` is the unique slug for the application.
+
+   :::note
+   For Embedded Cluster versions earlier than 2.3.0, pass the `--enable-ha` flag with the `join` command.
+   :::

+1. In response to the prompt asking if you want to enable high availability, type `y` or `yes`.
+
+1. Wait for the migration to HA to complete.
+
+### Enable High Availability For an Existing Cluster {#enable-ha-existing}

-1. After the third node joins the cluster, type `y` in response to the prompt asking if you want to enable high availability.
+To enable high availability for an existing Embedded Cluster installation with three or more controller nodes, run the following command:

-   ![high availability command line prompt](/images/embedded-cluster-ha-prompt.png)
-   [View a larger version of this image](/images/embedded-cluster-ha-prompt.png)
+```bash
+sudo ./APP_SLUG enable-ha
+```

-1. Wait for the migration to complete.
+Where `APP_SLUG` is the unique slug for the application.
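Read together, the revised steps above reduce to roughly the following, assuming Embedded Cluster 2.3.0 or later and the placeholder values from the doc's own example:

```bash
# Join each additional controller with the command copied from the
# Admin Console Cluster Management tab (address and token are placeholders).
sudo ./APP_SLUG join 10.128.0.80:30000 tI13KUWITdIerfdMcWTA4Hpf

# When the third controller joins, answer the HA prompt with "y", or enable
# HA later on a cluster that already has three or more controllers:
sudo ./APP_SLUG enable-ha
```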

docs/reference/embedded-config.mdx

Lines changed: 45 additions & 69 deletions
@@ -22,17 +22,20 @@ kind: Config
 spec:
   version: 2.1.3+k8s-1.30
   roles:
-    controller:
-      name: management
-      labels:
-        management: "true"
+    controller:
+      name: app
+      labels:
+        app: "true"
     custom:
-    - name: app
+    - name: gpu
+      labels:
+        gpu: "true"
+    - name: database
       labels:
-        app: "true"
-  domains:
-    proxyRegistryDomain: proxy.yourcompany.com
-    replicatedAppDomain: updates.yourcompany.com
+        database: "true"
+  domains:
+    proxyRegistryDomain: proxy.yourcompany.com
+    replicatedAppDomain: updates.yourcompany.com
   extensions:
     helm:
       repositories:
@@ -68,29 +71,20 @@ You must specify which version of Embedded Cluster to install. Each version of E

 For a full list of versions, see the Embedded Cluster [releases page](https://github.com/replicatedhq/embedded-cluster/releases) in GitHub. It's recommended to keep this version as up to date as possible because Embedded Cluster is changing rapidly.

-## roles
+## roles (Beta)
+
+:::note
+Support for setting node roles is Beta.
+:::

 You can optionally customize node roles in the Embedded Cluster Config using the `roles` key.

-If the `roles` key is configured, users select one or more roles to assign to a node when it is joined to the cluster. A single node can be assigned:
-* The `controller` role, which designates nodes that run the Kubernetes control plane
-* One or more `custom` roles
-* Both the `controller` role _and_ one or more `custom` roles
+A common use case for customizing node roles is to assign workloads to specific nodes. For example, if your application has graphics processing unit (GPU) workloads, you could create a `custom` role that will add a `gpu=true` label to any node that is assigned the role. This allows you to then schedule GPU workloads on nodes labeled `gpu=true`.

-For more information about how to assign node roles in the Admin Console, see [Manage Multi-Node Clusters with Embedded Cluster](/enterprise/embedded-manage-nodes).
+When the `roles` key is configured, users select one or more roles to assign to a node when it is joined to the cluster. For more information, see [Managing Multi-Node Clusters with Embedded Cluster](/enterprise/embedded-manage-nodes).

 If the `roles` key is _not_ configured, all nodes joined to the cluster are assigned the `controller` role. The `controller` role designates nodes that run the Kubernetes control plane. Controller nodes can also run other workloads, such as application or Replicated KOTS workloads.

-For more information, see the sections below.
-
-### controller
-
-By default, all nodes joined to a cluster are assigned the `controller` role.
-
-You can customize the `controller` role in the following ways:
-* Change the `name` that is assigned to controller nodes. By default, controller nodes are named “controller”. If you plan to create any `custom` roles, Replicated recommends that you change the default name for the `controller` role to a term that is easy to understand, such as "management". This is because, when you add `custom` roles, both the name of the `controller` role and the names of any `custom` roles are displayed to the user when they join a node.
-* Add one or more `labels` to be assigned to all controller nodes. See [labels](#labels).
-
 #### Example

 ```yaml
@@ -99,45 +93,11 @@ kind: Config
 spec:
   roles:
     controller:
-      name: management
-      labels:
-        management: "true" # Label applied to "management" nodes
-```
-
-### custom
-
-You can add `custom` roles that users can assign to one or more nodes in the cluster. Each `custom` role that you add must have a `name` and can also have one or more `labels`. See [labels](#labels).
-
-Adding `custom` node roles is useful if you need to assign application workloads to specific nodes in multi-node clusters. For example, if your application has graphics processing unit (GPU) workloads, you could create a `custom` role that will add a `gpu=true` label to any node that is assigned the role. This allows you to then schedule GPU workloads on nodes labeled `gpu=true`. Or, if your application includes any resource-intensive workloads (such as a database) that must be run on dedicated nodes, you could create a `custom` role that adds a `db=true` label to the node. This way, the database workload could be assigned to a certain node or nodes.
-
-#### Example
-
-```yaml
-apiVersion: embeddedcluster.replicated.com/v1beta1
-kind: Config
-spec:
-  roles:
-    custom:
-    - name: app
+      # Optionally change the name for the default controller role
+      name: app
       labels:
         app: "true" # Label applied to "app" nodes
-```
-
-### labels
-
-You can define Kubernetes labels for the default `controller` role and any `custom` roles that you add. When `labels` are defined, Embedded Cluster applies the label to any node in the cluster that is assigned the given role. Labels are useful for tasks like assigning workloads to nodes.
-
-#### Example
-
-```yaml
-apiVersion: embeddedcluster.replicated.com/v1beta1
-kind: Config
-spec:
-  roles:
-    controller:
-      name: management
-      labels:
-        management: "true" # Label applied to "management" nodes
+    # Custom roles
     custom:
     - name: db
       labels:
@@ -147,6 +107,22 @@ spec:
         gpu: "true" # Label applied to "gpu" nodes
 ```

+### roles.controller
+
+In the `roles.controller` key, you can set the following fields to customize the default controller role:
+* `name`: Set the name that is assigned to controller nodes. By default, controller nodes are named “controller”.
+  :::note
+  If you plan to create any custom roles, Replicated recommends that you change the default name for the controller role to a term that is easy to understand, such as "app". This is because, when you add custom roles, both the name of the controller role and the names of any custom roles are displayed to the user when they join a node.
+  :::
+* `labels`: Kubernetes labels that Embedded Cluster will apply to any node in the cluster that is assigned the given role.
+
+### roles.custom
+
+In the `roles.custom` key, you can add custom roles. Each custom role includes the following fields:
+* `name`: (Required) A name for the custom role.
+* `labels`: Kubernetes labels that Embedded Cluster will apply to any node in the cluster that is assigned the given role.
+
 ## domains

 Configure the `domains` key so that Embedded Cluster uses your custom domains for the Replicated proxy registry and Replicated app service.
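The role `labels` described in the hunk above are ordinary Kubernetes node labels, so workloads can target them with standard scheduling. As an illustrative sketch that is not part of this commit (the Deployment name and image are hypothetical), a GPU workload could pin itself to nodes that joined with the `gpu` custom role:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpu-worker                 # Hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gpu-worker
  template:
    metadata:
      labels:
        app: gpu-worker
    spec:
      nodeSelector:
        gpu: "true"                # Matches the label applied by the "gpu" custom role
      containers:
      - name: worker
        image: registry.example.com/gpu-worker:latest   # Placeholder image
```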
@@ -178,13 +154,7 @@ Helm extensions are updated when new versions of your application are deployed f

 The format for specifying Helm extensions uses the same k0s Helm extensions format from the k0s configuration. For more information about these fields, see the [k0s documentation](https://docs.k0sproject.io/stable/helm-charts/#example).

-### Requirements
-
-* The `version` field is required. Failing to specify a chart version will cause problems for upgrades.
-
-* If you need to install multiple charts in a particular order, set the `order` field to a value greater than or equal to 10. Numbers below 10 are reserved for use by Embedded Cluster to deploy things like a storage provider and the Admin Console. If an `order` is not provided, Helm extensions are installed with order 10.
-
-### Example
+#### Example

 ```yaml
 apiVersion: embeddedcluster.replicated.com/v1beta1
@@ -219,6 +189,12 @@ spec:
 digest: ""
 ```

+### Requirements
+
+* The `version` field is required. Failing to specify a chart version will cause problems for upgrades.
+
+* If you need to install multiple charts in a particular order, set the `order` field to a value greater than or equal to 10. Numbers below 10 are reserved for use by Embedded Cluster to deploy things like a storage provider and the Admin Console. If an `order` is not provided, Helm extensions are installed with order 10.
+
 ## unsupportedOverrides

 :::important
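To illustrate the `order` requirement relocated in the hunk above: charts that must install in sequence can use values of 10 or higher, since numbers below 10 are reserved for Embedded Cluster itself. A minimal sketch in the k0s Helm extensions format, with hypothetical chart names and repository:

```yaml
apiVersion: embeddedcluster.replicated.com/v1beta1
kind: Config
spec:
  extensions:
    helm:
      repositories:
      - name: example-repo                     # Hypothetical repository
        url: https://charts.example.com
      charts:
      - name: ingress-controller
        chartname: example-repo/ingress-controller
        version: "1.2.3"                       # version is required
        namespace: ingress
        order: 10                              # Installed first
      - name: sample-app
        chartname: example-repo/sample-app
        version: "4.5.6"
        namespace: sample-app
        order: 11                              # Installed after the ingress controller
```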
Binary file changed (-266 KB): binary file not shown.
