
Commit e2ecbaf

Merge pull request #81800 from xJustin/OSDOCS-11826-classic-249-scale
OSDOCS-11826 ROSA/OSD increase to 249 nodes

2 parents: bc1855b + 3bfd1da

File tree: 6 files changed (+14 lines, -8 lines)

cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-getting-started-what-is-rosa.adoc

Lines changed: 1 addition & 1 deletion
@@ -60,7 +60,7 @@ For a complete list of supported instances for worker nodes see xref:../../rosa_
 Autoscaling allows you to automatically adjust the size of the cluster based on the current workload. See xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-about-autoscaling-nodes.adoc#rosa-nodes-about-autoscaling-nodes[About autoscaling nodes on a cluster] for more details.
 
 === Maximum number of worker nodes
-The maximum number of worker nodes is 180 worker nodes for each ROSA cluster. See xref:../../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[limits and scalability] for more details on node counts.
+The maximum number of worker nodes in ROSA clusters versions 4.14.14 and later is 249. For earlier versions, the limit is 180 nodes. See xref:../../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[limits and scalability] for more details on node counts.
 
 A list of the account-wide and per-cluster roles is provided in the xref:../../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies-creation-methods_rosa-sts-about-iam-resources[ROSA documentation].
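The version-gated limit introduced by this change can be sketched as a small helper. This is an illustrative sketch only: the function `max_worker_nodes` and its version-string parsing are assumptions for demonstration, not part of ROSA or any Red Hat tooling; only the limits themselves (249 for 4.14.14 and later, 180 for earlier versions) come from the documentation above.

```python
# Hypothetical helper illustrating the documented ROSA Classic worker node
# limits: 249 for cluster versions 4.14.14 and later, 180 for earlier versions.
# The function name and parsing approach are illustrative, not product code.

def max_worker_nodes(version: str) -> int:
    """Return the documented maximum worker node count for a cluster version.

    Versions are compared numerically component by component, so "4.14.2"
    correctly sorts below "4.14.14".
    """
    parts = tuple(int(p) for p in version.split("."))
    return 249 if parts >= (4, 14, 14) else 180

print(max_worker_nodes("4.14.14"))  # 249
print(max_worker_nodes("4.13.9"))   # 180
```

Tuple comparison on integer components avoids the classic string-comparison pitfall where "4.14.2" would sort above "4.14.14".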

modules/rosa-sts-interactive-cluster-creation-mode-options.adoc

Lines changed: 1 addition & 1 deletion
@@ -105,7 +105,7 @@ Tags that are added by Red{nbsp}Hat are required for clusters to stay in complia
 |Select the additional custom security group IDs that are used with the control plane nodes created along side the cluster. The default is none selected. Only security groups associated with the selected VPC are displayed. You can select a maximum of 5 additional security groups.
 
 |`Compute nodes`
-|Specify the number of compute nodes to provision into each availability zone. Clusters deployed in a single availability zone require at least 2 nodes. Clusters deployed in multiple zones must have at least 3 nodes. The maximum number of worker nodes is 180 nodes. The default value is `2`.
+|Specify the number of compute nodes to provision into each availability zone. Clusters deployed in a single availability zone require at least 2 nodes. Clusters deployed in multiple zones must have at least 3 nodes. The maximum number of worker nodes is 249 nodes. The default value is `2`.
 
 |`Default machine pool labels (optional)`
 |Specify the labels for the default machine pool. The label format should be a comma-separated list of key-value pairs. This list will overwrite any modifications made to node labels on an ongoing basis.

modules/sd-planning-cluster-maximums.adoc

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ ifdef::openshift-dedicated[]
 endif::[]
 cluster.
 
-These guidelines are based on a cluster of 180 compute (also known as worker) nodes in a multiple availability zone configuration. For smaller clusters, the maximums are lower.
+These guidelines are based on a cluster of 249 compute (also known as worker) nodes in a multiple availability zone configuration. For smaller clusters, the maximums are lower.
 
 
 .Tested cluster maximums

modules/sd-planning-considerations.adoc

Lines changed: 5 additions & 5 deletions
@@ -43,7 +43,7 @@ endif::[]
 |m5.4xlarge
 |r5.2xlarge
 
-|101 to 180
+|101 to 249
 |m5.8xlarge
 |r5.4xlarge
 |===
@@ -65,7 +65,7 @@ GCP control plane and infrastructure node size:
 |custom-16-65536
 |custom-8-65536-ext
 
-|101 to 180
+|101 to 249
 |custom-32-131072
 |custom-16-131072-ext
 |===
@@ -85,7 +85,7 @@ GCP control plane and infrastructure node size for clusters created on or after
 |n2-standard-16
 |n2-highmem-8
 
-|101 to 180
+|101 to 249
 |n2-standard-32
 |n2-highmem-16
 |===
@@ -101,7 +101,7 @@ endif::[]
 ifdef::openshift-dedicated[]
 {product-title}
 endif::[]
-is 180.
+clusters version 4.14.14 and later is 249. For earlier versions, the limit is 180.
 ====
 
 [id="node-scaling-after-installation_{context}"]
@@ -145,7 +145,7 @@ endif::[]
 ifdef::openshift-dedicated[]
 {product-title}
 endif::[]
-is 180.
+cluster versions 4.14.14 and later is 249. For earlier versions, the limit is 180.
 
 The resizing alerts only appear after sustained periods of high utilization. Short usage spikes, such as a node temporarily going down causing the other node to scale up, do not trigger these alerts.
 ====
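The AWS sizing tiers touched by this change can be expressed as a simple lookup. This is an illustrative sketch, not product code: the helper name `aws_node_sizes` is hypothetical, only the two tiers visible in the diff are included (the full documentation table has more), and the upper bound of 100 for the smaller tier is an assumption inferred from the "101 to 249" range that follows it.

```python
# Hypothetical lookup mirroring the two AWS sizing tiers visible in the diff:
# worker node count -> (control plane instance type, infrastructure node type).
# The 100-node boundary is inferred from the "101 to 249" tier; other tiers
# from the full documentation table are omitted here.

AWS_SIZING_TIERS = [
    # (tier upper bound, control plane type, infrastructure type)
    (100, "m5.4xlarge", "r5.2xlarge"),   # tier shown as context in the diff
    (249, "m5.8xlarge", "r5.4xlarge"),   # "101 to 249" after this change
]

def aws_node_sizes(worker_nodes: int) -> tuple:
    """Return the documented (control plane, infrastructure) instance types
    for a given worker node count, per the tiers above."""
    for upper_bound, control_plane, infra in AWS_SIZING_TIERS:
        if worker_nodes <= upper_bound:
            return (control_plane, infra)
    raise ValueError(f"{worker_nodes} exceeds the documented 249-node maximum")

print(aws_node_sizes(150))  # ('m5.8xlarge', 'r5.4xlarge')
```

An ordered list of (bound, sizes) pairs keeps the tier boundaries in one place, so raising the top tier from 180 to 249, as this commit does, is a one-entry change.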

osd_whats_new/osd-whats-new.adoc

Lines changed: 3 additions & 0 deletions
@@ -17,6 +17,9 @@ With its foundation in Kubernetes, {product-title} is a complete {OCP} cluster p
 
 [id="osd-q1-2025_{context}"]
 === Q1 2025
+
+* **Cluster node limit update.** {product-title} clusters versions 4.14.14 and greater can now scale to 249 worker nodes. This is an increase from the previous limit of 180 nodes. For more information, see xref:../osd_planning/osd-limits-scalability.adoc#osd-limits-scalability[limits and scalability].
+
 * **Red{nbsp}Hat SRE log-based alerting endpoints have been updated.** {product-title} customers who are using a firewall to control egress traffic can now remove all references to `*.osdsecuritylogs.splunkcloud.com:9997` from your firewall allowlist. {product-title} clusters still require the `http-inputs-osdsecuritylogs.splunkcloud.com:443` log-based alerting endpoint to be accessible from the cluster.
 
 [id="osd-q4-2024_{context}"]

rosa_release_notes/rosa-release-notes.adoc

Lines changed: 3 additions & 0 deletions
@@ -22,6 +22,8 @@ endif::openshift-rosa-hcp[]
 
 // These notes need to be duplicated until the ROSA with HCP split out is completed.
 ifdef::openshift-rosa[]
+* **{rosa-classic} cluster node limit update.** {rosa-classic} clusters versions 4.14.14 and greater can now scale to 249 worker nodes. This is an increase from the previous limit of 180 nodes. For more information, see xref:../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[Limits and scalability].
+
 [IMPORTANT]
 ====
 Egress lockdown is a Technology Preview feature.
@@ -40,6 +42,7 @@ Egress lockdown is a Technology Preview feature.
 * **Egress lockdown is now available as a Technology Preview on {product-title} clusters.** You can create a fully operational cluster that does not require a public egress by configuring a virtual private cloud (VPC) and using the `--properties zero_egress:true` flag when creating your cluster. For more information, see xref:../rosa_hcp/rosa-hcp-egress-lockdown-install.adoc#rosa-hcp-egress-lockdown-install[Creating a {product-title} cluster with egress lockdown].
 endif::openshift-rosa-hcp[]
 ifdef::openshift-rosa[]
+
 [id="rosa-q4-2024_{context}"]
 === Q4 2024
 