Commit 16db23b

OSDOCS-11640 HCP 250 node scale

1 parent 25b8a58 commit 16db23b

File tree

7 files changed: +87 additions, -7 deletions


_topic_maps/_topic_map_rosa.yml

Lines changed: 2 additions & 0 deletions

@@ -238,6 +238,8 @@ Topics:
   File: rosa-sts-ocm-role
 - Name: Limits and scalability
   File: rosa-limits-scalability
+- Name: ROSA with HCP limits and scalability
+  File: rosa-hcp-limits-scalability
 - Name: Planning your environment
   File: rosa-planning-environment
 - Name: Required AWS service quotas

_topic_maps/_topic_map_rosa_hcp.yml

Lines changed: 2 additions & 0 deletions

@@ -201,6 +201,8 @@ Topics:
 #   File: rosa-sts-ocm-role
 # - Name: Limits and scalability
 #   File: rosa-limits-scalability
+#- Name: ROSA with HCP limits and scalability
+#  File: rosa-hcp-limits-scalability
 # - Name: Planning your environment
 #   File: rosa-planning-environment
 # - Name: Required AWS service quotas

modules/rosa-sdpolicy-instance-types.adoc

Lines changed: 1 addition & 1 deletion

@@ -13,7 +13,7 @@ endif::[]
 = Instance types

 ifdef::rosa-with-hcp[]
-All {hcp-title} clusters require a minimum of 2 worker nodes. All {hcp-title} clusters support a maximum of 180 worker nodes. Shutting down the underlying infrastructure through the cloud provider console is unsupported and can lead to data loss.
+All {hcp-title} clusters require a minimum of 2 worker nodes. All {hcp-title} clusters support a maximum of 250 worker nodes. Shutting down the underlying infrastructure through the cloud provider console is unsupported and can lead to data loss.
 endif::rosa-with-hcp[]
 ifndef::rosa-with-hcp[]
 Single availability zone clusters require a minimum of 3 control plane nodes, 2 infrastructure nodes, and 2 worker nodes deployed to a single availability zone.
modules/sd-hcp-planning-cluster-maximums.adoc

Lines changed: 52 additions & 0 deletions
@@ -0,0 +1,52 @@
+:_mod-docs-content-type: CONCEPT
+// Module included in the following assemblies:
+//
+// * rosa_planning/rosa-hcp-limits-scalability.adoc
+
+[id="tested-cluster-maximums-hcp-sd_{context}"]
+= {hcp-title} cluster maximums
+
+Consider the following tested object maximums when you plan a {hcp-title-first} cluster installation. The table specifies the maximum limits for each tested type in a {hcp-title} cluster.
+
+These guidelines are based on a cluster of 250 compute (also known as worker) nodes. For smaller clusters, the maximums are lower.
+
+
+.Tested cluster maximums
+[options="header",cols="50,50"]
+|===
+|Maximum type |4.x tested maximum
+
+|Number of pods ^[1]^
+|25,000
+
+|Number of pods per node
+|250
+
+|Number of pods per core
+|There is no default value
+
+|Number of namespaces ^[2]^
+|5,000
+
+|Number of pods per namespace ^[3]^
+|25,000
+
+|Number of services ^[4]^
+|10,000
+
+|Number of services per namespace
+|5,000
+
+|Number of back ends per service
+|5,000
+
+|Number of deployments per namespace ^[3]^
+|2,000
+|===
+[.small]
+--
+1. The pod count displayed here is the number of test pods. The actual number of pods depends on the memory, CPU, and storage requirements of the application.
+2. When there are a large number of active projects, etcd can suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to make etcd storage available.
+3. There are several control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a type, in a single namespace, can make those loops expensive and slow down processing the state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements.
+4. Each service port and each service back end has a corresponding entry in `iptables`. The number of back ends of a given service impacts the size of the endpoints objects, which then impacts the size of data sent throughout the system.
+--
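The tested maximums in the module above lend themselves to a quick planning sanity check. The following Python sketch is purely illustrative (the dictionary keys and the `check_plan` helper are invented for this example and are not part of any ROSA or OpenShift tooling); it encodes the table's values and flags planned object counts that exceed a tested maximum:

```python
# Tested cluster maximums from the table above (250-worker-node basis).
# Keys and helper are hypothetical, for planning illustration only.
TESTED_MAXIMUMS = {
    "pods": 25_000,
    "pods_per_node": 250,
    "namespaces": 5_000,
    "pods_per_namespace": 25_000,
    "services": 10_000,
    "services_per_namespace": 5_000,
    "backends_per_service": 5_000,
    "deployments_per_namespace": 2_000,
}

def check_plan(plan: dict) -> list:
    """Return the plan keys whose values exceed the tested maximums."""
    return [key for key, value in plan.items()
            if key in TESTED_MAXIMUMS and value > TESTED_MAXIMUMS[key]]

# A planned workload of 30,000 pods exceeds the 25,000 tested pod maximum,
# while 8,000 services stays under the 10,000 tested service maximum.
print(check_plan({"pods": 30_000, "services": 8_000}))  # ['pods']
```

Remember that these are tested maximums, not hard quotas: as the module notes, actual capacity depends on the memory, CPU, and storage the application requires.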

rosa_architecture/rosa_policy_service_definition/rosa-hcp-instance-types.adoc

Lines changed: 5 additions & 5 deletions

@@ -7,15 +7,15 @@ include::_attributes/attributes-openshift-dedicated.adoc[]
 toc::[]
 {hcp-title} offers the following worker node instance types and sizes:

-include::modules/rosa-sdpolicy-am-aws-compute-types.adoc[leveloffset=+1]
-
-include::modules/rosa-sdpolicy-am-aws-compute-types-graviton.adoc[leveloffset=+1]
-
 [NOTE]
 ====
-Currently, {hcp-title} supports a maximum of 180 worker nodes.
+Currently, {hcp-title} supports a maximum of 250 worker nodes.
 ====

+include::modules/rosa-sdpolicy-am-aws-compute-types.adoc[leveloffset=+1]
+
+include::modules/rosa-sdpolicy-am-aws-compute-types-graviton.adoc[leveloffset=+1]
+
 [role="_additional-resources"]
 .Additional Resources

rosa_planning/rosa-hcp-limits-scalability.adoc

Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
+:_mod-docs-content-type: ASSEMBLY
+include::_attributes/attributes-openshift-dedicated.adoc[]
+
+[id="rosa-hcp-limits-scalability"]
+= {hcp-title} limits and scalability
+:context: rosa-hcp-limits-scalability
+
+toc::[]
+
+This document details the tested cluster maximums for {hcp-title-first} clusters, along with information about the test environment and configuration used to test the maximums. For {hcp-title} clusters, the control plane is fully managed in the service AWS account and will automatically scale with the cluster.
+
+include::modules/sd-hcp-planning-cluster-maximums.adoc[leveloffset=+1]
+
+
+[id="next-steps_configuring-alert-notifications-hcp"]
+== Next steps
+
+* xref:../rosa_planning/rosa-planning-environment.adoc#rosa-planning-environment[Planning your environment]
+
+[role="_additional-resources"]
+[id="additional-resources_rosa-hcp-limits-scalability"]
+== Additional resources
+
+* xref:../rosa_cluster_admin/rosa-cluster-notifications.adoc#managed-cluster-notification-view-hcc_rosa-cluster-notifications[Viewing cluster notifications using the {hybrid-console}]

rosa_release_notes/rosa-release-notes.adoc

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@ toc::[]
 [id="rosa-q3-2024_{context}"]
 === Q3 2024

-* **{hcp-title} cluster node limit update.** {hcp-title} clusters can now scale to 180 worker nodes. This is an increase from the previous limit of 90 nodes. For more information, see xref:../rosa_planning/rosa-limits-scalability.html[Limits and scalability].
+* **{hcp-title} cluster node limit update.** {hcp-title} clusters can now scale to 250 worker nodes. This is an increase from the previous limit of 180 nodes. For more information, see xref:../rosa_planning/rosa-hcp-limits-scalability.adoc#tested-cluster-maximums-hcp-sd_rosa-hcp-limits-scalability[ROSA with HCP limits and scalability].

 * **IMDSv2 support in {hcp-title}.** You can now enforce the use of the IMDSv2 endpoint for default machine pool worker nodes on new {hcp-title} clusters and for new machine pools on existing clusters. For more information, see xref:../rosa_hcp/terraform/rosa-hcp-creating-a-cluster-quickly-terraform.adoc#rosa-hcp-creating-a-cluster-quickly-terraform[Creating a default ROSA cluster using Terraform].