// hosted_control_planes/hosted-control-planes-release-notes.adoc
Configuring proxy support for {hcp} has a few differences from configuring proxy support for standalone {product-title}. For more information, see xref:../hosted_control_planes/hcp-networking.adoc[Networking for {hcp}].

[id="hcp-4-18-runc-crun_{context}"]
==== Default container runtime for worker nodes is crun

In {hcp} for {product-title} 4.18 or later, the default container runtime for worker nodes changed from runC to crun.

[id="bug-fixes-hcp-rn-4-18_{context}"]
=== Bug fixes

* Previously, incorrect addresses were being passed to the Kubernetes EndpointSlice on a cluster, and this issue prevented the installation of the MetalLB Operator on an Agent-based cluster in an IPv6 disconnected environment. With this release, a fix modifies the address evaluation method. Red{nbsp}Hat Marketplace pods can now successfully connect to the cluster API server, so that the installation of MetalLB Operator and handling of ingress traffic in IPv6 disconnected environments can occur. (link:https://issues.redhat.com/browse/OCPBUGS-46665[*OCPBUGS-46665*])
* Previously, in {hcp} on the Agent platform, ARM64 architecture was not allowed in the `NodePool` API. As a consequence, heterogeneous clusters could not be deployed on the Agent platform. In this release, the API now allows ARM64 architecture node pools on the Agent platform. (link:https://issues.redhat.com/browse/OCPBUGS-46373[*OCPBUGS-46373*])
* Previously, the default `node-monitor-grace-period` value was 50 seconds. As a consequence, nodes did not stay ready for the duration of time that Kubernetes components needed to reconnect, coordinate, and complete their requests. With this release, the default `node-monitor-grace-period` value is 55 seconds. As a result, the issue is resolved and deployments have enough time to be completed. (link:https://issues.redhat.com/browse/OCPBUGS-46008[*OCPBUGS-46008*])
* When a node pool is scaled down to 0 workers, the list of hosts in the console still shows nodes in a `Ready` state. You can verify the number of nodes in two ways:
** In the console, go to the node pool and verify that it has 0 nodes.
** On the command-line interface, run the following commands:
*** Verify that 0 nodes are in the node pool by running the following command:
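For example, assuming the node pools live in the default `clusters` namespace (a hypothetical value; adjust the namespace for your environment), the check might look like this:

```shell
# Hypothetical example: list node pools and their replica counts.
# The namespace "clusters" is an assumption, not a value from this document.
oc get nodepool -n clusters
```

In the output, confirm that the reported node count for your node pool is `0`.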
* When you create a hosted cluster in an environment that uses the dual-stack network, you might encounter the following DNS-related issues:
** `CrashLoopBackOff` state in the `service-ca-operator` pod: When the pod tries to reach the Kubernetes API server through the hosted control plane, the pod cannot reach the server because the data plane proxy in the `kube-system` namespace cannot resolve the request. This issue occurs because in the HAProxy setup, the front end uses an IP address and the back end uses a DNS name that the pod cannot resolve.
** Pods stuck in the `ContainerCreating` state: This issue occurs because the `openshift-service-ca-operator` resource cannot generate the `metrics-tls` secret that the DNS pods need for DNS resolution. As a result, the pods cannot resolve the Kubernetes API server.
To resolve these issues, configure the DNS server settings for a dual-stack network.
* On the Agent platform, the {hcp} feature periodically rotates the token that the Agent uses to pull ignition. As a result, if you have an Agent resource that was created some time ago, it might fail to pull ignition. As a workaround, in the Agent specification, delete the secret of the `IgnitionEndpointTokenReference` property, and then add or modify any label on the Agent resource. The system re-creates the secret with the new token.
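As a sketch, the workaround above might look like the following commands. The agent name, namespace, secret name, and label are hypothetical; read the actual secret name from the `ignitionEndpointTokenReference` field in your Agent specification:

```shell
# Hypothetical names: adjust to your environment.
AGENT=agent-example
NS=my-agents

# 1. Find the token secret referenced by the Agent specification.
oc get agent "$AGENT" -n "$NS" \
  -o jsonpath='{.spec.ignitionEndpointTokenReference.name}'

# 2. Delete that secret (substitute the name printed in step 1).
oc delete secret agent-token-example -n "$NS"

# 3. Add or modify a label on the Agent so that the system re-creates
#    the secret with a fresh token.
oc label agent "$AGENT" -n "$NS" token-refresh=true --overwrite
```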
* If you created a hosted cluster in the same namespace as its managed cluster, detaching the managed hosted cluster deletes everything in the managed cluster namespace including the hosted cluster. The following situations can create a hosted cluster in the same namespace as its managed cluster:
** You created a hosted cluster on the Agent platform through the {mce} console by using the default hosted cluster namespace.
** You created a hosted cluster through the command-line interface or API by specifying the hosted cluster namespace to be the same as the hosted cluster name.
* When you use the console or API to specify an IPv6 address for the `spec.services.servicePublishingStrategy.nodePort.address` field of a hosted cluster, a full IPv6 address with 8 hextets is required. For example, instead of specifying `2620:52:0:1306::30`, you need to specify `2620:52:0:1306:0:0:0:30`.
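To expand the shorthand form programmatically (a convenience sketch, not part of any product tooling), you can use Python's standard `ipaddress` module from the command line:

```shell
# Expand "::" shorthand to all 8 hextets without zero-padding each group,
# e.g. 2620:52:0:1306::30 -> 2620:52:0:1306:0:0:0:30
python3 -c '
import ipaddress, sys
groups = ipaddress.IPv6Address(sys.argv[1]).exploded.split(":")
print(":".join(format(int(g, 16), "x") for g in groups))
' 2620:52:0:1306::30
```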
[id="hcp-tech-preview-features_{context}"]
=== General Availability and Technology Preview features