Commit 09b1f4c

Merge pull request #90755 from xenolinux/hcp-rns-418
OSDOCS#13713: Release notes entries for HCP for OCP 4.18
2 parents 158c2eb + c586b4e commit 09b1f4c

1 file changed: +11 -6 lines changed

hosted_control_planes/hosted-control-planes-release-notes.adoc

Lines changed: 11 additions & 6 deletions
@@ -28,6 +28,11 @@ The {product-title} documentation now highlights the differences between {hcp} a

 Configuring proxy support for {hcp} has a few differences from configuring proxy support for standalone {product-title}. For more information, see xref:../hosted_control_planes/hcp-networking.adoc[Networking for {hcp}].

+[id="hcp-4-18-runc-crun_{context}"]
+==== Default container runtime for worker nodes is crun
+
+In {hcp} for {product-title} 4.18 or later, the default container runtime for worker nodes is changed from runC to crun.
+
 [id="bug-fixes-hcp-rn-4-18_{context}"]
 === Bug fixes
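
As a quick way to see which runtime a worker node ended up with after this change, the following is a minimal sketch; it assumes you can start a debug pod on the node and that the active CRI-O configuration reports its `default_runtime` setting through `crio config`, and the node name is a placeholder.

[source,terminal]
----
# Inspect the CRI-O configuration on one worker node of the hosted cluster
$ oc debug node/<worker-node-name> -- chroot /host crio config 2>/dev/null | grep default_runtime
----

On node pools running 4.18 or later the expected value is `crun`, while node pools on earlier versions default to `runc`.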

@@ -37,7 +42,7 @@ Configuring proxy support for {hcp} has a few differences from configuring proxy

 * Previously, incorrect addresses were being passed to the Kubernetes EndpointSlice on a cluster, and this issue prevented the installation of the MetalLB Operator on an Agent-based cluster in an IPv6 disconnected environment. With this release, a fix modifies the address evaluation method. Red{nbsp}Hat Marketplace pods can now successfully connect to the cluster API server, so that the installation of MetalLB Operator and handling of ingress traffic in IPv6 disconnected environments can occur. (link:https://issues.redhat.com/browse/OCPBUGS-46665[*OCPBUGS-46665*])

-* Previously, in {hcp} on the Agent platform, ARM64 architecture was not allowed in the NodePool API. As a consequence, heterogeneous clusters could not be deployed on the Agent platform. In this release, the API now allows ARM64 architecture node pools on the Agent platform. (link:https://issues.redhat.com/browse/OCPBUGS-46373[*OCPBUGS-4673*])
+* Previously, in {hcp} on the Agent platform, ARM64 architecture was not allowed in the `NodePool` API. As a consequence, heterogeneous clusters could not be deployed on the Agent platform. In this release, the API now allows ARM64 architecture node pools on the Agent platform. (link:https://issues.redhat.com/browse/OCPBUGS-46373[*OCPBUGS-4673*])

 * Previously, the default `node-monitor-grace-period` value was 50 seconds. As a consequence, nodes did not stay ready for the duration of time that Kubernetes components needed to reconnect, coordinate, and complete their requests. With this release, the default `node-monitor-grace-period` value is 55 seconds. As a result, the issue is resolved and deployments have enough time to be completed. (link:https://issues.redhat.com/browse/OCPBUGS-46008[*OCPBUGS-46008*])
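
Related to the `NodePool` fix above, here is a hedged check of which CPU architecture a node pool requests; it assumes the HyperShift `NodePool` resource surfaces the architecture in a `spec.arch` field, and the namespace and node pool names are placeholders.

[source,terminal]
----
# Print the architecture requested by a node pool on the Agent platform
$ oc get nodepool <nodepool-name> -n <hosted-cluster-namespace> -o jsonpath='{.spec.arch}'
----

With this release, `arm64` is an accepted value on the Agent platform, which is what makes heterogeneous (mixed `amd64` and `arm64`) clusters possible.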

@@ -63,7 +68,7 @@ Configuring proxy support for {hcp} has a few differences from configuring proxy
 * When a node pool is scaled down to 0 workers, the list of hosts in the console still shows nodes in a `Ready` state. You can verify the number of nodes in two ways:

 ** In the console, go to the node pool and verify that it has 0 nodes.
-** On the command-rline interface, run the following commands:
+** On the command-line interface, run the following commands:

 *** Verify that 0 nodes are in the node pool by running the following command:
 +
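
The commands that this hunk refers to are truncated by the diff context; as an illustration of the CLI check, the following sketch assumes the usual `oc get nodepool` and `oc get agents` commands (the second appears in the unchanged part of the file), with placeholder names.

[source,terminal]
----
# The node pool should report 0 desired and 0 current nodes
$ oc get nodepool <nodepool-name> -n <hosted-cluster-namespace>

# No agents should remain bound to the scaled-down node pool
$ oc get agents -A
----
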
@@ -89,17 +94,17 @@ $ oc get agents -A
 * When you create a hosted cluster in an environment that uses the dual-stack network, you might encounter the following DNS-related issues:

 ** `CrashLoopBackOff` state in the `service-ca-operator` pod: When the pod tries to reach the Kubernetes API server through the hosted control plane, the pod cannot reach the server because the data plane proxy in the `kube-system` namespace cannot resolve the request. This issue occurs because in the HAProxy setup, the front end uses an IP address and the back end uses a DNS name that the pod cannot resolve.
-** Pods stuck in `ContainerCreating` state: This issue occurs because the `openshift-service-ca-operator` cannot generate the `metrics-tls` secret that the DNS pods need for DNS resolution. As a result, the pods cannot resolve the Kubernetes API server.
+** Pods stuck in the `ContainerCreating` state: This issue occurs because the `openshift-service-ca-operator` resource cannot generate the `metrics-tls` secret that the DNS pods need for DNS resolution. As a result, the pods cannot resolve the Kubernetes API server.
 To resolve these issues, configure the DNS server settings for a dual stack network.

 * On the Agent platform, the {hcp} feature periodically rotates the token that the Agent uses to pull ignition. As a result, if you have an Agent resource that was created some time ago, it might fail to pull ignition. As a workaround, in the Agent specification, delete the secret of the `IgnitionEndpointTokenReference` property then add or modify any label on the Agent resource. The system re-creates the secret with the new token.

 * If you created a hosted cluster in the same namespace as its managed cluster, detaching the managed hosted cluster deletes everything in the managed cluster namespace including the hosted cluster. The following situations can create a hosted cluster in the same namespace as its managed cluster:

 ** You created a hosted cluster on the Agent platform through the {mce} console by using the default hosted cluster cluster namespace.
-** You created a hosted cluster through the command-line interface or API by specifying the hosted cluster namespace to be the same as the hosted cluster name.
+** You created a hosted cluster through the command-line interface or API by specifying the hosted cluster namespace to be the same as the hosted cluster name.

-* When you use the console or API to specify an IPv6 address for the `spec.services.servicePublishingStrategy.nodePort.address` of a hosted cluster, a full IPv6 address with 8 hextets is required. For example, instead of specifying `2620:52:0:1306::30`, you need to specify `2620:52:0:1306:0:0:0:30`.
+* When you use the console or API to specify an IPv6 address for the `spec.services.servicePublishingStrategy.nodePort.address` field of a hosted cluster, a full IPv6 address with 8 hextets is required. For example, instead of specifying `2620:52:0:1306::30`, you need to specify `2620:52:0:1306:0:0:0:30`.

 [id="hcp-tech-preview-features_{context}"]
 === General Availability and Technology Preview features
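
For the ignition token known issue listed in this hunk, the workaround (delete the secret that the Agent's `IgnitionEndpointTokenReference` points to, then add or modify any label on the Agent resource) might look like the following sketch; the `spec.ignitionEndpointTokenReference.name` path, the label key, and all names are assumptions or placeholders.

[source,terminal]
----
# Look up the secret referenced by the Agent (field path is an assumption)
$ oc get agent <agent-name> -n <agent-namespace> -o jsonpath='{.spec.ignitionEndpointTokenReference.name}'

# Delete that secret so that it is re-created with a fresh token
$ oc delete secret <ignition-token-secret> -n <agent-namespace>

# Add or modify any label on the Agent resource; the key and value are arbitrary
$ oc label agent <agent-name> -n <agent-namespace> ignition-token-refreshed=true --overwrite
----
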
@@ -155,4 +160,4 @@ For {ibm-power-title} and {ibm-z-title}, you must run the control plane on machi
 |Not Available
 |Developer Preview
 |Developer Preview
-|===
+|===
