:_mod-docs-content-type: ASSEMBLY
[id="hosted-control-planes-release-notes"]
= {hcp-capital} release notes
include::_attributes/common-attributes.adoc[]
:context: hosted-control-planes-release-notes

toc::[]

Release notes contain information about new and deprecated features, changes, and known issues.

[id="hcp-4-19-release-notes_{context}"]
== {hcp-capital} release notes for {product-title} 4.19

With this release, {hcp} for {product-title} 4.19 is available. {hcp-capital} for {product-title} 4.19 supports {mce} version 2.9.

[id="hcp-4-19-new-features-and-enhancements_{context}"]
=== New features and enhancements

[id="bug-fixes-hcp-rn-4-19_{context}"]
=== Bug fixes

* Previously, when an `ImageDigestMirrorSet` (IDMS) or `ImageContentSourcePolicy` (ICSP) resource in the management OpenShift cluster defined a source that pointed to `registry.redhat.io` or `registry.redhat.io/redhat`, and the mirror registry did not contain the required OLM catalog images, provisioning for the `HostedCluster` resource stalled due to unauthorized image pulls. As a consequence, the `HostedCluster` resource was not deployed, and it remained blocked because it could not pull essential catalog images from the mirrored registry.
+
With this release, if a required image cannot be pulled due to authorization errors, the provisioning now explicitly fails. The registry override logic is improved to allow matches on the root of the registry, such as `registry.redhat.io`, for OLM `CatalogSource` image resolution. A fallback mechanism is also introduced to use the original `ImageReference` if the registry override does not yield a working image.
+
As a result, the `HostedCluster` resource can be deployed successfully, even in scenarios where the mirror registry lacks the required OLM catalog images, because the system correctly falls back to pulling from the original source when appropriate. (link:https://issues.redhat.com/browse/OCPBUGS-56492[OCPBUGS-56492])
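+
If you need to review which mirror sources a management cluster defines, you can list the IDMS and ICSP resources directly. This is a general diagnostic sketch, not part of the fix itself:
+
[source,terminal]
----
$ oc get imagedigestmirrorset,imagecontentsourcepolicy -o yaml
----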

[id="known-issues-hcp-rn-4-19_{context}"]
=== Known issues

* If the annotation and the `ManagedCluster` resource name do not match, the {mce} console displays the cluster as `Pending import`. The cluster cannot be used by the {mce-short}. The same issue happens when there is no annotation and the `ManagedCluster` name does not match the `Infra-ID` value of the `HostedCluster` resource.
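+
As an informal check, you can compare the `ManagedCluster` resource names against the `Infra-ID` value of the `HostedCluster` resource. The names in angle brackets are placeholders for your own values:
+
[source,terminal]
----
$ oc get managedcluster
----
+
[source,terminal]
----
$ oc get hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> -o jsonpath='{.spec.infraID}'
----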

* When you use the {mce} console to add a new node pool to an existing hosted cluster, the same version of {product-title} might appear more than once in the list of options. You can select any instance of the version that you want from the list.

* When a node pool is scaled down to 0 workers, the list of hosts in the console still shows nodes in a `Ready` state. You can verify the number of nodes in two ways:

** In the console, go to the node pool and verify that it has 0 nodes.
** On the command-line interface, run the following commands:

*** Verify that 0 nodes are in the node pool by running the following command:
+
[source,terminal]
----
$ oc get nodepool -A
----

*** Verify that 0 nodes are in the cluster by running the following command:
+
[source,terminal]
----
$ oc get nodes --kubeconfig <hosted_cluster_kubeconfig_file>
----

*** Verify that 0 agents are reported as bound to the cluster by running the following command:
+
[source,terminal]
----
$ oc get agents -A
----

* When you create a hosted cluster in an environment that uses the dual-stack network, you might encounter the following DNS-related issues:

** `CrashLoopBackOff` state in the `service-ca-operator` pod: When the pod tries to reach the Kubernetes API server through the hosted control plane, the pod cannot reach the server because the data plane proxy in the `kube-system` namespace cannot resolve the request. This issue occurs because in the HAProxy setup, the front end uses an IP address and the back end uses a DNS name that the pod cannot resolve.
** Pods stuck in the `ContainerCreating` state: This issue occurs because the `openshift-service-ca-operator` resource cannot generate the `metrics-tls` secret that the DNS pods need for DNS resolution. As a result, the pods cannot resolve the Kubernetes API server.
+
To resolve these issues, configure the DNS server settings for a dual-stack network.
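+
To find pods that show these symptoms, you can scan for the two failure states across all namespaces. This is a general diagnostic sketch; the affected namespaces depend on your environment:
+
[source,terminal]
----
$ oc get pods -A | grep -E 'CrashLoopBackOff|ContainerCreating'
----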

* On the Agent platform, the {hcp} feature periodically rotates the token that the Agent uses to pull ignition. As a result, if you have an Agent resource that was created some time ago, it might fail to pull ignition. As a workaround, in the Agent specification, delete the secret of the `IgnitionEndpointTokenReference` property, and then add or modify any label on the Agent resource. The system re-creates the secret with the new token.
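+
The following sketch shows one way to apply the workaround, assuming that the `spec.ignitionEndpointTokenReference.name` field holds the secret name. The names in angle brackets and the label key are placeholders:
+
[source,terminal]
----
$ oc get agent <agent_name> -n <agent_namespace> -o jsonpath='{.spec.ignitionEndpointTokenReference.name}'

$ oc delete secret <token_secret_name> -n <agent_namespace>

$ oc label agent <agent_name> -n <agent_namespace> ignition-token-refresh=retry --overwrite
----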

* If you created a hosted cluster in the same namespace as its managed cluster, detaching the managed hosted cluster deletes everything in the managed cluster namespace, including the hosted cluster. The following situations can create a hosted cluster in the same namespace as its managed cluster:

** You created a hosted cluster on the Agent platform through the {mce} console by using the default hosted cluster namespace.
** You created a hosted cluster through the command-line interface or API by specifying the hosted cluster namespace to be the same as the hosted cluster name.
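+
In either situation, one way to spot this layout before you detach the cluster is to list all hosted clusters and check whether the `NAMESPACE` column matches the `NAME` column. This is an informal check rather than a documented procedure:
+
[source,terminal]
----
$ oc get hostedcluster -A
----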

* When you use the console or API to specify an IPv6 address for the `spec.services.servicePublishingStrategy.nodePort.address` field of a hosted cluster, a full IPv6 address with 8 hextets is required. For example, instead of specifying `2620:52:0:1306::30`, you need to specify `2620:52:0:1306:0:0:0:30`.
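+
For example, the following sketch sets a full IPv6 address on the first entry of the `spec.services` list by using a JSON patch. The names in angle brackets and the list index are placeholders for your own values:
+
[source,terminal]
----
$ oc patch hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> --type=json \
  -p '[{"op": "replace", "path": "/spec/services/0/servicePublishingStrategy/nodePort/address", "value": "2620:52:0:1306:0:0:0:30"}]'
----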

[id="hcp-tech-preview-features_{context}"]
=== General Availability and Technology Preview features

Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. For more information about the scope of support for these features, see link:https://access.redhat.com/support/offerings/techpreview[Technology Preview Features Support Scope] on the Red{nbsp}Hat Customer Portal.

[IMPORTANT]
====
For {ibm-power-title} and {ibm-z-title}, you must run the control plane on machine types based on 64-bit x86 architecture, and node pools on {ibm-power-title} or {ibm-z-title}.
====

.{hcp-capital} GA and TP tracker
[cols="4,1,1,1",options="header"]
|===
|Feature |4.17 |4.18 |4.19

|{hcp-capital} for {product-title} using non-bare-metal agent machines
|Technology Preview
|Technology Preview
|Technology Preview

|{hcp-capital} for an ARM64 {product-title} cluster on {aws-full}
|General Availability
|General Availability
|General Availability

|{hcp-capital} for {product-title} on {ibm-power-title}
|General Availability
|General Availability
|General Availability

|{hcp-capital} for {product-title} on {ibm-z-title}
|General Availability
|General Availability
|General Availability

|{hcp-capital} for {product-title} on {rh-openstack}
|Developer Preview
|Developer Preview
|Technology Preview
|===