Commit 7dbded0 (parent bbe3ff8)

adding bug fix text

1 file changed: 30 additions, 0 deletions

_unused_topics/hosted-control-planes-release-notes.adoc
[id="bug-fixes-hcp-rn-4-19_{context}"]
=== Bug fixes

//FYI - OCPBUGS-56792 is a duplicate of this bug

* Previously, when an IDMS or ICSP in the management OpenShift cluster defined a source that pointed to registry.redhat.io or registry.redhat.io/redhat, and the mirror registry did not contain the required OLM catalog images, provisioning for the `HostedCluster` resource stalled due to unauthorized image pulls. As a consequence, the `HostedCluster` resource was not deployed and remained blocked because it could not pull essential catalog images from the mirrored registry.
+
With this release, if a required image cannot be pulled due to authorization errors, the provisioning now explicitly fails. The logic for registry override is improved to allow matches on the root of the registry, such as registry.redhat.io, for OLM CatalogSource image resolution. A fallback mechanism is also introduced to use the original `ImageReference` if the registry override does not yield a working image.
+
As a result, the `HostedCluster` resource can be deployed successfully, even in scenarios where the mirror registry lacks the required OLM catalog images, as the system correctly falls back to pulling from the original source when appropriate. (link:https://issues.redhat.com/browse/OCPBUGS-56492[OCPBUGS-56492])
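+
The root-of-registry match described above can be illustrated with a minimal `ImageDigestMirrorSet` on the management cluster; this is a sketch, and the mirror host `mirror.example.com:5000` is a hypothetical placeholder:
+
[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: olm-catalog-mirror
spec:
  imageDigestMirrors:
  - source: registry.redhat.io    # root of the registry; now matched for OLM CatalogSource image resolution
    mirrors:
    - mirror.example.com:5000     # hypothetical mirror host
----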

* Previously, the control plane controller did not properly select the correct CVO manifests for a feature set. As a consequence, the incorrect CVO manifests for a feature set might have been deployed for hosted clusters. In practice, CVO manifests never differed between feature sets, so this issue had no actual impact. With this release, the control plane controller properly selects the correct CVO manifests for a feature set. As a result, the correct CVO manifests for a feature set are deployed for the hosted cluster. (link:https://issues.redhat.com/browse/OCPBUGS-44438[OCPBUGS-44438])
* Previously, when you set a secure proxy for a `HostedCluster` resource that served a certificate signed by a custom CA, that CA was not included in the initial ignition configuration for the node. As a result, the node did not boot due to failed ignition. This release fixes the issue by including the trusted CA for the proxy in the initial ignition configuration, which results in a successful node boot and ignition. (link:https://issues.redhat.com/browse/OCPBUGS-56896[OCPBUGS-56896])
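+
For illustration, a sketch of the relevant `HostedCluster` fields, assuming a config map named `user-ca-bundle` that holds the custom CA:
+
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: example            # hypothetical cluster name
  namespace: clusters
spec:
  configuration:
    proxy:
      httpsProxy: https://proxy.example.com:3128   # secure proxy serving a cert signed by a custom CA
      trustedCA:
        name: user-ca-bundle                       # CA bundle now included in the initial ignition configuration
----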
* Previously, the IDMS or ICSP resources from the management cluster were processed without considering that a user might specify only the root registry name as a mirror or source for image replacement. As a consequence, any IDMS or ICSP entries that used only the root registry name did not work as expected. With this release, the mirror replacement logic correctly handles cases where only the root registry name is provided. As a result, the issue no longer occurs, and root registry mirror replacements are now supported. (link:https://issues.redhat.com/browse/OCPBUGS-55693[OCPBUGS-55693])
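+
An entry that uses only the root registry name, previously mishandled, can be sketched as an `ImageContentSourcePolicy` with a hypothetical mirror host:
+
[source,yaml]
----
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: root-registry-mirror
spec:
  repositoryDigestMirrors:
  - source: registry.redhat.io      # root registry name only, no repository path
    mirrors:
    - mirror.example.com:5000       # hypothetical mirror
----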
* Previously, the OADP plugin looked for the `DataUpload` object in the wrong namespace. As a consequence, the backup process was stalled indefinitely. In this release, the plugin uses the source namespace of the backup object, so this problem no longer occurs. (link:https://issues.redhat.com/browse/OCPBUGS-55469[OCPBUGS-55469])
* Previously, the SAN of the custom certificate that the user added to the `hc.spec.configuration.apiServer.servingCerts.namedCertificates` field conflicted with the hostname that was set in the `hc.spec.services.servicePublishingStrategy` field for the Kubernetes API server (KAS). As a consequence, the KAS certificate was not added to the set of certificates to generate a new payload, and any new nodes that attempted to join the `HostedCluster` resource had issues with certificate validation. This release adds a validation step to fail earlier and warn the user about the issue, so that the problem no longer occurs. (link:https://issues.redhat.com/browse/OCPBUGS-53261[OCPBUGS-53261])
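+
A sketch of the conflicting configuration, with hypothetical hostnames and secret names; the SAN in `namedCertificates` must not overlap the KAS hostname set in the service publishing strategy:
+
[source,yaml]
----
spec:
  configuration:
    apiServer:
      servingCerts:
        namedCertificates:
        - names:
          - api.custom.example.com          # custom SAN; must not collide with the KAS hostname below
          servingCertificate:
            name: custom-api-cert           # hypothetical secret name
  services:
  - service: APIServer
    servicePublishingStrategy:
      type: LoadBalancer
      loadBalancer:
        hostname: api.hc.example.com        # KAS hostname; validation now fails early on a conflict
----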
* Previously, when you created a hosted cluster in a shared VPC, the private link controller sometimes failed to assume the shared VPC role to manage the VPC endpoints in the shared VPC. With this release, a client is created for every reconciliation in the private link controller so that you can recover from invalid clients. As a result, the hosted cluster endpoints and the hosted cluster are created successfully. (link:https://issues.redhat.com/browse/OCPBUGS-45184[OCPBUGS-45184])
* Previously, ARM64 architecture was not allowed in the `NodePool` API on the Agent platform. As a consequence, you could not deploy heterogeneous clusters on the Agent platform. In this release, the API allows ARM64-based `NodePool` resources on the Agent platform. (link:https://issues.redhat.com/browse/OCPBUGS-46342[OCPBUGS-46342])
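+
A minimal `NodePool` sketch for the Agent platform with ARM64 nodes; the names and release image are hypothetical placeholders:
+
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: arm64-pool               # hypothetical name
  namespace: clusters
spec:
  clusterName: example           # hypothetical HostedCluster name
  replicas: 2
  arch: arm64                    # now accepted on the Agent platform
  platform:
    type: Agent
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.19.0-multi   # placeholder release image
----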
* Previously, the HyperShift Operator always validated the subject alternative names (SANs) for the Kubernetes API server. With this release, the Operator validates the SANs only if PKI reconciliation is enabled. (link:https://issues.redhat.com/browse/OCPBUGS-56562[OCPBUGS-56562])
* Previously, in a hosted cluster that existed for more than 1 year, when the internal serving certificates were renewed, the control plane workloads did not restart to pick up the renewed certificates. As a consequence, the control plane became degraded. With this release, when certificates are renewed, the control plane workloads are automatically restarted. As a result, the control plane remains stable. (link:https://issues.redhat.com/browse/OCPBUGS-52331[OCPBUGS-52331])
* Previously, when you created a validating webhook on a resource that the OpenShift OAuth API server managed, such as a user or a group, the validating webhook was not executed. This release fixes the communication between the OpenShift OAuth API server and the data plane by adding a `Konnectivity` proxy sidecar. As a result, the process to validate webhooks on users and groups works as expected. (link:https://issues.redhat.com/browse/OCPBUGS-52190[OCPBUGS-52190])
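+
A sketch of a validating webhook on the `groups` resource managed by the OpenShift OAuth API server, using hypothetical names for the webhook and its backing service:
+
[source,yaml]
----
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: group-validation                  # hypothetical name
webhooks:
- name: groups.policy.example.com         # hypothetical webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: ["user.openshift.io"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["groups"]
  clientConfig:
    service:
      namespace: policy                   # hypothetical service location
      name: group-webhook
      path: /validate
----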
* Previously, when the `HostedCluster` resource was not available, the reason was not propagated correctly from the `HostedControlPlane` resource in the condition. The `Status` and `Message` information was propagated for the `Available` condition in the `HostedCluster` custom resource, but the `Reason` value was not propagated. In this release, the reason is also propagated, so you have more information to identify the root cause of unavailability. (link:https://issues.redhat.com/browse/OCPBUGS-50907[OCPBUGS-50907])
* Previously, the `managed-trust-bundle` volume mount and the `trusted-ca-bundle-managed` config map were introduced as mandatory components. This requirement caused deployment failures if you used your own Public Key Infrastructure (PKI), because the OpenShift API server expected the presence of the `trusted-ca-bundle-managed` config map. To address this issue, these components are now optional, so that clusters can deploy successfully without the `trusted-ca-bundle-managed` config map when you are using a custom PKI. (link:https://issues.redhat.com/browse/OCPBUGS-52323[OCPBUGS-52323])
* Previously, there was no way to verify that an `IBMPowerVSImage` resource was deleted, which led to unnecessary cluster retrieval attempts. As a consequence, hosted clusters on IBM Power Virtual Server were stuck in the destroy state. In this release, you can retrieve and process a cluster that is associated with an image only if the image is not in the process of being deleted. (link:https://issues.redhat.com/browse/OCPBUGS-46037[OCPBUGS-46037])
* Previously, when you created a cluster with secure proxy enabled and set the certificate configuration to `configuration.proxy.trustedCA`, the cluster installation failed. In addition, the OpenShift OAuth API server could not use the management cluster proxy to reach cloud APIs. This release introduces fixes to prevent these issues. (link:https://issues.redhat.com/browse/OCPBUGS-51050[OCPBUGS-51050])
* Previously, both the `NodePool` controller and the cluster API controller set the `updatingConfig` status condition on the `NodePool` custom resource. As a consequence, the `updatingConfig` status was constantly changing. With this release, the logic to update the `updatingConfig` status is consolidated between the two controllers. As a result, the `updatingConfig` status is correctly set. (link:https://issues.redhat.com/browse/OCPBUGS-45322[OCPBUGS-45322])

[id="known-issues-hcp-rn-4-19_{context}"]
=== Known issues