:_mod-docs-content-type: ASSEMBLY
[id="hosted-control-planes-release-notes"]
= {hcp-capital} release notes
include::_attributes/common-attributes.adoc[]
:context: hosted-control-planes-release-notes

toc::[]

Release notes contain information about new and deprecated features, changes, and known issues.

With this release, {hcp} for {product-title} 4.17 is available. {hcp-capital} for {product-title} 4.17 supports the {mce} version 2.7.

[id="hcp-4-17-new-features-and-enhancements_{context}"]
== New features and enhancements

This release adds improvements related to the following concepts:

[id="hcp-4-17-taints-tolerations_{context}"]
=== Custom taints and tolerations (Technology Preview)
For {hcp} on {VirtProductName}, you can now apply tolerations to hosted control plane pods by using the `hcp` CLI `--tolerations` argument or by using the `hc.Spec.Tolerations` API field. This feature is available as a Technology Preview feature. For more information, see xref:../hosted_control_planes/hcp-prepare/hcp-distribute-workloads.adoc#hcp-virt-taints-tolerations_hcp-distribute-workloads[Custom taints and tolerations].
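
For example, the following `HostedCluster` excerpt is a minimal sketch of how the `hc.Spec.Tolerations` field might be populated. It assumes that the field accepts the standard Kubernetes toleration syntax; the taint key, value, and effect are placeholder values only:

[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: example-hosted-cluster
  namespace: clusters
spec:
  tolerations:
  # Placeholder toleration; match it to the taint that is applied to your nodes.
  - key: "example-taint-key"
    operator: "Equal"
    value: "example-taint-value"
    effect: "NoSchedule"
----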

[id="hcp-4-17-nvidia-gpu_{context}"]
=== Support for NVIDIA GPU devices on {VirtProductName} (Technology Preview)
For {hcp} on {VirtProductName}, you can attach one or more NVIDIA graphics processing unit (GPU) devices to node pools. This feature is available as a Technology Preview feature. For more information, see xref:../hosted_control_planes/hcp-manage/hcp-manage-virt.adoc#hcp-virt-attach-nvidia-gpus_hcp-manage-virt[Attaching NVIDIA GPU devices by using the hcp CLI] and xref:../hosted_control_planes/hcp-manage/hcp-manage-virt.adoc#hcp-virt-attach-nvidia-gpus-np-api_hcp-manage-virt[Attaching NVIDIA GPU devices by using the NodePool resource].
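
As an illustration only, the following `NodePool` excerpt sketches how a GPU device might be referenced for the KubeVirt platform. The `hostDevices` field name, the device name, and the count are assumptions for this sketch; see the linked procedures for the authoritative field names and values:

[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: example-nodepool
  namespace: clusters
spec:
  platform:
    type: KubeVirt
    kubevirt:
      # Assumed field for attaching host devices, such as NVIDIA GPU devices.
      hostDevices:
      - deviceName: nvidia-a100   # example device name
        count: 2
----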

[id="hcp-4-17-tenancy-aws_{context}"]
=== Support for tenancy on {aws-short}
When you create a hosted cluster on {aws-short}, you can indicate whether the EC2 instance should run on shared or single-tenant hardware. For more information, see xref:../hosted_control_planes/hcp-deploy/hcp-deploy-aws.adoc#hcp-aws-deploy-hc_hcp-deploy-aws[Creating a hosted cluster on {aws-short}].
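
As a rough sketch only, the following `NodePool` excerpt shows where a tenancy preference for {aws-short} might be expressed. The `placement.tenancy` field path and the `dedicated` value are assumptions for this sketch; the linked procedure describes the supported way to choose shared or single-tenant hardware:

[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: example-nodepool
  namespace: clusters
spec:
  platform:
    type: AWS
    aws:
      # Assumed field path for requesting single-tenant (dedicated) EC2 hardware.
      placement:
        tenancy: dedicated
----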

[id="hcp-4-17-ocp-version-support_{context}"]
=== Support for {product-title} versions in hosted clusters
You can deploy a range of supported {product-title} versions in a hosted cluster. For more information, see xref:../hosted_control_planes/hcp-updating.adoc#hcp-supported-ocp-versions_hcp-updating[Supported {product-title} versions in a hosted cluster].
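
For example, you can check which {product-title} versions the HyperShift Operator accepts for hosted clusters by inspecting the `supported-versions` config map. This sketch assumes that the Operator runs in the `hypershift` namespace:

[source,terminal]
----
$ oc get configmap supported-versions -n hypershift -o yaml
----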

[id="hcp-4-17-virt-dc_{context}"]
=== {hcp-capital} on {VirtProductName} in a disconnected environment is Generally Available
In this release, {hcp} on {VirtProductName} in a disconnected environment is Generally Available. For more information, see xref:../hosted_control_planes/hcp-disconnected/hcp-deploy-dc-virt.adoc[Deploying {hcp} on {VirtProductName} in a disconnected environment].

[id="hcp-4-17-multi_{context}"]
=== {hcp-capital} for an ARM64 {product-title} cluster on {aws-short} is Generally Available
In this release, {hcp} for an ARM64 {product-title} cluster on {aws-short} is Generally Available. For more information, see xref:../hosted_control_planes/hcp-deploy/hcp-deploy-aws.adoc#hcp-enable-arm-amd_hcp-deploy-aws[Running hosted clusters on an ARM64 architecture].

[id="hcp-4-17-bm_{context}"]
=== {hcp-capital} for an {product-title} cluster on bare metal is Generally Available
In this release, {hcp} for an {product-title} cluster on bare metal is Generally Available. For more information, see xref:../hosted_control_planes/hcp-deploy/hcp-deploy-bm.adoc[Deploying hosted clusters on bare metal].

[id="hcp-4-17-ibm-z-ga_{context}"]
=== {hcp-capital} on {ibm-z-title} is Generally Available
In this release, {hcp} on {ibm-z-title} is Generally Available. For more information, see xref:../hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc[Deploying {hcp} on {ibm-z-title}].

[id="hcp-4-17-ibm-power-ga_{context}"]
=== {hcp-capital} on {ibm-power-title} is Generally Available
In this release, {hcp} on {ibm-power-title} is Generally Available. For more information, see xref:../hosted_control_planes/hcp-deploy/hcp-deploy-ibm-power.adoc[Deploying {hcp} on {ibm-power-title}].

[id="bug-fixes-hcp-rn-4-17_{context}"]
== Bug fixes

* Previously, when a hosted cluster proxy was configured and it used an identity provider (IDP) that had an HTTP or HTTPS endpoint, the hostname of the IDP was not resolved before it was sent through the proxy. Consequently, hostnames that could only be resolved by the data plane failed to resolve for IDPs. With this update, a DNS lookup is performed before sending IDP traffic through the `konnectivity` tunnel. As a result, IDPs with hostnames that can only be resolved by the data plane can be verified by the Control Plane Operator. (link:https://issues.redhat.com/browse/OCPBUGS-41371[*OCPBUGS-41371*])

* Previously, when the hosted cluster `controllerAvailabilityPolicy` was set to `SingleReplica`, `podAntiAffinity` on networking components blocked the availability of the components. With this release, the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-39313[*OCPBUGS-39313*])

* Previously, the `AdditionalTrustedCA` that was specified in the hosted cluster image configuration was not reconciled into the `openshift-config` namespace, as expected by the `image-registry-operator`, and the component did not become available. With this release, the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-39225[*OCPBUGS-39225*])

* Previously, Red{nbsp}Hat HyperShift periodic conformance jobs failed because of changes to the core operating system. These failed jobs caused the OpenShift API deployment to fail. With this release, an update recursively copies individual trusted certificate authority (CA) certificates instead of copying a single file, so that the periodic conformance jobs succeed and the OpenShift API runs as expected. (link:https://issues.redhat.com/browse/OCPBUGS-38941[*OCPBUGS-38941*])

* Previously, the Konnectivity proxy agent in a hosted cluster always sent all TCP traffic through an HTTP/S proxy. It also ignored host names in the `NO_PROXY` configuration because it only received resolved IP addresses in its traffic. As a consequence, traffic that was not meant to be proxied, such as LDAP traffic, was proxied regardless of configuration. With this release, proxying is completed at the source (control plane) and the Konnectivity agent proxying configuration is removed. As a result, traffic that is not meant to be proxied, such as LDAP traffic, is no longer proxied. The `NO_PROXY` configuration that includes host names is honored. (link:https://issues.redhat.com/browse/OCPBUGS-38637[*OCPBUGS-38637*])

* Previously, the `azure-disk-csi-driver-controller` image did not get appropriate override values when you used `registryOverride`. This behavior was intentional to avoid propagating the values to the `azure-disk-csi-driver` data plane images. With this update, the issue is resolved by adding a separate image override value. As a result, the `azure-disk-csi-driver-controller` can be used with `registryOverride` and no longer affects `azure-disk-csi-driver` data plane images. (link:https://issues.redhat.com/browse/OCPBUGS-38183[*OCPBUGS-38183*])

* Previously, the {aws-short} cloud controller manager within a hosted control plane that was running on a proxied management cluster would not use the proxy for cloud API communication. With this release, the issue is fixed. (link:https://issues.redhat.com/browse/OCPBUGS-37832[*OCPBUGS-37832*])

* Previously, proxying for Operators that run in the control plane of a hosted cluster was performed through proxy settings on the Konnectivity agent pod that runs in the data plane. It was not possible to distinguish if proxying was needed based on application protocol.
+
For parity with {product-title}, IDP communication through HTTPS or HTTP should be proxied, but LDAP communication should not be proxied. This type of proxying also ignores `NO_PROXY` entries that rely on host names because by the time traffic reaches the Konnectivity agent, only the destination IP address is available.
+
With this release, in hosted clusters, the proxy is invoked in the control plane through `konnectivity-https-proxy` and `konnectivity-socks5-proxy`, and proxying is no longer performed by the Konnectivity agent. As a result, traffic that is destined for LDAP servers is no longer proxied. Other HTTP or HTTPS traffic is proxied correctly. The `NO_PROXY` setting is honored when you specify hostnames. (link:https://issues.redhat.com/browse/OCPBUGS-37052[*OCPBUGS-37052*])

* Previously, proxying for IDP communication occurred in the Konnectivity agent. By the time traffic reached Konnectivity, its protocol and hostname were no longer available. As a consequence, proxying was not done correctly for the OAuth server pod. It did not distinguish between protocols that require proxying (`http/s`) and protocols that do not (`ldap://`). In addition, it did not honor the `no_proxy` variable that is configured in the `HostedCluster.spec.configuration.proxy` spec.
+
With this release, you can configure the proxy on the Konnectivity sidecar of the OAuth server so that traffic is routed appropriately, honoring your `no_proxy` settings. As a result, the OAuth server can communicate properly with identity providers when a proxy is configured for the hosted cluster. (link:https://issues.redhat.com/browse/OCPBUGS-36932[*OCPBUGS-36932*])

* Previously, the Hosted Cluster Config Operator (HCCO) did not delete the `ImageDigestMirrorSet` CR (IDMS) after you removed the `ImageContentSources` field from the `HostedCluster` object. As a consequence, the IDMS persisted in the `HostedCluster` object when it should not. With this release, the HCCO manages the deletion of IDMS resources from the `HostedCluster` object. (link:https://issues.redhat.com/browse/OCPBUGS-34820[*OCPBUGS-34820*])

* Previously, deploying a `HostedCluster` resource in a disconnected environment required setting the `hypershift.openshift.io/control-plane-operator-image` annotation. With this update, the annotation is no longer needed. Additionally, the metadata inspector works as expected during the hosted Operator reconciliation, and `OverrideImages` is populated as expected. (link:https://issues.redhat.com/browse/OCPBUGS-34734[*OCPBUGS-34734*])

* Previously, hosted clusters on {aws-short} used the primary CIDR range of their VPC to generate security group rules on the data plane. As a consequence, if you installed a hosted cluster into an {aws-short} VPC with multiple CIDR ranges, the generated security group rules could be insufficient. With this update, security group rules are generated based on the provided machine CIDR range instead. (link:https://issues.redhat.com/browse/OCPBUGS-34274[*OCPBUGS-34274*])

* Previously, the {cluster-manager} container did not have the right TLS certificates. As a consequence, you could not use image streams in disconnected deployments. With this release, the TLS certificates are added as projected volumes to resolve the issue. (link:https://issues.redhat.com/browse/OCPBUGS-31446[*OCPBUGS-31446*])

* Previously, the bulk destroy option in the {mce} console for {VirtProductName} did not destroy a hosted cluster. With this release, this issue is resolved. (link:https://issues.redhat.com/browse/ACM-10165[*ACM-10165*])

[id="known-issues-hcp-rn-4-17_{context}"]
== Known issues

* If the annotation and the `ManagedCluster` resource name do not match, the {mce} console displays the cluster as `Pending import`. The cluster cannot be used by the {mce-short}. The same issue happens when there is no annotation and the `ManagedCluster` name does not match the `Infra-ID` value of the `HostedCluster` resource.
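+
To check for this mismatch, you can compare the two values directly. The following sketch assumes that the `HostedCluster` resource is in the `clusters` namespace; adjust the namespace and resource names for your environment:
+
[source,terminal]
----
$ oc get managedcluster <managed_cluster_name>
----
+
[source,terminal]
----
$ oc get hostedcluster <hosted_cluster_name> -n clusters -o jsonpath='{.spec.infraID}'
----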

* When you use the {mce} console to add a new node pool to an existing hosted cluster, the same version of {product-title} might appear more than once in the list of options. You can select any instance in the list for the version that you want.

* When a node pool is scaled down to 0 workers, the list of hosts in the console still shows nodes in a `Ready` state. You can verify the number of nodes in two ways:

** In the console, go to the node pool and verify that it has 0 nodes.
** In the command-line interface, run the following commands:

*** Verify that 0 nodes are in the node pool by running the following command:
+
[source,terminal]
----
$ oc get nodepool -A
----

*** Verify that 0 nodes are in the cluster by running the following command against the kubeconfig file of the hosted cluster:
+
[source,terminal]
----
$ oc get nodes --kubeconfig <hosted_cluster_kubeconfig_file>
----

*** Verify that 0 agents are reported as bound to the cluster by running the following command:
+
[source,terminal]
----
$ oc get agents -A
----

* When you create a hosted cluster in an environment that uses the dual-stack network, you might encounter the following DNS-related issues:

** `CrashLoopBackOff` state in the `service-ca-operator` pod: When the pod tries to reach the Kubernetes API server through the hosted control plane, the pod cannot reach the server because the data plane proxy in the `kube-system` namespace cannot resolve the request. This issue occurs because in the HAProxy setup, the front end uses an IP address and the back end uses a DNS name that the pod cannot resolve.
** Pods stuck in `ContainerCreating` state: This issue occurs because the `openshift-service-ca-operator` cannot generate the `metrics-tls` secret that the DNS pods need for DNS resolution. As a result, the pods cannot resolve the Kubernetes API server.
+
To resolve these issues, configure the DNS server settings for a dual-stack network.

* On the Agent platform, the {hcp} feature periodically rotates the token that the Agent uses to pull ignition. As a result, if you have an Agent resource that was created some time ago, it might fail to pull ignition. As a workaround, in the Agent specification, delete the secret of the `IgnitionEndpointTokenReference` property, and then add or modify any label on the Agent resource. The system re-creates the secret with the new token.
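+
The following commands are a minimal sketch of the workaround. They assume that you already identified the secret that the `IgnitionEndpointTokenReference` property references, and that the Agent resource and the secret are in the same namespace. The label key and value are arbitrary; any label change triggers reconciliation:
+
[source,terminal]
----
$ oc delete secret <ignition_token_secret_name> -n <agent_namespace>
----
+
[source,terminal]
----
$ oc label agent <agent_name> -n <agent_namespace> touch=retrigger --overwrite
----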

* If you created a hosted cluster in the same namespace as its managed cluster, detaching the managed hosted cluster deletes everything in the managed cluster namespace, including the hosted cluster. The following situations can create a hosted cluster in the same namespace as its managed cluster:

** You created a hosted cluster on the Agent platform through the {mce} console by using the default hosted cluster namespace.
** You created a hosted cluster through the command-line interface or API by specifying the hosted cluster namespace to be the same as the hosted cluster name.
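+
To avoid this situation when you use the command-line interface, you can place the hosted cluster in a namespace that differs from the cluster name. The following sketch assumes the `hcp create cluster agent` command and its `--namespace` flag, and omits the other required platform arguments:
+
[source,terminal]
----
$ hcp create cluster agent \
  --name <hosted_cluster_name> \
  --namespace clusters
----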

[id="hcp-tech-preview-features_{context}"]
== Generally Available and Technology Preview features

Features that are Generally Available (GA) are fully supported and suitable for production use. Technology Preview (TP) features are experimental features and are not intended for production use. For more information about TP features, see the link:https://access.redhat.com/support/offerings/techpreview[Technology Preview scope of support on the Red Hat Customer Portal].

The following table shows the GA and TP status of {hcp} features by release:

.{hcp-capital} GA and TP tracker
[cols="4,1,1,1",options="header"]
|===
|Feature |4.15 |4.16 |4.17

|{hcp-capital} for {product-title} on {aws-first}
|Technology Preview
|Generally Available
|Generally Available

|{hcp-capital} for {product-title} on bare metal
|Generally Available
|Generally Available
|Generally Available

|{hcp-capital} for {product-title} on {VirtProductName}
|Generally Available
|Generally Available
|Generally Available

|{hcp-capital} for {product-title} using non-bare-metal agent machines
|Technology Preview
|Technology Preview
|Technology Preview

|{hcp-capital} for an ARM64 {product-title} cluster on {aws-full}
|Technology Preview
|Technology Preview
|Generally Available

|{hcp-capital} for {product-title} on {ibm-power-title}
|Technology Preview
|Technology Preview
|Generally Available

|{hcp-capital} for {product-title} on {ibm-z-title}
|Technology Preview
|Technology Preview
|Generally Available

|{hcp-capital} for {product-title} on {rh-openstack}
|Not Available
|Not Available
|Developer Preview
|===