For any {product-title} release, always review the instructions on xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[updating your cluster] properly.

{product-title} release {product-version}.34 is now available. The list of bug fixes that are included in the update is documented in the link:https://access.redhat.com/errata/RHBA-2025:9289[RHBA-2025:9289] advisory. The RPM packages that are included in the update are provided by the link:https://access.redhat.com/errata/RHBA-2025:9290[RHBA-2025:9290] advisory.

Space precluded documenting all of the container images for this release in the advisory.

You can view the container images in this release by running the following command:

[source,terminal]
----
$ oc adm release info 4.17.34 --pullspecs
----
[id="ocp-4-17-34-known-issues_{context}"]
==== Known issues

* A known issue exists where a Technology Preview-enabled cluster includes Sigstore verification for the payload images in the `policy.json` file, but the Podman version in the base image does not support Sigstore configuration, so the new node does not become available. As a workaround, if the base image is version 4.11 or earlier, use the default `policy.json` file that does not include Sigstore verification so that the node starts. (link:https://issues.redhat.com/browse/OCPBUGS-52313[OCPBUGS-52313])

[id="ocp-4-17-34-bug-fixes_{context}"]
==== Bug fixes

* Previously, if you tried to update a hosted cluster that used in-place updates, the proxy variables were not honored and the update failed. With this release, the pod that performs in-place updates honors the cluster proxy settings. As a result, updates now work for hosted clusters that use in-place updates. (link:https://issues.redhat.com/browse/OCPBUGS-57432[OCPBUGS-57432])
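+
As a quick check, you can review the cluster-wide proxy settings that the in-place update pod now inherits. For example (a sketch; the values returned depend entirely on your cluster configuration):
+
[source,terminal]
----
$ oc get proxy/cluster -o jsonpath='{.status.httpProxy}{"\n"}{.status.httpsProxy}{"\n"}{.status.noProxy}{"\n"}'
----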
* Previously, when you defined multiple bring-your-own (BYO) subnet CIDRs for the `machineNetwork` parameter in the `install-config.yaml` configuration file, the installation failed at the bootstrap stage. This situation occurred because the control plane nodes were blocked from reaching the machine config server (MCS) to get their necessary setup configurations. The root cause was an overly strict {aws-short} security group rule that limited MCS access to only the first specified machine network CIDR. With this release, a fix to the {aws-short} security group means that the installation succeeds when multiple CIDRs are specified in the `machineNetwork` parameter of the `install-config.yaml`. (link:https://issues.redhat.com/browse/OCPBUGS-57292[OCPBUGS-57292])
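+
For reference, a minimal excerpt of an `install-config.yaml` file that defines multiple BYO subnet CIDRs for the `machineNetwork` parameter might look like the following sketch. The CIDR values are examples only:
+
[source,yaml]
----
networking:
  machineNetwork:
  - cidr: 10.0.0.0/16
  - cidr: 10.10.0.0/16
----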
* Previously, the Machine Config Operator (MCO) incorrectly set an `Upgradeable=False` condition, with a `PoolUpdating` reason, whenever new nodes were added to a cluster. With this release, the MCO correctly sets an `Upgradeable=True` condition when new nodes are added to a cluster, which resolves the issue. (link:https://issues.redhat.com/browse/OCPBUGS-57135[OCPBUGS-57135])
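+
You can confirm the condition that the MCO currently reports by querying its cluster Operator. For example (a sketch; the output depends on the state of your cluster):
+
[source,terminal]
----
$ oc get clusteroperator machine-config -o jsonpath='{.status.conditions[?(@.type=="Upgradeable")].status}{"\n"}'
----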
* Previously, the installer was not checking for ESXi hosts that were powered off within a {vmw-first} cluster, which caused the installation to fail because the OVA could not be uploaded. With this release, the installer now checks the power status of each ESXi host and skips any that are powered off, which resolves the issue and allows the OVA to be imported successfully. (link:https://issues.redhat.com/browse/OCPBUGS-56448[OCPBUGS-56448])
* Previously, in certain situations the gateway IP address for a node changed, which caused the `OVN` cluster router, which manages the static route to the cluster subnet, to add a new static route with the new gateway IP address without deleting the original one. As a result, a stale route still pointed to the switch subnet, which caused intermittent drops in egress traffic. With this release, a patch applied to the `OVN` cluster router ensures that if the gateway IP address changes, the router updates the existing static route with the new gateway IP address. Stale routes no longer remain, so egress traffic does not drop. (link:https://issues.redhat.com/browse/OCPBUGS-56443[OCPBUGS-56443])
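+
If you need to inspect the static routes on the `OVN` cluster router, you can run `ovn-nbctl` from an OVN-Kubernetes pod. The pod name below is a placeholder; list the pods in the `openshift-ovn-kubernetes` namespace to find the pod on the affected node:
+
[source,terminal]
----
$ oc -n openshift-ovn-kubernetes exec ovnkube-node-<suffix> -c ovnkube-controller -- ovn-nbctl lr-route-list ovn_cluster_router
----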
* Previously, a pod with an IP address in an `OVN` `localnet` network was unreachable from other pods that ran on the same node but used the default network for communication. Communication between pods on different nodes was not affected. With this release, communication between a `localnet` pod and a default network pod that run on the same node works correctly, so this issue no longer exists. (link:https://issues.redhat.com/browse/OCPBUGS-56244[OCPBUGS-56244])

[id="ocp-4-17-34-updating_{context}"]
==== Updating

To update an {product-title} 4.17 cluster to this latest release, see xref:../updating/updating_a_cluster/updating-cluster-cli.adoc#updating-cluster-cli[Updating a cluster using the CLI].
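The update can also be started directly from the CLI. For example (a sketch; confirm that your cluster's update channel offers this version first):

[source,terminal]
----
$ oc adm upgrade --to=4.17.34
----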