Commit 8667c95

Merge pull request #95582 from MirzWeiss/CNV-64591
CNV-64591: For 4.19.1-July 14th: Create release notes for Fusion Access for SAN.

1 file changed: virt/release_notes/virt-4-19-release-notes.adoc (+125 additions, -0 deletions)
// cnv-56890 - threads
* In the {product-title} web console, it is erroneously possible to define multiple CPU threads for a VM based on s390x architecture. If you define multiple CPU threads, the VM enters a `CrashLoopBackOff` state with the `qemu-kvm: S390 does not support more than 1 threads` error. (link:https://issues.redhat.com/browse/CNV-56890[CNV-56890])

[id="virt-4.19-asynch-releases_{context}"]
== Maintenance releases

Release notes for asynchronous releases of Red Hat {VirtProductName}.

[id="virt-4.19.1_{context}"]
=== 4.19.1

.New and changed features

//CNV-57758 DOC: deploying clusters with GPFS
* With the new {IBMFusionFirst}, you can now deploy VMs on a scalable, clustered file system in Red{nbsp}Hat {VirtProductName}. {FusionSAN} offers access to consolidated, block-level data storage. It presents storage devices, such as disk arrays, to the operating system as if they were direct-attached storage.
+
The {FusionSAN} Operator is available in the {product-title} OperatorHub.
+
See xref:../../virt/fusion_access_SAN/fusion-access-san-overview.adoc#about-fusion-access-san_fusion-access-san-overview[About {IBMFusionFirst}] for more information.
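+
A quick way to confirm that the Operator is visible in your cluster's catalog is to search the package manifests. This is only a sketch; the exact package name is not stated in this document, so the example filters by keyword instead:
+
[source,terminal]
----
$ oc get packagemanifests -n openshift-marketplace | grep -i fusion
----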

.Known issues

//OCPNAS-56 Failed localdisk should have its status reflected
* When a file system in {FusionSAN} has two local disks and one local disk fails, both local disks move to the `Unknown` state, with no indication of which local disk failed. (*OCPNAS-56*)
//(link:https://issues.redhat.com/browse/OCPNAS-56[OCPNAS-56])

//OCPNAS-61 Removing primary filesystem causes GPFS storage to become unusable
* When creating more than one file system for VM storage in {FusionSAN}, deleting the initial primary file system results in all of the remaining file systems becoming unusable. You cannot migrate or restart any of the VMs running on the remaining file systems, and you cannot create new VMs on the remaining file systems.
+
To determine which file system is the primary file system, run the following command:
+
[source,terminal]
----
$ oc get cso -n ibm-spectrum-scale-csi ibm-spectrum-scale-csi -o jsonpath='{.spec.clusters[*].primary.primaryFs}'
----
+
(*OCPNAS-61*)
//(link:https://issues.redhat.com/browse/OCPNAS-61[OCPNAS-61])

//OCPNAS-62 VM cannot be unpaused after disruption to GPFS storage backend
* When a disruption occurs between the worker nodes in a {FusionSAN} storage cluster and the shared LUNs they are connected to, the VMs on the storage cluster pause and cannot be unpaused, even after the service is restored. The only way to recover a VM is to restart it. (*OCPNAS-62*)
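+
For example, you can restart an affected VM with the `virtctl` client, replacing the placeholder values with your own:
+
[source,terminal]
----
$ virtctl restart <vm_name> -n <namespace>
----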
//(link:https://issues.redhat.com/browse/OCPNAS-62[OCPNAS-62])
//OCPNAS-77 MTC storage live migration fails for target GPFS/RWX access mode
351+
* Storage live migration from ODF to {FusionSAN} using MTC (v1.8.6) only works when the target access mode is specified as `RWO`. However, {FusionSAN} uses `filesystem/RWX` by default.
352+
+
353+
When you migrate from ODF to {FusionSAN} (RWO) you receive the following error in the VM logs:
354+
+
355+
[source,text]
356+
----
357+
message: 'cannot migrate VMI: PVC dv-fedora000-mig-hwtp is not shared, live migration
358+
requires that all PVCs must be shared (using ReadWriteMany access mode)'
359+
reason: DisksNotLiveMigratable
360+
----
361+
+
362+
This results in the VM being inaccessible when the worker node is not available.
363+
+
364+
(*OCPNAS-77*)
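+
To check the access mode of a VM disk PVC before attempting the migration, you can run the following command, replacing the placeholder values with your own:
+
[source,terminal]
----
$ oc get pvc <pvc_name> -n <namespace> -o jsonpath='{.spec.accessModes}'
----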
//(link:https://issues.redhat.com/browse/OCPNAS-77[OCPNAS-77])

//OCPNAS-81 Trying to create a file system with the same name as an existing one - getting an error and UI doesn't allow to change/lock
* When you create a new file system in {FusionSAN} with the same name as an existing file system, an error appears, and the *Create file system* button is stuck displaying a loading spinner. If you reload the page, it lists only the original file system. However, if you try to create another new file system, the LUNs you selected for the second file system no longer appear as available. (*OCPNAS-81*)
//(link:https://issues.redhat.com/browse/OCPNAS-81[OCPNAS-81])

//OCPNAS-110 Improve short output for filesystem resource
* If a {FusionSAN} file system is filled to its maximum capacity, the `mmhealth state` of the file system custom resource (CR) becomes `Degraded`. This is caused by the `no_disk_space_warn` event. After freeing disk space, you can use the file system again, but it keeps the `Degraded` status. (*OCPNAS-110*)
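+
To inspect the health that the file system CR reports, you can describe the resource. The resource kind and namespace shown here are assumptions and might differ in your deployment:
+
[source,terminal]
----
$ oc describe filesystem <filesystem_name> -n ibm-spectrum-scale
----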
//(link:https://issues.redhat.com/browse/OCPNAS-110[OCPNAS-110])

//OCPNAS-124 Deleting the localdisk when using multipath does not remove the partition
* When using a multipath LUN in {FusionSAN}, removing a local disk does not remove the partition. (*OCPNAS-124*)
//(link:https://issues.redhat.com/browse/OCPNAS-124[OCPNAS-124])
** As a workaround, run the following commands on one of the nodes:
+
[source,terminal]
----
$ multipath -f <device>
----
+
[source,terminal]
----
$ multipath -r
----
+
Running these commands on one of the nodes fixes all of the nodes.

//OCPNAS-126 0.0.15 UI - Used LUNs should not be available even if file system is in "Creating" state
* LUNs used to create a file system in {FusionSAN} still appear as available for use until the file system moves from the `Creating` state to the `Healthy` state. This can result in users creating an additional file system with LUNs that are already in use. After the first file system shifts to the `Healthy` state, the LUNs disappear from the second file system. (*OCPNAS-126*)
//(link:https://issues.redhat.com/browse/OCPNAS-126[OCPNAS-126])

//OCPNAS-143 Shares with existing partitions are automatically formatted
* {FusionSAN} formats disks that have existing partitions, even when those partitions are not related to {FusionSAN}. When you attempt to add a new iSCSI target with an existing partition and data, {FusionSAN} automatically formats the share without warning. (*OCPNAS-143*)
//(link:https://issues.redhat.com/browse/OCPNAS-143[OCPNAS-143])

//OCPNAS-163 UI - Deleting second file system crashes the UI
* Deleting a second file system in {FusionSAN} results in the following error:
+
[source,text]
----
Your focus-trap must have at least one container with at least one tabbable node in it at all times.
----
+
(*OCPNAS-163*)
//(link:https://issues.redhat.com/browse/OCPNAS-163[OCPNAS-163])
** As a workaround, reload the page and delete the second file system.

//OCPNAS-170 fusion access operator needs to watch the builder-dockercfg-* account and reconcile the kmm-registry-push-pull-secret secret
* If your credentials for the image registry used to install {FusionSAN} change, you must delete the `kmm-registry-push-pull-secret` pull secret in the `ibm-fusion-access` namespace. Then you must restart the `fusion-access-operator-controller-manager` pod in the `ibm-fusion-access` namespace. (*OCPNAS-170*)
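+
As a sketch, the secret can be deleted and the pod restarted by deleting it so that its controller re-creates it. The label selector shown is an assumption; you can instead delete the `fusion-access-operator-controller-manager` pod by name:
+
[source,terminal]
----
$ oc delete secret kmm-registry-push-pull-secret -n ibm-fusion-access
----
+
[source,terminal]
----
$ oc delete pod -n ibm-fusion-access -l control-plane=controller-manager
----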
//(link:https://issues.redhat.com/browse/OCPNAS-170[OCPNAS-170])

//OCPNAS-172 kmm-worker-worker-0-2-gpfs-module pod fails with 'Fatal error'
* If you change the KMM settings that trigger a rebuild while the {FusionSAN} storage cluster is running and using the kernel modules, KMM cannot unload the modules, resulting in an error. (*OCPNAS-172*)
//(link:https://issues.redhat.com/browse/OCPNAS-172[OCPNAS-172])

//OCPNAS-175 OADP backup - pvc in 'Pending' status for a few minutes.
* When backing up VMs with the OADP Data Mover on a {FusionSAN} storage cluster, the persistent volume claim (PVC) remains in the `Pending` state for a long time before shifting to the `Bound` state and the backup begins. The PVC might even remain in `Pending` until the backup times out completely. (*OCPNAS-175*)
//(link:https://issues.redhat.com/browse/OCPNAS-175[OCPNAS-175])

//OCPNAS-184 Creating a filesystem may take a long time - appears as stuck
* When creating a file system, it might take over twenty minutes for the *Status* of the new file system to change from *Creating* to *Healthy*. During that time, the *Status* appears stuck in *Creating*, and the following error message appears when you click the status:
+
[source,text]
----
Failed to create filesystem. Check the operator log for more details.
----
+
This error message is incorrect; the file system is still being created.
+
(*OCPNAS-184*)
//(link:https://issues.redhat.com/browse/OCPNAS-184[OCPNAS-184])
