Description
Is there an existing issue for this?
- I have searched the existing issues
Current Behavior
I was trying to upgrade an aarch64 node from AlmaLinux 8 to AlmaLinux 9 when I encountered an issue during the upgrade process in the booted initramfs.
While this was on aarch64, I believe it likely affects x86_64 and other architectures as well. Here's the layout of the system and the fstab for reference:
Block devices:
NAME                      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
nvme0n1                   259:0    0 894.3G  0 disk  
|-nvme0n1p1               259:1    0     1G  0 part  
| `-md127                   9:127  0  1022M  0 raid1 /boot
|-nvme0n1p2               259:2    0   600M  0 part  
| `-md125                   9:125  0 599.9M  0 raid1 /boot/efi
`-nvme0n1p3               259:3    0 892.7G  0 part  
  `-md126                   9:126  0 892.5G  0 raid1 
    |-almalinux_arm1-root 253:0    0 764.5G  0 lvm   /
    `-almalinux_arm1-swap 253:1    0   128G  0 lvm   [SWAP]
nvme1n1                   259:4    0 931.5G  0 disk  
|-nvme1n1p1               259:5    0     1G  0 part  
| `-md127                   9:127  0  1022M  0 raid1 /boot
|-nvme1n1p2               259:6    0   600M  0 part  
| `-md125                   9:125  0 599.9M  0 raid1 /boot/efi
`-nvme1n1p3               259:7    0 892.7G  0 part  
  `-md126                   9:126  0 892.5G  0 raid1 
    |-almalinux_arm1-root 253:0    0 764.5G  0 lvm   /
    `-almalinux_arm1-swap 253:1    0   128G  0 lvm   [SWAP]
fstab
/dev/mapper/almalinux_arm1-root /                       ext4    defaults        1 1
UUID=ef8fe825-99ef-4c01-aab1-96a46b642e82 /boot                   ext4    defaults        1 2
UUID=3D34-9948          /boot/efi               vfat    umask=0077,shortname=winnt 0 2
/dev/mapper/almalinux_arm1-swap none                    swap    defaults        0 0
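For context on why this layout can work at all: firmware can only read the ESP if the md superblock does not overlap the start of the FAT filesystem, which means metadata format 1.0 (superblock at the end of each member). A quick way to check, using the device names from the layout above (adapt to your system):

```shell
# Inspect the array backing /boot/efi; "Version : 1.0" means the superblock
# sits at the end of each member, so firmware sees a plain FAT filesystem.
mdadm --detail /dev/md125 | grep -i version
mdadm --examine /dev/nvme0n1p2 | grep -i version
```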
Unfortunately I don't have any error output, as I've already worked around it; however, it's likely reproducible. The problem seems to be that the system doesn't know how to reinstall GRUB, since it tries to use the RAID device instead of each member disk.
I think this might be an issue with grub-install; however, I know the AlmaLinux installer allowed this configuration (at least that's my recollection).
The workaround was to:
- Break the RAID1 for /boot/efi and reconfigure the system to use a single disk
- After the upgrade, rebuild the RAID1 manually and restore the contents of /boot/efi from a copy
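The steps above, as a rough shell sketch. Device names follow the layout in this report and will differ on other systems; this is destructive, so treat it as an outline rather than a ready-to-run script:

```shell
# Break the RAID1 holding the ESP and detach one member entirely.
umount /boot/efi
mdadm --stop /dev/md125
mdadm --zero-superblock /dev/nvme1n1p2

# Reconfigure fstab to mount one partition directly. With metadata 1.0 the
# FAT filesystem on the remaining member is intact; otherwise re-create it:
#   UUID=<vfat UUID of nvme0n1p2>  /boot/efi  vfat  umask=0077,shortname=winnt 0 2
mount /boot/efi

# ... perform the leapp upgrade ...

# Afterwards, rebuild the RAID1 and restore the ESP contents from a copy.
cp -a /boot/efi /root/efi-backup
umount /boot/efi
mdadm --create /dev/md125 --level=1 --metadata=1.0 --raid-devices=2 \
      /dev/nvme0n1p2 /dev/nvme1n1p2
mkfs.vfat /dev/md125
mount /dev/md125 /boot/efi
cp -a /root/efi-backup/. /boot/efi/
```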
Expected Behavior
I expected leapp to at least warn me before doing the upgrade and suggest a fix.
If you want, I'll see if I can replicate this inside a VM to make sure it wasn't some kind of one-off.
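A minimal sketch of what such a pre-upgrade warning could look like. The function name and message are hypothetical, not an existing leapp actor; it just scans `lsblk`-style output for an ESP mounted from a RAID1 device:

```shell
# Hypothetical pre-upgrade check: warn when the EFI System Partition
# (/boot/efi) sits on an mdadm RAID device. Reads lsblk-style output
# on stdin; names and wording are made up for illustration.
check_esp_raid() {
    if grep -q 'raid1.*/boot/efi'; then
        echo 'WARNING: /boot/efi is on an mdadm RAID device;'
        echo 'reinstalling GRUB may target the md device instead of each member disk.'
        return 1
    fi
    return 0
}

# Example run against the current system's layout:
lsblk -o NAME,TYPE,MOUNTPOINTS 2>/dev/null | check_esp_raid
```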
Steps To Reproduce
No response
Anything else?
No response
Search terms
mdadm efi