
Restored Linux VM boots into maintenance (emergency) mode on a Hyper-V host.


Problem description

After a Linux VM is restored in a Hyper-V environment (in this case, following a failed update), the restored VM consistently boots into maintenance (emergency) mode. The issue occurs regardless of the recovery point used and whether the restore targets the original or an alternate location. It applies only to Linux VMs.

Cause

The root cause is a mismatch between the disk identifiers (UUIDs) referenced in the restored VM’s /etc/fstab file and the UUIDs actually assigned to the disks after the restore. When a disk ID or UUID changes post-restore, the system cannot mount critical partitions (such as /boot) and drops into maintenance mode. This is an OS-level or filesystem-level issue, not a failure of the backup/restore process itself.
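
For context, a typical /etc/fstab entry that mounts a partition by UUID looks like the line below (the UUID shown is a placeholder, not a real value). If the identifier on the left no longer matches any device reported by blkid, that mount fails at boot:

    UUID=1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d /boot ext4 defaults 1 2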

Traceback

The restored VM enters maintenance mode after boot.

Running mount -a reports a blockdev error for a disk that is listed by UUID.

The affected disk is referenced as a SCSI device by its ID (for example, /dev/disk/by-id/scsi-...), not by a device name such as /dev/sda.

df -h shows the disks as expected, but after editing /etc/fstab and commenting out the problematic disk, the VM still fails to boot.
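
To confirm these symptoms from the emergency shell, the following checks can help (a minimal sketch; exact output varies by distribution):

    # Try to mount everything listed in /etc/fstab; failures point at stale entries:
    mount -a
    # On systemd-based distributions, review the boot log for the failed mount unit:
    journalctl -xb
    # List the block devices the kernel actually sees, with their UUIDs:
    lsblk -f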

Resolution

Validate Disk UUIDs:

  • Boot the restored VM into the emergency shell.

  • Run blkid or ls -l /dev/disk/by-id/ to list the current disk and partition UUIDs, as shown in the example below.
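
A minimal example, run from the emergency shell:

    # Print the UUID and filesystem type of each block device:
    blkid
    # Or list the persistent by-id symlinks and the device names they point to:
    ls -l /dev/disk/by-id/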

Compare with /etc/fstab:

  • Run cat /etc/fstab to view the disk entries.

  • Compare the UUID= and /dev/disk/by-id/ entries in /etc/fstab with the identifiers reported by blkid; any entry that no longer matches an existing device is the one blocking the boot (see the sketch below).
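
One way to view the two side by side (a sketch; the grep pattern simply hides comment lines):

    # Show the active (non-comment) entries in fstab:
    grep -v '^\s*#' /etc/fstab
    # Show the identifiers the devices currently report:
    blkid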

Correct /etc/fstab Entries:

  • Update any entries in /etc/fstab that reference old disk identifiers to match the new UUIDs.

  • Example:
    Change
    /dev/disk/by-id/scsi-OLD-ID-part1 /boot ext4 defaults 1 2
    to
    /dev/disk/by-id/scsi-NEW-ID-part1 /boot ext4 defaults 1 2

  • Save the changes to /etc/fstab.
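
Note that in emergency mode the root filesystem is often mounted read-only, so it may need to be remounted read-write before /etc/fstab can be saved. A minimal sketch (scsi-OLD-ID and scsi-NEW-ID are the placeholders from the example above):

    # Remount the root filesystem read-write if saving fails with a read-only error:
    mount -o remount,rw /
    # Keep a backup of the original file:
    cp /etc/fstab /etc/fstab.bak
    # Replace the stale identifier in place:
    sed -i 's|scsi-OLD-ID-part1|scsi-NEW-ID-part1|' /etc/fstab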

Verification

  • Reboot the VM.

  • The VM should boot normally if the UUIDs are correct.
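
If the VM still drops into emergency mode, rerun the mount check from the shell to find any remaining stale entries:

    # Any entry that still fails to mount will be reported here:
    mount -a
    # Then recheck the identifiers and repeat the correction step as needed:
    blkid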
