This article describes Proxmox VE configurations and behaviors that are either not supported or may result in unexpected backup and restore outcomes.
Use this as a reference when planning, troubleshooting, or validating Proxmox backup and restore operations.
Unsupported Configurations
These conditions should be treated as misconfigurations. They can lead to backup/restore failures, incorrect disk handling, or unexpected errors.
Non-standard volume IDs
Disk volumes must follow the standard Proxmox naming convention:
vm-<VMID>-disk-<DISK-NUMBER>
If disk names deviate from this format, the system may be unable to:
Correctly identify disks for backup
Handle multi-disk VMs reliably
This can lead to backup and restore failures.
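One way to catch non-conforming disk names ahead of time is to validate them against the expected pattern. The following Python sketch is illustrative only (the regex and sample volume names are assumptions, not Proxmox code):

```python
import re

# Standard Proxmox disk volume naming: vm-<VMID>-disk-<DISK-NUMBER>
STANDARD_PATTERN = re.compile(r"^vm-(\d+)-disk-(\d+)$")

def is_standard_volume_id(name: str) -> bool:
    """Return True if the volume name follows the vm-<VMID>-disk-<N> convention."""
    return STANDARD_PATTERN.match(name) is not None

# Hypothetical volume names for a VM with two disks and two renamed volumes
volumes = ["vm-100-disk-0", "vm-100-disk-1", "vm-100-data-old", "mydisk-100"]
nonstandard = [v for v in volumes if not is_standard_volume_id(v)]
print(nonstandard)  # the last two names do not match the convention
```

Any volume flagged this way should be renamed to the standard format before it is included in a backup job.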
ZFS storage naming consistency
For ZFS-backed storage, the Proxmox storage name must match the underlying ZFS pool name.
A mismatch between the configured storage name and the ZFS pool name can lead to backup failure.
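This misconfiguration can be detected before running a backup by comparing each ZFS-backed storage definition against the pools that actually exist. A minimal sketch, assuming you have already collected the storage-to-pool mapping (e.g. from /etc/pve/storage.cfg) and the pool names (e.g. from `zfs list`):

```python
def find_zfs_name_mismatches(storages: dict, zfs_pools: set) -> list:
    """Return storage names whose name differs from the backing ZFS pool,
    or whose pool does not exist. `storages` maps storage name -> pool name."""
    mismatched = []
    for storage_name, pool_name in storages.items():
        if storage_name != pool_name or pool_name not in zfs_pools:
            mismatched.append(storage_name)
    return mismatched

# Hypothetical example: storage "local-zfs" is backed by pool "rpool"
storages = {"tank": "tank", "local-zfs": "rpool"}
pools = {"tank", "rpool"}
print(find_zfs_name_mismatches(storages, pools))  # ['local-zfs']
```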
VM configuration vs. actual volume size mismatch
The disk size defined in the VM configuration must match the actual underlying volume size.
If the VM configuration and the storage layer report different sizes for the same disk, you may see:
Inconsistent or incomplete restores
Disk recognition issues on the target node
Failures during attach or resize operations
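Before backing up, you can verify that each disk's configured size matches what the storage layer reports. The sketch below assumes you have already gathered both sets of sizes (e.g. from `qm config <vmid>` and `pvesm list <storage>`); the disk names and byte values are made up for illustration:

```python
def find_size_mismatches(config_sizes: dict, actual_sizes: dict) -> list:
    """Compare disk sizes from the VM configuration against the sizes
    reported by the storage layer; return the disks that differ."""
    return [disk for disk, size in config_sizes.items()
            if actual_sizes.get(disk) != size]

# Hypothetical sizes in bytes
config_sizes = {"scsi0": 34359738368, "scsi1": 10737418240}   # 32 GiB, 10 GiB
actual_sizes = {"scsi0": 34359738368, "scsi1": 10738466816}   # scsi1 differs
print(find_size_mismatches(config_sizes, actual_sizes))  # ['scsi1']
```

Any mismatched disk should be reconciled (for example, by correcting the VM configuration) before backup or restore operations are attempted.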
LVM-Thin volume group name requirement
For LVM-Thin storage, use the default Proxmox VE volume group name, pve.
Configuring LVM-Thin with a different volume group name can cause backup failures because the expected LVM snapshot path cannot be located.
Behavioral Limitations
These behaviors are by design and should be considered when planning capacity and operational procedures.
Disk size alignment during restore
ZFS-based restores do not support disk sizes with fractional MiB values. During restore, disk sizes are automatically rounded to the nearest whole MiB.
For LVM-Thin storage, restored disk sizes are aligned to the nearest 4 MiB boundary. The restored disk may be slightly larger than the original. This is expected behavior and is required for alignment on LVM-Thin volumes.
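The alignment rules above can be expressed directly. This sketch rounds a disk size to the nearest whole MiB for ZFS and aligns it up to the next 4 MiB boundary for LVM-Thin (up-alignment is an assumption based on the note that restored disks may be slightly larger than the original):

```python
MIB = 1024 * 1024

def zfs_restore_size(size_bytes: int) -> int:
    """Round to the nearest whole MiB; fractional MiB sizes are not supported."""
    return round(size_bytes / MIB) * MIB

def lvm_thin_restore_size(size_bytes: int) -> int:
    """Align up to the next 4 MiB boundary (assumed up-alignment, since the
    restored disk may be slightly larger than the original)."""
    alignment = 4 * MIB
    return -(-size_bytes // alignment) * alignment  # ceiling division

# A hypothetical disk of 10 GiB plus 1.5 MiB
original = 10 * 1024 * MIB + MIB + 512 * 1024
print(zfs_restore_size(original) // MIB)       # 10242 (nearest whole MiB)
print(lvm_thin_restore_size(original) // MIB)  # 10244 (next 4 MiB boundary)
```

Plan target storage capacity with this small growth in mind, particularly when restoring many disks to LVM-Thin.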
Impact of replication on backups
Enabling Proxmox VE replication on VMs can interfere with backup operations. This can cause backup failures or result in the backup proxy service starting multiple times.
Backup proxy node migration
Migrating the backup proxy to a different Proxmox node is not supported. This can disrupt ongoing backup and restore operations, and can result in failed backups or restores due to loss of node-specific context.
Snapshot cleanup after backup proxy restart
If the backup proxy service is restarted while snapshots exist for ongoing or recent backups:
Previously created VM snapshots are not automatically deleted.
New backups create new snapshots, while older snapshots may remain in a stale state.
Manual cleanup of the stale snapshots may be required to avoid unnecessary storage consumption and potential management overhead.
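When manual cleanup is needed, stale backup snapshots can be identified from a snapshot listing by name, for example one obtained with `qm listsnapshot <vmid>`. The sketch below assumes backup-related snapshots share a known name prefix (the `vbk-` prefix is purely hypothetical; check what your backup software actually uses) and that only the most recent backup snapshot per VM should be kept:

```python
def find_stale_snapshots(snapshots: list, prefix: str) -> list:
    """Given one VM's snapshot names ordered oldest first, return the
    backup-related snapshots that are no longer the most recent one.
    Snapshots without the backup prefix (e.g. manual ones) are left alone."""
    backup_snaps = [s for s in snapshots if s.startswith(prefix)]
    return backup_snaps[:-1]  # keep only the newest backup snapshot

# Hypothetical listing; the "vbk-" prefix is an assumption for illustration
snaps = ["vbk-2024-01-01", "manual-before-upgrade",
         "vbk-2024-01-02", "vbk-2024-01-03"]
print(find_stale_snapshots(snaps, "vbk-"))  # ['vbk-2024-01-01', 'vbk-2024-01-02']
```

Identified stale snapshots can then be removed with `qm delsnapshot <vmid> <snapname>`.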
Partial restoration of VM properties
Some VM-level properties are not preserved and revert to their default values after a restore. These properties are:
Start at boot
ACPI support
KVM hardware virtualization
Freeze CPU at startup
Protection flag
SPICE enhancements
VM state storage
AMD SEV
Intel TDX
