Restore considerations for VMware virtual machines

Enterprise Workloads Editions: ✅ Business | ✅ Enterprise | ✅ Elite

Considerations for restoring a virtual machine

  • Hot snapshots reside on CloudCache for the period that you specified when configuring CloudCache.

  • If you are a group administrator, you can only restore data to a virtual machine that belongs to an administrative group that you manage. Cloud administrators and data protection officers can restore virtual machines across groups.

  • If the network connection fails during a restore, backup proxies attempt to reconnect to the Druva Cloud. After connectivity is restored, backup proxies resume restores from the state in which they were interrupted.

  • If you restart or reboot the backup proxy during a restore, the restore operation changes to the scheduled state and resumes after the backup proxy is up and running.

  • When creating a virtual machine, you can add a Virtual Trusted Platform Module (vTPM) to provide enhanced security to the guest operating system. You must create a key provider before you can add a vTPM. Druva can back up these vTPM-enabled VMs; however, the vTPM settings are not copied to the recovery points during backup. Therefore, whenever you restore such a VM, you must manually add the vTPM settings afterward.

  • Although Druva backs up VMX files along with the VMDK files, you can restore only the VMDK files.

  • The restore points follow the time zone of the backup proxy pool.

  • If you choose to restore a virtual disk, Druva creates a new virtual machine with a minimal configuration and associates the selected VMDK files with it.

  • Druva supports restoring virtual machines to a different ESXi hypervisor as well as to the source hypervisor associated with the vCenter Server on which you installed the backup proxy.

  • A Thick Provisioned Lazy Zeroed disk is restored as a Thick Provisioned Lazy Zeroed disk.

  • A Thick Provisioned Eager Zeroed disk is restored as a Thick Provisioned Eager Zeroed disk.

  • Thin provisioned VMDK files are restored as Thin disks.

  • CBT status remains unchanged if a virtual machine is restored to the original location. If a virtual machine is restored to an alternate location, CBT is disabled.

  • Druva supports restore of RDM virtual mode disks (vRDM) as VMDK files.

  • If a virtual machine is associated with disks that are configured in different modes, for example, Independent Persistent, Druva restores only those disks for which the mode is supported.

  • When a virtual machine is restored to the original location, Druva restores it with its original network configuration. However, for a virtual disk restore or a restore to an alternate location, Druva restores the virtual machine with the default network configuration of the ESXi host where it is restored.

  • If a restore fails, the newly created virtual machine is deleted.

  • After a restore, the virtual machine is always powered off. You must manually power on the virtual machine.

  • If the client machine is restarted during an ongoing restore or a scheduled backup, the job requests may not be resent to the client machine.

  • In case of virtual machine restore to the original location, because a backup and a restore cannot run in parallel on the same virtual machine, you can cancel the ongoing backup and then trigger the restore request.

  • In case of virtual machine restore to the original location, two restore requests cannot run in parallel on the same virtual machine.

  • In case of virtual machine restore to the original location, if a backup is triggered while a restore is in progress, the backup will be queued until the restore is complete.

  • In case of virtual machine restore to an alternate location, if a backup is triggered while a restore is in progress and the backup is assigned to the same backup proxy on which the restore is running, the backup will be queued until the restore is complete.

  • VMware introduced support for NVMe controllers and NVMe disks starting with vCenter version 6.5 and hardware version 13. With version 6.0.3-178504 of the VMware Backup proxy, you can back up and restore virtual machines with NVMe disks or controllers using the HotAdd transport mode. Backups of virtual machines with NVMe disks or controllers fall back to the NBDSSL mode if the VMware Backup proxy does not have an NVMe controller.
    The following list explains the steps you need to perform to protect virtual machines with NVMe disks or controllers.

    • Existing backup proxies at versions lower than 6.0.3-178504:

      Upgrade the backup proxy from the Management Console.


      📝 Note


      By default, when existing VMware backup proxies are upgraded to the latest agent version, HotAdd transport mode backups are disabled. To enable them, add the ENABLE_NVME_SUPPORT flag to the Phoenix.cfg file and set it to True, and then perform the following steps (a scripted sketch follows this list):

      1. Shut down the VM.

      2. Upgrade the VM's hardware version to 13 or higher.

      3. Start the VM.


    • VMware Backup proxy version 6.0.3-178504:

      You can back up and restore virtual machines with NVMe disks or controllers. Restores from the new snapshots created after the agent upgrade are subject to some Limitations.

      For restores of virtual machines with NVMe controllers or disks from older snapshots, ensure that the parameter ENABLE_NVME_CONVERSION = True is set in the Phoenix.cfg file on the VMware Backup proxy. These restores are subject to some Limitations.

    • New Backup proxy deployments and additional Backup proxy deployments:

      VMware backup proxies version 6.0.3-178504 or higher support backups and restores of virtual machines with NVMe disks or controllers using the HotAdd transport mode.
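
The hardware-upgrade steps in the note above can also be scripted against vCenter. The following is a minimal sketch using govc (the govmomi CLI), which is not part of the Druva tooling and is only one of several options; the VM name is a placeholder, and you should verify the command syntax against your govc version.

    # Sketch: upgrade a VM's hardware version to 13 (steps 1-3 in the note above).
    # Assumes govc is installed and GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD are exported.
    VM="my-vm"                              # placeholder VM name

    govc vm.power -off "$VM"                # 1. Shut down the VM
    govc vm.upgrade -version=13 -vm "$VM"   # 2. Upgrade the VM's hardware version to 13
    govc vm.power -on "$VM"                 # 3. Start the VM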


📝 Note

To add the parameter in the:

  • Phoenix.cfg file (for VMware backup proxy versions prior to 7.0.0), stop the Phoenix service, add the parameter, and then restart the Phoenix service.

  • VMwareConfig.yaml file (for VMware backup proxy version 7.0.0 and later) at /etc/Druva/EnterpriseWorkloads/vmware/, stop the Druva-EnterpriseWorkloads service, add the parameter, and then restart the Druva-EnterpriseWorkloads service.

You must follow this procedure on all backup proxies in the backup pool used for the restore; a sketch of the procedure follows.
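
A minimal shell sketch of the procedure, assuming a systemd-managed proxy; the service unit names are taken from the note above but may differ on your deployment, and the Phoenix.cfg path is a placeholder:

    # Sketch: add a configuration parameter and restart the corresponding service.
    # Verify the service unit names and file locations on your proxy before running.

    # Backup proxy versions prior to 7.0.0 (Phoenix.cfg):
    systemctl stop Phoenix
    echo 'ENABLE_NVME_SUPPORT = True' >> /path/to/Phoenix.cfg    # placeholder path
    systemctl start Phoenix

    # Backup proxy version 7.0.0 and later (VMwareConfig.yaml):
    systemctl stop Druva-EnterpriseWorkloads
    vi /etc/Druva/EnterpriseWorkloads/vmware/VMwareConfig.yaml   # add the parameter in YAML form
    systemctl start Druva-EnterpriseWorkloads

Repeat this on every backup proxy in the pool before triggering the restore.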


Considerations for full virtual machine restore to the original location

  • If the virtual machine has new VMDKs attached at the time of restore that were not backed up, all those VMDKs will be detached from the virtual machine.

  • If the virtual machine has independent disks attached, they will be detached after the restore to the original location.

  • If backed-up disks were detached from the virtual machine, they will be renamed to ‘<original_vmdk_name>_phoenix_<timestamp>.vmdk’.

  • If the old controllers were detached from the target virtual machine, they will be reattached after the restore. However, if the target virtual machine has new controllers attached, those controllers remain unchanged.

  • If the type of a controller has been changed, it remains the same after the restore.

  • All user-created snapshots will be deleted after the restore to the original location.

  • For vRDM disks:

    • If the backup proxy is registered with a vCenter Server and the vRDM disk is detached at the time of restore, a thick disk named ‘<original_vmdk_name>_phoenix_<timestamp>.vmdk’ will be created, and the vRDM disk will remain unchanged.

    • If the backup proxy is registered with a standalone ESXi host, the detached vRDM disk will be renamed to ‘<original_vmdk_name>_phoenix_<timestamp>.vmdk’, and a new thick disk will be created in place of the detached vRDM disk.

Note: Restore to the original location is supported for virtual machines that have vRDM disks. If the vRDM disks are not detached before the restore, they are restored as vRDM disks.

  • If pRDM disks have been added to the virtual machine, they will not be detached after the restore.

  • The CBT state of the virtual machine will not change.

  • Other devices of the virtual machine such as memory, CPUs, and CDROM device are not restored. Only the data on the virtual machine is restored.

  • The restore request to the original location is queued if there is an active backup running for the same agent on the same proxy.

Considerations for file and folder restores

  • If you have CloudCache in your setup, make sure it is connected to the backup proxy.

  • The original drive names for Windows partitions and mount points for Linux partitions will not be preserved. The partitions will be shown as volume0, volume1, and so on.

  • Partitions with a corrupted file system, partition table, or incorrect partition boundaries cannot be browsed or restored.

  • Symbolic links will not be recovered.

  • Sparse files on original disks will be recovered as Thick files.

  • On Linux partitions, the /dev, /var/spool, /var/run, /proc, /tmp, and /sys folders will be excluded if selected in the restore set.

  • On Windows partitions, the CryptnetUrlCache folder will be excluded if selected in the restore set.

  • Encrypted volumes and files are not supported for File-level restore.

  • File-level restore may fail for guest Windows virtual machines if the destination folder name contains consecutive special characters, for example, “A%%B”.

  • Druva does not store guest OS credentials.

  • If some files are not restored, the progress log will show the number of missed files, and the detailed log will list their names.

  • File-level restore is not supported on a spanned volume if the volume is created using two disks and one of the disks is excluded from the backup. To browse and restore from such volumes, a staging VM is required. For more information, see Required prerequisites for File Level Restores (FLR) for GPT Dynamic Disks. For more information on Dynamic GPT FLR support, see the Support Matrix.

  • To restore files from Dynamic GPT disk partitions or volumes, you need a staging VM.

Considerations for restore of MS SQL database from application-aware backups

  1. Not supported for VMware Cloud (VMC) because VMware Cloud does not support NFS datastores.

  2. You cannot use the same staging or target virtual machine for multiple restore jobs. The next restore job is allowed only after the previous job finishes.

  3. You can cancel a restore job until the files are downloaded and before they are attached to the target database.

  4. MS SQL database restores from VMs with NVMe controllers or disks are not supported. A full VM restore of a VM configured with an app-aware policy will restore such databases as part of the VM restore.

  5. If the VMware client service restarts (backup proxy restart), the cleanup behavior is as follows:

    1. The NFS datastore attached to the host is removed.

    2. Disks attached to the staging VM are removed.

    3. Processes running inside the staging and target VMs are not cleaned up.

    4. Files (guestossvc_restore.exe and PhoenixSQLGuestPluginRestore.exe) injected into the staging and target virtual machines are not cleaned up.

    5. DB files copied to the target virtual machine before the service restart are not cleaned up.

  6. If the restore job fails, you may need to manually clean up the files on the staging and target virtual machines at the following location:
    %ProgramData%\Phoenix\VMWARE\<jobID>

  7. The job summary will display default values; for job details, review the progress logs.

  8. During a restore, when the disks are attached to the staging virtual machine, all the disks are brought online. If one or more disks do not come online, Druva still proceeds with the restore; the restore job fails only if the data resides on a disk that failed to come online.

  9. It is recommended that a full backup job is not running on the target virtual machine during a restore, as this may at times cause restore failures.

  10. For full VM restores, it is recommended that you upgrade the backup proxy to version 4.9.2 or higher.
