Problem description
When decommissioning a Phoenix Cloud Cache, the process can sometimes stall. This typically happens because a required component has been disconnected, removed, or lacks the necessary resources to finish the job.
The decommissioning process follows several key stages:
1. Unmapping: Backup sets are unmapped from the Cloud Cache.
2. Halting Operations: All backup and restore activities using the cache are stopped.
3. Final Sync: Any unsynced data remaining in the cache is flushed to the Phoenix Cloud. This is a critical step.
4. Data Deletion: Data blocks are removed from the local Cache Store.
5. Cleanup: The Cloud Cache is removed from the Phoenix UI and database.
If the process is stuck, it's likely due to an issue in one of these stages. Below are the five common causes and their solutions.
Cause 1: Cloud Cache is Disconnected
The decommissioning process requires a constant, stable connection between your Cloud Cache and the Phoenix Cloud to complete. If the connection drops, the process will pause indefinitely.
Symptom: Logs may show errors like: Failed to connect to client. (#100000011)
Solution: Ensure the Cloud Cache server remains online and connected to the network until the decommissioning is fully complete in the Phoenix UI. Resolve any connectivity issues to allow the process to resume.
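If you want to confirm basic reachability from the Cloud Cache server while waiting for the process to resume, a minimal check such as the one below can help. This is only a sketch: the hostname shown is a placeholder, and you should substitute the Phoenix Cloud endpoint your Cloud Cache is actually configured to use (port 443 is assumed for HTTPS traffic).

```python
import socket

# Placeholder hostname (an assumption) -- substitute the Phoenix Cloud
# endpoint your Cloud Cache is configured to use.
PHOENIX_CLOUD_HOST = "cloud.example.com"
PHOENIX_CLOUD_PORT = 443  # HTTPS is assumed for Phoenix Cloud traffic


def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    ok = can_reach(PHOENIX_CLOUD_HOST, PHOENIX_CLOUD_PORT)
    print(f"{PHOENIX_CLOUD_HOST}:{PHOENIX_CLOUD_PORT} "
          f"{'is reachable' if ok else 'is NOT reachable'}")
```

If the endpoint is not reachable, fix the firewall, proxy, or DNS issue first; the decommissioning process should continue once connectivity is restored.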
Cause 2: Insufficient Bandwidth or Sync Schedule
The final sync (Stage 3) can take a significant amount of time, especially if there is a large volume of unsynced data. If the configured network bandwidth or the sync schedule window is too small, the process may appear stuck when it is actually just running very slowly.
Solution: To accelerate the process, temporarily edit the Cloud Cache settings:
Set the synchronization schedule to run 24 hours a day, 7 days a week.
Set the network bandwidth to the "Max Available Bandwidth." This ensures the final data sync completes as quickly as possible.
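To set expectations for how long the final sync might run, a rough back-of-the-envelope estimate can be useful. The sketch below is illustrative only; the figures are assumptions, not values reported by Phoenix, and real throughput will vary with protocol overhead and other load on the link.

```python
def estimated_sync_hours(unsynced_gb: float, bandwidth_mbps: float) -> float:
    """Rough hours needed to flush `unsynced_gb` of data at `bandwidth_mbps`.

    Treats 1 GB as 8,000 megabits and ignores protocol overhead, so the
    result is a lower bound rather than a precise prediction.
    """
    total_megabits = unsynced_gb * 8_000
    return total_megabits / bandwidth_mbps / 3_600


# Example (illustrative numbers): 500 GB left to sync over a 100 Mbps link
# works out to roughly 11 hours of continuous syncing.
print(f"{estimated_sync_hours(500, 100):.1f} hours")
```

If the estimate runs into many hours or days, the process is probably slow rather than stuck, and widening the sync window and bandwidth as described above is the right fix.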
Cause 3: Cache Store is Inaccessible
The process needs to access the local PhoenixCacheStore folder to remove data blocks (Stage 4). If the disk containing this folder has been formatted, has crashed, or is otherwise unavailable, the process cannot continue.
Symptom: Logs may show a critical error like: Cache store folder E:\PhoenixCacheStore does not exist on file system.
Solution: Do not format or decommission the storage volume where the Cache Store resides until the process is finished. If the drive has already failed or been formatted, you will need to contact Support for assistance.
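Before opening a ticket, it can be worth confirming whether the Cache Store folder is actually present and accessible from the Cloud Cache server. The following is a minimal sketch; the path is taken from the sample error above and should be adjusted to your own Cache Store location.

```python
import os
from pathlib import Path

# Path taken from the sample error above; adjust to your Cache Store location.
CACHE_STORE = Path(r"E:\PhoenixCacheStore")

if not CACHE_STORE.is_dir():
    print(f"Cache store folder {CACHE_STORE} does not exist on the file system.")
elif not os.access(CACHE_STORE, os.R_OK | os.W_OK):
    print(f"Cache store folder {CACHE_STORE} exists but is not readable/writable.")
else:
    print(f"Cache store folder {CACHE_STORE} is present and accessible.")
```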
Cause 4: Mapped Storage No Longer Exists
If the underlying storage that was mapped to the Cloud Cache is deleted before decommissioning is complete, any unsynced data becomes permanently stranded. The process will be stuck trying to sync data from a source that no longer exists.
Solution: This scenario requires manual intervention. Please contact Support to help troubleshoot and resolve the issue.
Cause 5: Cloud Cache Server/VM Deleted Prematurely
Decommissioning will fail if the Cloud Cache server or virtual machine is deleted from your environment before the process is initiated from the Phoenix console. The cloud service has no local component to communicate with to perform the necessary cleanup steps.
Solution: Always follow the correct procedure to avoid data loss or orphaned entries in the system:
1. Ensure all pending data has been synced from the Cloud Cache.
2. Initiate the decommissioning process from the Phoenix console.
3. Wait for the process to show as complete in the UI.
4. Only then should you delete the Cloud Cache server/VM from your environment.
If you have already deleted the server, contact Support for the next steps.