Steps to perform manual restore from one RAC to another RAC

Overview

This article guides customers through the process of performing a RAC-to-RAC manual restore, which is particularly useful when a redirected restore approach is required.

Procedure

  • Begin by restoring the database to standalone ASM storage on Node-1, selecting the alternate restore location option in the automated restore process. The steps for the automated restore are described below.

  • To execute this automated restore to standalone ASM, the agent must first be upgraded to version 6.1.3-466835.

  • Our chosen destination target RAC is Sayali-n1.local.domain, which corresponds to Node-1 within the RAC cluster.

    [Screenshot: Picture1.png]
  • The Oracle Home and Oracle Base directories were obtained from the destination host as follows.

    [oracle@sayali-n1 ~]$ echo $ORACLE_HOME
    /u01/app/oracle/product/19c/db_1
    [oracle@sayali-n1 ~]$ echo $ORACLE_BASE
    /u01/app/oracle
  • Under the other server parameters, we can specify ASM location details so that the data is restored directly into the ASM location.

    db_create_file_dest=+DATA_DISK
    log_archive_dest=+DATA_DISK/Orc
  • With the above parameters, database files are stored in +DATA_DISK and the archive destination is set to +DATA_DISK/Orc.

  • If the specified destinations do not exist in ASM, the restore will fail with the following error, so it is crucial to provide accurate destination locations:

    channel ch2: reading from backup piece Phx_DN-ORCL_DBID-1683780889_DT-20240321-143731_612m8o8m_1_1
    channel ch0: ORA-19870: error while restoring backup piece Phx_DN-ORCL_DBID-1683780889_DT-20240321-143731_5v2m8o8c_1_1
    ORA-19504: failed to create file "+DATA_DISK/orc/1_2899_1162922489.dbf"
    ORA-17502: ksfdcre:4 Failed to create file +DATA_DISK/orc/1_2899_1162922489.dbf
    ORA-15173: entry 'orc' does not exist in directory '/'
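  • To avoid this error, the target directory can be created in ASM ahead of time with asmcmd. The sketch below assumes the +DATA_DISK disk group and the orc directory used in this example; substitute your own disk group and directory names.

    [oracle@sayali-n1 ~]$ asmcmd
    ASMCMD> mkdir +DATA_DISK/orc
    ASMCMD> ls +DATA_DISK/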
  • Once the database name is provided, click Next to initiate the restore.

    [Screenshot: Picture2.png]
  • After clicking Next, a restore pre-check pops up to help identify any potential environmental issues. In the screenshot below, no errors are visible, indicating that everything is in order.

    [Screenshot: Picture3.png]
    [Screenshot: Picture4.png]
  • The restore process will commence, and you can monitor the progress on the Jobs page.

  • Once the restore process is finished, the status of the Restore Job should turn green, indicating successful completion, as depicted below.

    [Screenshot: Picture5.png]
  • To verify the restored database files, log in to Node-1 and navigate to the ASM location shown below:

[oracle@sayali-n1 admin]$ asmcmd
ASMCMD> ls
DATA_DISK/
OCR_DATA/
ASMCMD> cd DATA_DISK/
ASMCMD> ls -lrt
Type Redund Striped Time Sys Name
Y DB_UNKNOWN/
N DBfile/
N SNRAC/
N orc/
DATAFILE UNPROT COARSE MAR 21 22:00:00 N feb5table1_data.dbf => +DATA_DISK/orc/DATAFILE/FEB5TBS1.1340.1164202625
DATAFILE UNPROT COARSE MAR 21 22:00:00 N feb5table2_data.dbf => +DATA_DISK/orc/DATAFILE/FEB5TBS2.1337.1164202629
DATAFILE UNPROT COARSE MAR 21 22:00:00 N o1_mf_sysaux_lr2s3qwp_.dbf => +DATA_DISK/orc/DATAFILE/SYSAUX.1350.1164202597
DATAFILE UNPROT COARSE MAR 21 22:00:00 N o1_mf_system_lr2s20l0_.dbf => +DATA_DISK/orc/DATAFILE/SYSTEM.1354.1164202591
DATAFILE UNPROT COARSE MAR 21 22:00:00 N o1_mf_undotbs1_lr2s4k50_.dbf => +DATA_DISK/orc/DATAFILE/UNDOTBS1.1360.1164202585
DATAFILE UNPROT COARSE MAR 21 22:00:00 N o1_mf_users_lr2s4l9h_.dbf => +DATA_DISK/orc/DATAFILE/USERS.1363.1164202585
DATAFILE UNPROT COARSE MAR 21 22:00:00 N table11_data.dbf => +DATA_DISK/orc/DATAFILE/HCT11.1351.1164202597
DATAFILE UNPROT COARSE MAR 21 22:00:00 N table1_data.dbf => +DATA_DISK/orc/DATAFILE/HCT1.1365.1164202585
DATAFILE UNPROT COARSE MAR 21 22:00:00 N table3_data.dbf => +DATA_DISK/orc/DATAFILE/HCT3.1368.1164202585
DATAFILE UNPROT COARSE MAR 21 22:00:00 N table5_data.dbf => +DATA_DISK/orc/DATAFILE/HCT5.1356.1164202591
DATAFILE UNPROT COARSE MAR 21 22:00:00 N table6_data.dbf => +DATA_DISK/orc/DATAFILE/HCT6.1358.1164202591
DATAFILE UNPROT COARSE MAR 21 22:00:00 N tablee2_data.dbf => +DATA_DISK/orc/DATAFILE/TBSPP2.1347.1164202607
DATAFILE UNPROT COARSE MAR 21 22:00:00 N tablee3_data.dbf => +DATA_DISK/orc/DATAFILE/TBSPP3.1345.1164202611
DATAFILE UNPROT COARSE MAR 21 22:00:00 N tablee4_data.dbf => +DATA_DISK/orc/DATAFILE/TBSPP4.1344.1164202615
DATAFILE UNPROT COARSE MAR 21 22:00:00 N tablee5_data.dbf => +DATA_DISK/orc/DATAFILE/TBSPP5.1341.1164202625
ASMCMD>
  • Here we can verify that the restored database's processes are running:

[oracle@sayali-n1 ~]$ ps -ef | grep -i pmon
oracle 10311 1 0 Mar21 ? 00:00:06 ora_pmon_MHRACDB1
oracle 13167 1 0 2023 ? 00:21:31 asm_pmon_+ASM1
oracle 16510 1 0 Mar21 ? 00:00:03 ora_pmon_snrac1
oracle 26985 1 0 Mar21 ? 00:00:00 ora_pmon_orcl
oracle 31090 30725 0 00:06 pts/1 00:00:00 grep --color=auto -i pmon
[oracle@sayali-n1 ~]$
[oracle@sayali-n1 ~]$ export ORACLE_SID=orcl
[oracle@sayali-n1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Mar 22 00:12:37 2024
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL> select name, open_mode from v$database;
NAME OPEN_MODE
--------- --------------------
ORCL READ WRITE
  • Upon completion of the restore process, the database is mounted using the SPFILE.

  • To add the database to the cluster, a PFILE is required to create instances on both Node-1 and Node-2.

  • To find the SPFILE location, log in to SQL*Plus and run the following query:

    SELECT value FROM v$parameter WHERE name = 'spfile';
    /u01/app/oracle/product/19c/db_1/dbs/initORCL.ora
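  • Alternatively, the SQL*Plus SHOW PARAMETER command can be used as a quick cross-check of the same value:

    SQL> SHOW PARAMETER spfile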
  • After obtaining the location from the above query, convert the SPFILE to a PFILE using the following command in SQL*Plus:

    SQL> CREATE PFILE='/u01/app/oracle/product/19c/db_1/dbs/initORCL.ora' FROM SPFILE;
  • The above command creates a PFILE named initORCL.ora in that location, which you can verify at the OS level:

    [oracle@sayali-n1 dbs]$ ls -lrt
    total 197800
    -rw-r--r--. 1 oracle oinstall 3079 May 14 2015 init.ora
    -rw-rw----. 1 oracle oinstall 1544 Mar 21 15:28 hc_snrac1.dat
    -rw-r-----. 1 oracle oinstall 1536 Mar 22 00:00 spfileorcl.ora
    -rw-r--r--. 1 oracle oinstall 395 Mar 22 00:25 initORCL.ora
    -rw-rw----. 1 oracle oinstall 1544 Mar 22 00:28 hc_orcl.dat
    -rw-r-----. 1 oracle oinstall 2097152 Mar 22 00:31 id_snrac1.dat
    -rw-r-----. 1 oracle oinstall 2097152 Mar 22 00:31 id_MHRACDB1.dat
  • After creating the PFILE, you can start the database using it. To do so, first shut down the database.

    SQL> SHUTDOWN IMMEDIATE;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
  • Then start the database as shown below.

    SQL> STARTUP PFILE='/u01/app/oracle/product/19c/db_1/dbs/initORCL.ora';
    ORACLE instance started.
    Total System Global Area  432009920 bytes
    Fixed Size                  8897216 bytes
    Variable Size             364904448 bytes
    Database Buffers           50331648 bytes
    Redo Buffers                7876608 bytes
    Database mounted.
    Database opened.
    SQL> select name, open_mode from v$database;
    NAME      OPEN_MODE
    --------- --------------------
    ORCL      READ WRITE
    SQL>
  • The CLUSTER_DATABASE=true parameter must be set before a database can be started in cluster mode.

  • The CLUSTER_DATABASE_INSTANCES parameter specifies the number of instances configured as part of the cluster database.

  • You should set this parameter value to the number of instances in your Real Application Cluster.

  • The INSTANCE_NUMBER is a unique number that maps an instance to the database.
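  • As an illustration only (the actual values used in this test restore follow in the next bullet), a minimal set of RAC-specific entries for a two-node cluster would look like the sketch below; the instance names, numbers, threads, and undo tablespaces shown here are assumptions and must match your own environment.

    *.cluster_database=true
    *.cluster_database_instances=2
    ORCL1.instance_number=1
    ORCL2.instance_number=2
    ORCL1.thread=1
    ORCL2.thread=2
    ORCL1.undo_tablespace='UNDOTBS1'
    ORCL2.undo_tablespace='UNDOTBS2'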

  • Below are the parameters used for the test restore. Please modify them according to your specific requirements.

    ORCL1.__data_transfer_cache_size=0
    ORCL2.__data_transfer_cache_size=0
    ORCL1.__db_cache_size=1728053248
    ORCL2.__db_cache_size=1811939328
    ORCL1.__inmemory_ext_roarea=0
    ORCL2.__inmemory_ext_roarea=0
    ORCL1.__inmemory_ext_rwarea=0
    ORCL2.__inmemory_ext_rwarea=0
    ORCL1.__java_pool_size=0
    ORCL2.__java_pool_size=0
    ORCL1.__large_pool_size=33554432
    ORCL2.__large_pool_size=33554432
    ORCL1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    ORCL2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    ORCL1.__pga_aggregate_target=872415232
    ORCL2.__pga_aggregate_target=872415232
    ORCL1.__sga_target=2600468480
    ORCL2.__sga_target=2600468480
    ORCL1.__shared_io_pool_size=134217728
    ORCL2.__shared_io_pool_size=134217728
    ORCL2.__shared_pool_size=603979776
    ORCL1.__shared_pool_size=687865856
    ORCL1.__streams_pool_size=0
    ORCL2.__streams_pool_size=0
    ORCL1.__unified_pga_pool_size=0
    ORCL2.__unified_pga_pool_size=0
    *.audit_file_dest='/u01/app/oracle/admin/ORCL/adump'
    *.audit_trail='db'
    *.cluster_database=true
    *.compatible='19.0.0'
    *.control_files='+DATA_DISK/ORCL/CONTROLFILE/current.1407.1164239855'
    *.db_block_size=8192
    *.db_create_file_dest='+OCR_DATA'
    *.db_name='ORCL'
    *.diagnostic_dest='/u01/app/oracle'
    family:dw_helper.instance_mode='read-only'
    ORCL2.instance_number=5
    ORCL1.instance_number=4
    *.local_listener='-oraagent-dummy-'
    *.nls_language='AMERICAN'
    *.nls_territory='AMERICA'
    *.open_cursors=300
    *.pga_aggregate_target=824m
    *.processes=640
    *.remote_login_passwordfile='exclusive'
    *.sga_target=2469m
    ORCL2.thread=2
    ORCL1.thread=1
    ORCL2.undo_tablespace='UNDOTBS2'
    ORCL1.undo_tablespace='UNDOTBS1'

  • Now we can edit the initORCL.ora with the above parameters.

  • Create a PFILE for each of the two instances on the two nodes, as shown below.

    [oracle@sayali-n1 dbs]$ mv initORCL.ora initORCL1.ora
    [oracle@sayali-n2 ~]$ mv initORCL.ora initORCL2.ora
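  • Note that initORCL.ora must already be present on Node-2 before it can be renamed there. If the file was created only on Node-1, it can be copied over first; the example below is a sketch that assumes the same $ORACLE_HOME/dbs path exists on both nodes.

    [oracle@sayali-n1 dbs]$ scp initORCL.ora oracle@sayali-n2:/u01/app/oracle/product/19c/db_1/dbs/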
  • Now shut down the database.

    SQL> shutdown
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
  • Now set the environment variables on both nodes.
    On Node-1

    Last login: Fri Mar 22 00:49:55 IST 2024
    [oracle@sayali-n1 dbs]$ export ORACLE_SID=ORCL1
    [oracle@sayali-n1 dbs]$ export ORACLE_HOME=/u01/app/oracle/product/19c/db_1


    On Node-2

    [oracle@sayali-n2 ~]$ export ORACLE_SID=ORCL2
    [oracle@sayali-n2 ~]$ export ORACLE_HOME=/u01/app/oracle/product/19c/db_1
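  • These exports apply only to the current shell session. Optionally, to make them persist across logins, they can be appended to the oracle user's ~/.bash_profile on each node (shown here for Node-1; this step is an assumption about your shell setup and is not part of the captured procedure).

    [oracle@sayali-n1 ~]$ echo 'export ORACLE_SID=ORCL1' >> ~/.bash_profile
    [oracle@sayali-n1 ~]$ echo 'export ORACLE_HOME=/u01/app/oracle/product/19c/db_1' >> ~/.bash_profile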
  • Now perform a startup for instance 1 (ORCL1) on Node-1.

    [oracle@sayali-n1 ORCL]$ sqlplus / as sysdba

    SQL*Plus: Release 19.0.0.0.0 - Production on Fri Mar 22 02:16:37 2024
    Version 19.3.0.0.0

    Copyright (c) 1982, 2019, Oracle.  All rights reserved.

    Connected to an idle instance.

    SQL> startup
    ORACLE instance started.
    Total System Global Area 2600467784 bytes
    Fixed Size 8899912 bytes
    Variable Size 721420288 bytes
    Database Buffers 1862270976 bytes
    Redo Buffers 7876608 bytes
    Database mounted.
    Database opened.
    [oracle@sayali-n1 ~]$ sqlplus / as sysdba
    SQL*Plus: Release 19.0.0.0.0 - Production on Fri Mar 22 04:09:19 2024
    Version 19.3.0.0.0
    Copyright (c) 1982, 2019, Oracle. All rights reserved.
    Connected to:
    Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
    Version 19.3.0.0.0
    SQL> select instance_name from v$instance;
    INSTANCE_NAME
    ----------------
    ORCL1
    SQL> select name, open_mode from v$database;
    NAME OPEN_MODE
    --------- --------------------
    ORCL READ WRITE
  • Now perform the same steps above on Node-2 for instance 2 (ORCL2).
    SQL> startup
    ORACLE instance started.
    Total System Global Area 2600467784 bytes
    Fixed Size 8899912 bytes
    Variable Size 637534208 bytes
    Database Buffers 1946157056 bytes
    Redo Buffers 7876608 bytes
    Database mounted.
    Database opened.
    SQL> select name, open_mode from v$database;
    NAME OPEN_MODE
    --------- --------------------
    ORCL READ WRITE
    SQL> select instance_name from v$instance;
    INSTANCE_NAME
    ----------------
    ORCL2
  • Now register the RAC instances from Node-1 as shown below.
    [oracle@sayali-n1 ~]$ srvctl add database -d ORCL -o '/u01/app/oracle/product/19c/db_1'
    [oracle@sayali-n1 ~]$ srvctl add instance -d ORCL -i ORCL1 -n sayali-n1
    [oracle@sayali-n1 ~]$ srvctl add instance -d ORCL -i ORCL2 -n sayali-n2
    [oracle@sayali-n1 ~]$ srvctl config database -d ORCL
    [oracle@sayali-n1 ~]$ srvctl status database -d ORCL
    Instance ORCL1 is running on node sayali-n1
    Instance ORCL2 is running on node sayali-n2
    [oracle@sayali-n1 ~]$
    [oracle@sayali-n2 ~]$ srvctl status database -d ORCL
    Instance ORCL1 is running on node sayali-n1
    Instance ORCL2 is running on node sayali-n2
    [oracle@sayali-n2 ~]$

  • We can see that the restored database ORCL is running on both nodes.
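  • As an additional check, the GV$ views can be queried from either node to confirm that both instances are open. The query below is a suggested verification step and was not captured from this environment.

    SQL> SELECT inst_id, instance_name, status FROM gv$instance;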
