Creating a Physical Standby Database Using RMAN Restore Database From Service (Doc ID 2283978.1)
In this Document
  Goal
  Solution
    Steps to Create a Physical Standby Database using “RESTORE DATABASE FROM SERVICE”
      PRIMARY DATABASE
        Put primary database in forced logging mode
        Create Standby Redo Logs
        Enable Standby File Management
        Password Copy
        Create a pfile from the spfile on the primary database and scp to standby
        Create net alias' for the primary and standby databases
      STANDBY DATABASE
        Create Audit Directory
        Place the Standby Password File
        Modify Parameters
        Create the spfile
        Set the parameters and create the Data Guard Broker configuration
        Stop and Start the standby database
        Validate broker configuration
      Implement MAA Best Practice Recommendations
        Data Protection Parameters
        Enable Flashback Database
  References
APPLIES TO:

Oracle Database - Enterprise Edition - Version 12.1.0.2 and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Cloud at Customer - Version N/A and later
Information in this document applies to any platform.
GOAL

NOTE: In the images and/or the document content below, the user information and data used represents fictitious data from the Oracle sample schema(s) or Public Documentation delivered with an Oracle database product. Any similarity to actual persons, living or dead, is purely coincidental and not intended in any manner.
Maximum Availability Architecture

The Maximum Availability Architecture (MAA) defines Oracle’s most comprehensive architecture for reducing downtime for scheduled outages as well as preventing, detecting and recovering from unscheduled outages. Real Application Clusters (RAC) and Oracle Data Guard are integral components of the Database MAA reference architectures and solutions. More detailed information, such as a discussion of the purpose of MAA and the benefits it provides, can be found on the Oracle Maximum Availability Architecture (MAA) pages.

Provide a step-by-step guide for instantiating a standby database using the RMAN “from service” clause to copy directly from the primary database through an Oracle Net connection.
NOTE:
This document applies to Oracle Database Server versions 12.1 to 19c and higher.
SECTION SIZE support is available. The section size clause used with multiple RMAN channels enables parallelization
of individual files by dividing large files into smaller pieces. This improves the overall efficiency of parallelization
across channels.
Encryption is supported.
Compression is supported. It is not recommended to use compression on backups or data that has already been compressed (e.g. using OLTP or HCC compression) or encrypted, since the compression benefit is very small and the overall impact (e.g. CPU resources and increased elapsed time) can be significant.
The RMAN ‘from service’ clause enables the restore and recover of primary database files to a standby database across the
network. This functionality can be used to instantiate a standby database in lieu of the RMAN DUPLICATE DATABASE command
and is more intuitive and less error prone, thus saving time. Additionally, utilizing the SECTION SIZE clause with multiple RMAN
channels improves the efficiency of parallelization of the restore, further improving instantiation times.
NOTE: This ‘FROM SERVICE’ method can be used to restore or recover an entire database, individual data files, control files, server parameter file, or tablespaces. This method is useful for synchronizing the primary and standby databases.
The following assumptions are made about the environment used in this document:
1. The network between the primary and standby sites is reliable and has been assessed and determined to support the peak redo generation rate of the primary. See Note 2275154.1 for details on assessing the network.
2. A primary database utilizing ASM for data file storage as well as Oracle Managed Files (OMF).
3. The primary database is in archive log mode.
4. The primary database online redo logs:
1. are identical in size
2. are sized so they do not switch more than 6 times per hour at peak redo generation (this can significantly impact
redo apply in Data Guard)
3. reside on the DATA disk group
4. are multiplexed on NORMAL redundancy disk groups (HIGH redundancy disk groups are not multiplexed)
5. have a minimum of 3 groups for each thread of a RAC database
5. Password and spfile are stored in ASM.
6. The target standby host has all the required Oracle software installed and configured.
7. The standby database software matches the primary database software, including PSU/RU/RUR and one-off patches.
8. The standby target database storage will utilize ASM storage and OMF.
9. The standby target resides on separate hardware.
10. If role separation is used in your environment, set the environment according to the role (oracle or grid). In our example the oracle user owns both the grid and database software installations.
All of the example names illustrated in this document use the following naming:
Primary Standby
SOLUTION
Steps to Create a Physical Standby Database using “RESTORE DATABASE FROM SERVICE”
The following are the steps used to create the Data Guard standby database:
PRIMARY DATABASE
1. Put primary database in forced logging mode
Database force logging is recommended so that all changes to the primary database are replicated to the standby database regardless of NOLOGGING settings. To enable force logging, use the following command on the primary:
[oracle@<primaryhost1>]$ sqlplus / as sysdba
SQL> alter database force logging;
SQL> exit
2. Create Standby Redo Logs
Standby Redo Logs enable real time redo apply, where redo is applied as it is received rather than when a complete archived log is received. This improves standby currency and reduces potential data loss.
Per MAA best practices, Standby Redo Logs (SRLs) are recommended to:
be identical size as online redo logs (to the byte)
have groups assigned to a thread in RAC configurations
be single member groups
have the same number of groups per thread as online redo log groups
reside on DATA disk group
Create standby redo logs on the primary database that are the exact same size as the online redo logs. Creating the SRLs on the primary before instantiation will ensure they are defined on the standby during the instantiation process. It will also ensure that standby redo log files are available on the current primary when the current primary becomes a standby. Oracle recommends having the same number of standby redo log groups as there are online redo log groups for each thread. Our primary database has 3 online redo log groups per thread; we therefore need 3 standby redo log groups per thread for the standby. Per MAA best practice, we recommend creating only one member per standby redo log group. For example:
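A minimal sketch, assuming a two-thread RAC primary with three 4 GB online redo log groups per thread on the +DATAC1 disk group (the group numbers, size and disk group name are placeholders to adjust to your environment):

[oracle@<primaryhost1>]$ sqlplus / as sysdba
SQL> -- Thread 1 standby redo logs, exactly the same size as the online redo logs
SQL> alter database add standby logfile thread 1 group 11 ('+DATAC1') size 4G;
SQL> alter database add standby logfile thread 1 group 12 ('+DATAC1') size 4G;
SQL> alter database add standby logfile thread 1 group 13 ('+DATAC1') size 4G;
SQL> -- Thread 2 standby redo logs
SQL> alter database add standby logfile thread 2 group 14 ('+DATAC1') size 4G;
SQL> alter database add standby logfile thread 2 group 15 ('+DATAC1') size 4G;
SQL> alter database add standby logfile thread 2 group 16 ('+DATAC1') size 4G;
SQL> exit

3. Enable Standby File Management
With standby file management set to AUTO, data files added or dropped on the primary are automatically added or dropped on the standby. A minimal sketch of enabling it on the primary (the exact command used in the original step is an assumption):

[oracle@<primaryhost1>]$ sqlplus / as sysdba
SQL> alter system set standby_file_management=AUTO scope=both sid='*';
SQL> exit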
4. Password Copy
Copy the password file from the primary database to the first standby host.
NOTE: If Transparent Data Encryption (TDE) is enabled on the primary, the TDE wallet must be copied to the
standby also.
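Since the assumptions above state that the password file is stored in ASM, one way to stage the copy is asmcmd pwcopy followed by scp; the file names below are placeholders, and "srvctl config database -db <primary unique name>" shows the actual password file location:

[oracle@<primaryhost1>]$ asmcmd pwcopy '+DATAC1/<primary unique name>/PASSWORD/<password file>' /tmp/orapw<standby unique name>
[oracle@<primaryhost1>]$ scp /tmp/orapw<standby unique name> <standbyhost1>:/tmp/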
5. Create a pfile from the spfile on the primary database and scp to standby
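A minimal sketch of this step (the /tmp/standby.pfile name matches the file referenced in step 9):

[oracle@<primaryhost1>]$ sqlplus / as sysdba
SQL> create pfile='/tmp/standby.pfile' from spfile;
SQL> exit
[oracle@<primaryhost1>]$ scp /tmp/standby.pfile <standbyhost1>:/tmp/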
6. Create net alias' for the primary and standby databases
(All entries should exist in the tnsnames.ora of all primary and standby instances)
# RAC ONLY create an entry for each primary and standby instance #
<primary unique name>1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS=(PROTOCOL= TCP) (HOST=prmy-scan)(PORT=<PORT>))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME =<primary database service name>)
(INSTANCE_NAME=<primary instance 1 SID_NAME>)
)
)
<primary unique name>2 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS=(PROTOCOL= TCP) (HOST=prmy-scan)(PORT=<PORT>))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME =<primary database service name>)
(INSTANCE_NAME=<primary instance 2 SID_NAME>)
)
)
# < create an entry for each instance of primary and standby if more than two > #
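Equivalent entries for the standby instances follow the same pattern; a sketch, in which the standby SCAN host name (stby-scan) and service name are placeholders:

<standby unique name>1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS=(PROTOCOL=TCP)(HOST=stby-scan)(PORT=<PORT>))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME =<standby database service name>)
(INSTANCE_NAME=<standby instance 1 SID_NAME>)
)
)
<standby unique name>2 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS=(PROTOCOL=TCP)(HOST=stby-scan)(PORT=<PORT>))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME =<standby database service name>)
(INSTANCE_NAME=<standby instance 2 SID_NAME>)
)
)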
STANDBY DATABASE
7. Create Audit Directory
On all standby hosts create the audit directory for the standby database.
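For example (the path matches the audit_file_dest value set in step 9):

[oracle@<standbyhost1>]$ mkdir -p /u01/app/oracle/admin/<standby unique name>/adump
[oracle@<standbyhost2>]$ mkdir -p /u01/app/oracle/admin/<standby unique name>/adump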
8. Place the Standby Password File
First create the database directory in the DATA disk group, then place the password file that was copied from the primary database to /tmp into that directory.
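A minimal sketch using asmcmd (directory names and the password file name are placeholders; depending on version, pwcopy may require the --asm or --dbuniquename option when the destination is a disk group):

[oracle@<standbyhost1>]$ asmcmd
ASMCMD> mkdir +DATAC1/<standby unique name>
ASMCMD> mkdir +DATAC1/<standby unique name>/PASSWORD
ASMCMD> pwcopy /tmp/orapw<standby unique name> '+DATAC1/<standby unique name>/PASSWORD/orapw<standby unique name>'
ASMCMD> exit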
9. Modify Parameters
Edit the pfile copied to the standby in step 5 (/tmp/standby.pfile), updating the instance-specific RAC parameters and db_unique_name. For example:
NOTE: This list is not exhaustive. There are many parameters that may need to be changed due to a change in
db_unique_name or disk group names or db_domain. Review each parameter in the pfile and change as appropriate.
Parameters to be modified on the Standby compared to the Primary:

Standby - changes required:

*.cluster_database=TRUE
<standby unique name>2.instance_number=2
<standby unique name>1.instance_number=1
<standby unique name>2.thread=2
<standby unique name>1.thread=1
<standby unique name>2.undo_tablespace='UNDOTBS2'
<standby unique name>1.undo_tablespace='UNDOTBS1'
……….
*.db_unique_name=<standby unique name>
*.audit_file_dest=/u01/app/oracle/admin/<standby unique name>/adump
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) MAX_FAILURE=1 REOPEN=5 DB_UNIQUE_NAME=<standby unique name> ALTERNATE=LOG_ARCHIVE_DEST_10'
*.log_archive_dest_10='LOCATION=+DATAC1 VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=<standby unique name> ALTERNATE=LOG_ARCHIVE_DEST_1'
*.log_archive_dest_state_10='ALTERNATE'
*.control_files='+DATAC1/<standby unique name>/CONTROLFILE/control.ctl'
*.LOG_FILE_NAME_CONVERT='+DATAC1/<primary unique name>','+DATAC1/<standby unique name>'
*.DB_FILE_NAME_CONVERT='+DATAC1/<primary unique name>','+DATAC1/<standby unique name>'
# For 12.2 and higher remove remote_listener; it will be picked up by clusterware

Primary - only CONVERT parameters may change, the rest are for reference:

*.cluster_database=TRUE
<primary unique name>2.instance_number=2
<primary unique name>1.instance_number=1
<primary unique name>2.thread=2
<primary unique name>1.thread=1
<primary unique name>2.undo_tablespace='UNDOTBS2'
<primary unique name>1.undo_tablespace='UNDOTBS1'
……….
*.db_unique_name=<primary unique name>
*.audit_file_dest=/u01/app/oracle/admin/<primary unique name>/adump
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) MAX_FAILURE=1 REOPEN=5 DB_UNIQUE_NAME=<primary unique name> ALTERNATE=LOG_ARCHIVE_DEST_10'
*.log_archive_dest_10='LOCATION=+DATAC1 VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=<primary unique name> ALTERNATE=LOG_ARCHIVE_DEST_1'
*.log_archive_dest_state_10='ALTERNATE'
*.control_files='+DATAC1/<primary unique name>/CONTROLFILE/control.ctl'
# *CONVERT parameters are not dynamic and require a restart of the database.
*.LOG_FILE_NAME_CONVERT='+DATAC1/<standby unique name>','+DATAC1/<primary unique name>'
*.DB_FILE_NAME_CONVERT='+DATAC1/<standby unique name>','+DATAC1/<primary unique name>'
NOTE: The database parameter db_name must be the same between the primary and all standby databases.
NOTE: The CONVERT parameters, log_file_name_convert and db_file_name_convert, are not required for file name translation when Oracle Managed Files is used and the standby is on a different cluster than the primary. Setting LOG_FILE_NAME_CONVERT to some value enables online redo log pre-clearing, which improves role transition performance.
NOTE: If disk group names are different between the primary and standby, change all disk group names accordingly.
10. Create the spfile
From the edited pfile, create the spfile for the standby database (the instance has not been started).
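A sketch of the command, assuming the spfile is placed in ASM per the assumptions above (the target path is a placeholder):

[oracle@<standbyhost1>]$ sqlplus / as sysdba
SQL> create spfile='+DATAC1/<standby unique name>/spfile<standby unique name>.ora' from pfile='/tmp/standby.pfile';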
File created.
Register the standby with clusterware and start the database nomount
[oracle@<standbyhost1>]$ srvctl add instance -database <standby unique name> -instance <standby
unique name>1 -node <standbyhost1>
[oracle@<standbyhost1>]$ srvctl add instance -database <standby unique name> -instance <standby
unique name>2 -node <standbyhost2>
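The srvctl add database command, which must be run before the add instance commands above, and the nomount startup of the first instance are sketched below; the Oracle home, disk group name, spfile and password file paths are assumptions to adjust to your environment:

[oracle@<standbyhost1>]$ srvctl add database -db <standby unique name> -oraclehome /u01/app/oracle/product/12.2.0.1/dbhome_1 -dbtype RAC -spfile '+DATAC1/<standby unique name>/spfile<standby unique name>.ora' -pwfile '+DATAC1/<standby unique name>/PASSWORD/orapw<standby unique name>' -role PHYSICAL_STANDBY -startoption MOUNT -diskgroup DATAC1
[oracle@<standbyhost1>]$ srvctl start instance -db <standby unique name> -instance <standby unique name>1 -startoption nomount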
RMAN> restore standby controlfile from service '<primary unique name>'; <- the service name is
whatever connect descriptor points to the primary database
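With the standby control file restored, mount the first standby instance so the RMAN configuration and restore steps that follow can use it (a sketch of the assumed intermediate step):

RMAN> alter database mount;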
To take advantage of parallelism during the restore, determine the number of CPUs on your server by executing the following:
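For example, on Linux (the v$osstat query is an alternative from within the instance):

[oracle@<standbyhost1>]$ grep -c ^processor /proc/cpuinfo
SQL> select value from v$osstat where stat_name = 'NUM_CPUS';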
Make the following RMAN configuration changes at the standby. Set the parallelism to match results of the network
evaluation to achieve the best performance.
[oracle@<standbyhost1>]$ rman target /
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 8;
The two topics below, instance parallelism and gap resolution with recover from service, are optimizations for large databases. While this note is relevant for most cases without these optimizations, they can significantly shorten the time required to instantiate very large databases.
Instance Parallelism - Using multiple channels with the PARALLELISM setting is helpful in utilizing the available
resources in a single node. This is generally enough for small to medium sized databases measured in the hundreds
of gigabytes or less. For larger databases, spreading the RMAN work across multiple nodes/instances of a RAC
cluster can utilize additional resources and bandwidth across multiple nodes of a cluster. This process is described
further in the next step.
Gap Resolution with RECOVER DATABASE FROM SERVICE - While the restore database from service is running, the gap in redo needed to make the database consistent is growing. The first data file is copied as of one point in time while the last data file is copied as of a later point in time. Therefore, when the restore is complete, the database is not consistent and requires redo from the primary database to catch up. In most cases, simply enabling redo transport will resolve the gap in a reasonable amount of time. However, there are cases for which transport/apply would be either a complicated or lengthy process, for example when the copy took a long time, a large amount of redo was generated during the copy, or the archived logs generated since the copy started are no longer available at the primary.
For these situations, it is often more efficient to utilize RECOVER DATABASE FROM SERVICE to 'roll forward' the standby database. The recovery process is optimized to only copy the blocks that have changed.
The steps to execute recover database from service are described at the appropriate point in the process below.
14. Restore the Standby Database from the primary database service
Restoring the datafiles from the primary to the standby for instantiation is initiated on the standby cluster. The use of the channels configured for parallelization in the previous step can in some cases be further improved by using the RMAN SECTION SIZE clause.
Section Size
Parallelization across channels is most efficient when the datafiles are all equally sized. Since each file is by default copied by only one channel, if one or two files are significantly larger than the rest, those files will still be copying after the other files have finished, leaving some channels idle. When a small subset of files is larger than the rest, allowing RMAN to copy those large files in sections can keep all channels busy. The section size is set with the RMAN SECTION SIZE clause. RMAN testing has shown SECTION SIZE=64G to provide the best efficiency for files less than 16TB. If a data file is smaller than the chosen section size, it will not be broken into sections during the restore.
On the primary, query the largest datafile size to determine the section size to be used for the recover command.
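For example, the following query (not necessarily the one used in the original note) reports the size of the largest data file in GB:

SQL> select max(bytes)/1024/1024/1024 as largest_file_gb from v$datafile;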
For more information, please refer to the RMAN documentation on 'restore from service'. Also refer to the best practices for Cloud.
In order to initiate the copy of files, connect to the standby database and issue the restore command below using the
descriptor in the tnsnames.ora for the primary database. In this example, that is the primary db_unique_name.
RMAN> restore database from service <primary unique name> section size <section size>;
RMAN> switch database to copy; <- This may result in a no-op (nothing executes)
Re-running the restore database from service command will recopy all files whether they were completed, partially
completed or not started. In the event of a failed attempt this could result in multiple copies of files and for large
databases can result in significant time lost recopying files unnecessarily. If the restore database from service fails,
remove the partial files and replace the restore database from service command with a restore datafile from service
listing all files not yet completed.
From the mounted standby (as sys) list the partial files with the following SQL and remove them from the file system:
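As a hedged sketch (not necessarily the note's original query), data files with header errors can be listed from v$datafile_header, and the corresponding partially written files can then be removed from ASM with asmcmd (the path is a placeholder):

SQL> select file#, name, error from v$datafile_header where error is not null;

[oracle@<standbyhost1>]$ asmcmd rm '+DATAC1/<standby unique name>/DATAFILE/<partial file name>'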
Then replace the restore database from service with the restore datafile from service generated with the SQL below:
select 'restore datafile '||listagg(FILE#,', ') within group (order by file#)||' from service
<primary unique name> section size 64G;' from v$datafile_header where ERROR='FILE NOT FOUND';
The listagg function has size limits, as does the RMAN command line. On a database with many files, if either limit is exceeded and results in an error, the list of files may need to be broken up into multiple commands.
For larger databases, instead of the previous code block the following can be used.
In order to parallelize the restore across all instances and utilize the bandwidth of all nodes of a cluster, use the connect clause in RMAN. In this method, the parallelization is created by allocating channels in the run block rather than through the setting defined in the previous step.
If you have followed this note, at this point not all instances are mounted. Stop the database and start all instances in mount mode, then execute the run block with the proper substitutions.
Allocate the number of channels to match results of the network evaluation to achieve the best performance.
For 4-node and 8-node clusters allocate additional channels connecting to those instances.
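The run block can look like the following sketch for a two-node standby; the channel count, credentials and section size are placeholders, and the connect strings use the per-instance net aliases created earlier:

[oracle@<standbyhost1>]$ rman target /
RMAN> run {
  allocate channel c1 device type disk connect 'sys/<password>@<standby unique name>1';
  allocate channel c2 device type disk connect 'sys/<password>@<standby unique name>1';
  allocate channel c3 device type disk connect 'sys/<password>@<standby unique name>2';
  allocate channel c4 device type disk connect 'sys/<password>@<standby unique name>2';
  restore database from service <primary unique name> section size <section size>;
}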
RMAN> switch database to copy; <- This is not always necessary so may result in a no-op (nothing executes)
NOTE: For larger clusters or a higher degree of parallelism, allocate additional channels accordingly.
Large Database Optimization - Gap Resolution with RECOVER DATABASE FROM SERVICE
For large databases which took a long time to copy, generated a lot of redo during the copy, or do not have the archived
logs available at the primary since the copy started, use this RECOVER DATABASE FROM SERVICE option before enabling
Data Guard Broker in the next step. For databases which do not meet that description, skip to the next step.
RMAN > restore standby controlfile from service <primary unique name>;
RMAN > catalog start with '<DATA DISK GROUP>' noprompt; <-- This step catalogs all files into the copied controlfile.
RMAN > catalog start with '<RECO DISK GROUP>' noprompt; <-- Some files cannot be cataloged and will generate an error.
This process uses the instance parallelization method; normal parallelization, where all channels run on one instance, can also be used.
Allocate the number of channels to match results of the network evaluation to achieve the best performance.
For 4-node and 8-node clusters allocate additional channels connecting to those instances.
RMAN> switch database to copy; <- This is not always necessary so may result in a no-op (nothing executes)
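The recovery run block itself can look like the following sketch (note that restoring the standby control file requires the instance to be restarted in nomount, and the database must be mounted again before the catalog, switch and recover commands); channel counts and credentials are placeholders:

RMAN> run {
  allocate channel c1 device type disk connect 'sys/<password>@<standby unique name>1';
  allocate channel c2 device type disk connect 'sys/<password>@<standby unique name>2';
  recover database from service <primary unique name>;
}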
NOTE: This process can be used any time a standby has an unresolvable or large gap as a means of catching up to
the primary. See MOS Note 2850185.1 for a full description of the process on an established standby database.
On the standby, clear all online redo log groups and standby redo log groups (each group listed in v$log and v$standby_log is cleared):
begin
for log_cur in ( select group# group_no from v$log )
loop
execute immediate 'alter database clear logfile group '||log_cur.group_no;
end loop;
end;
/
begin
for log_cur in ( select group# group_no from v$standby_log )
loop
execute immediate 'alter database clear logfile group '||log_cur.group_no;
end loop;
end;
/
16. Set the parameters and create the Data Guard Broker configuration.
NOTE: These commands can also be executed individually from sqlplus as sys
connect / as sysdba
shutdown immediate
startup mount
host sleep 30
host sleep 30
host dgmgrl sys/<password>@<primary unique name> "ADD DATABASE <standby unique name> AS CONNECT
IDENTIFIER IS <standby unique name>" ;
exit
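The ADD DATABASE command above assumes a broker configuration already exists on the primary; the configuration name dgconfig shown in the status output below suggests one was created earlier. If it does not yet exist, a sketch of creating and enabling it (connect identifiers are placeholders):

[oracle@<standbyhost1>]$ dgmgrl sys/<password>@<primary unique name>
DGMGRL> CREATE CONFIGURATION dgconfig AS PRIMARY DATABASE IS <primary unique name> CONNECT IDENTIFIER IS <primary unique name>;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> exit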
Execute the script PostCR.sql on the standby database. Set your environment to the standby database.
[oracle@<standbyhost1>]$ export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/dbhome_1
[oracle@<standbyhost1>]$ export ORACLE_SID=<standby unique name>1
[oracle@<standbyhost1>]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@<standbyhost1>]$ sqlplus / as sysdba
SQL> @PostCR.sql
Validate the broker configuration:
[oracle@<standbyhost1>]$ dgmgrl sys/<password>
DGMGRL> show configuration
Configuration - dgconfig
Configuration Status:
SUCCESS (status updated 58 seconds ago)
Implement MAA Best Practice Recommendations
Data Protection Parameters
The following settings are recommended per MAA best practices and should be set on the primary and standby databases:
DB_BLOCK_CHECKING=MEDIUM or higher
NOTE: DB_BLOCK_CHECKING can have performance implications on a primary database. Any changes to this setting
should be thoroughly tested before implementing.
DB_BLOCK_CHECKSUM=TYPICAL or higher
DB_LOST_WRITE_PROTECT=TYPICAL
This is an MAA recommendation, but there are some performance implications; it should be tested to determine whether the impact on application performance is acceptable.
Enable Flashback Database
NOTE: Without flashback enabled, the primary database must be fully re-instantiated after a failover using another restore from service. Switchover does not require flashback database.
Primary:
sqlplus / as sysdba
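SQL> -- Sketch of the assumed command for this step; Flashback Database requires the fast recovery area (db_recovery_file_dest) to be configured
SQL> alter database flashback on;
SQL> exit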
To enable flashback database on the standby, the redo apply process must first be stopped. Once flashback has been enabled, redo apply can be restarted:
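A sketch of this sequence, assuming redo apply is managed by the Data Guard Broker configuration created above:

[oracle@<standbyhost1>]$ dgmgrl sys/<password>@<standby unique name>
DGMGRL> edit database '<standby unique name>' set state='APPLY-OFF';
DGMGRL> exit
[oracle@<standbyhost1>]$ sqlplus / as sysdba
SQL> alter database flashback on;
SQL> exit
[oracle@<standbyhost1>]$ dgmgrl sys/<password>@<standby unique name>
DGMGRL> edit database '<standby unique name>' set state='APPLY-ON';
DGMGRL> exit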
Related Products
Oracle Database Products > Oracle Database Suite > Oracle Database > Oracle Database - Enterprise Edition > Oracle Data Guard > Active Data Guard
Oracle Cloud > Oracle Platform Cloud > Oracle Cloud Infrastructure - Database Service > Oracle Cloud Infrastructure - Database Service
Oracle Cloud > Oracle Infrastructure Cloud > Oracle Cloud at Customer > Oracle Cloud at Customer
Keywords
AVAILABILITY; BROKER; DATA GUARD; DGMGRL; MAXIMUM AVAILABILITY ARCHITECTURE; PHYSICAL STANDBY; STANDBY
Errors
ORA-01078