Oracle 12c Far Sync instance
Standby Database Overview:
The Oracle standby database has seen several major enhancements since its introduction in Oracle 8i (though it was possible to configure a physical standby database as far back as 7.3) through to the current Oracle 12c release. Although it started out as a read-only physical standby option providing a disaster recovery solution for business-critical databases, its capabilities have been strengthened with every new release, as outlined below:
- Logical standby database: allows additional indexes, materialized views and so on to be maintained on the standby
- Automatic switchover/failover capabilities
- Automatic gap resolution: the ability to resolve archive log gaps by fetching the missing logs automatically
- Heterogeneous platform support
- Active (real-time query) standby database: users can run SQL queries while recovery is taking place
- Support for disaster recovery sites configured hundreds or thousands of kilometers away
- Snapshot standby option: the ability to convert a standby database to a fully read-write database and then revert it back to standby mode
Far Sync instance overview
An Oracle Database 12c Active Data Guard far sync instance is a light-weight remote instance whose role is to receive redo synchronously from the primary database and forward it asynchronously to remote standby databases configured thousands of miles away over a WAN. The far sync instance acts as a middleman/proxy/redo log repository for the remote standby databases. Unlike a conventional standby database, a far sync instance has no physical data files of its own; it manages only a server parameter file (SPFILE), a standby control file and a set of standby redo logs (SRLs). Since there is no physical structure, the instance cannot be opened for general access, cannot apply redo and cannot be converted to the primary role. It consumes very minimal server resources such as CPU, memory, I/O and disk.
The purpose of the far sync instance is primarily to offload the redo transport burden from the primary database and to overcome the network latency involved in shipping redo synchronously to remote standby databases configured far away, while still guaranteeing zero data loss failover capability. A typical recommendation is to create the far sync instance close to the primary database, roughly 30-150 miles away, so that synchronous redo transmission avoids network latency bottlenecks and the primary gains the performance benefit.
Far Sync instance architecture
The diagram below illustrates a far sync Active Data Guard instance setup and represents the following scenarios:
- The primary database sends redo synchronously to the active far sync instance
- If the far sync instance is unavailable for any reason, the primary automatically transmits the redo to an alternate far sync instance, if one is configured
- Although the primary database does not send redo directly to its remote standby databases, it maintains a direct network connection with them to perform health checks and switchover processing tasks
- The active far sync instance then forwards the redo asynchronously to the remote standby databases configured thousands of miles away
Deploying Far Sync instance
Deploying and configuring a far sync instance is no different from creating and configuring a conventional standby database, except that a far sync instance has no physical data files, so there are no data files to restore. The following describes the general procedure to deploy and configure a far sync instance:
(Please note that this article will not explain the procedure to configure standby database process)
- Create a standby control file for the far sync instance on the primary database using the following syntax, and copy the resulting file (/tmp/primfs.ctl in this example) to the server where the far sync instance will be configured:
- SQL> ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/tmp/primfs.ctl';
- Prepare a server parameter file (SPFILE) and a password file for the far sync instance, making sure the following parameters are set on the far sync instance:
- DB_UNIQUE_NAME=PRIMFS
- CONTROL_FILES='+DG_PRIMDB'
- FAL_SERVER=PRIMDB
- LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMDB,PRIMFS,PRIMSDB1,PRIMSDB2,PRIMSDB3)'
- LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMFS'
- LOG_ARCHIVE_DEST_2='SERVICE=PRIMSDB1 ASYNC VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PRIMSDB1'
- LOG_ARCHIVE_DEST_3='SERVICE=PRIMSDB2 ASYNC VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PRIMSDB2'
- LOG_ARCHIVE_DEST_4='SERVICE=PRIMSDB3 ASYNC VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PRIMSDB3'
- Using the Network Configuration Assistant (NETCA), create a listener on the far sync server and add a static service entry for the far sync instance
- Add TNS entries for all remote standby databases (PRIMSDB1, PRIMSDB2, PRIMSDB3) referenced in LOG_ARCHIVE_DEST_2 through LOG_ARCHIVE_DEST_4
- Add a TNS entry to connect to the primary database, as referenced by the FAL_SERVER parameter (see the example entries below)
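For illustration only, the tnsnames.ora on the far sync server might contain entries along the following lines (host names and ports are placeholders for this sketch, not values from a real configuration):

PRIMDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primdb-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = PRIMDB)))

PRIMSDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primsdb1-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = PRIMSDB1)))

Repeat similar entries for PRIMSDB2 and PRIMSDB3.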
Make the following parameter changes on the primary database (a verification query follows this list):
- LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMDB,PRIMFS,PRIMSDB1,PRIMSDB2,PRIMSDB3)'
- LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMDB'
- LOG_ARCHIVE_DEST_2='SERVICE=PRIMFS SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMFS'
- Add a TNS entry to connect to the far sync instance
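Once these parameters are in place and the destinations are enabled, a quick sanity check from the primary database is to look at the archive destination status. This is only a sketch; the column selection is illustrative:

SQL> SELECT dest_id, status, type, destination
     FROM v$archive_dest_status
     WHERE status <> 'INACTIVE';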
On the far sync server:
- Start up the far sync instance in NOMOUNT mode, restore the standby control file from the file transferred from the production server, and then mount the instance:
- RMAN> RESTORE STANDBY CONTROLFILE FROM '/tmp/primfs.ctl';
- RMAN> ALTER DATABASE MOUNT;
You can verify the database role recorded in the far sync control file as follows:
- SQL> SELECT db_unique_name, database_role FROM v$database;
DB_UNIQUE_NAME                 DATABASE_ROLE
------------------------------ ----------------
PRIMFS                         FAR SYNC
- After the far sync instance is mounted, create standby redo logs on it with the same size as the primary's online redo logs and one more group than the primary has (number of primary redo groups + 1). Assuming the primary database has 3 redo groups with a single member each, sized 100M (a quick verification query follows these commands):
- SQL> ALTER DATABASE ADD STANDBY LOGFILE SIZE 100M;
- SQL> ALTER DATABASE ADD STANDBY LOGFILE SIZE 100M;
- SQL> ALTER DATABASE ADD STANDBY LOGFILE SIZE 100M;
- SQL> ALTER DATABASE ADD STANDBY LOGFILE SIZE 100M;
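To confirm the standby redo logs were created on the far sync instance as expected, a simple check (sketch only) is:

SQL> SELECT group#, thread#, bytes/1024/1024 AS size_mb, status FROM v$standby_log;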
On the primary database, change the protection mode using the following command. Note that a far sync instance is supported in either maximum performance or maximum availability mode. When Data Guard runs in maximum availability mode, it can internally fall back to maximum-performance-like behaviour if redo cannot be confirmed on any synchronous destination, and return to maximum availability once that destination is reachable again. Since the configuration works safely in both modes, it is worth raising the protection level to maximum availability (a verification query follows the command below).
- SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMUM AVAILABILITY;
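You can confirm the new protection settings on the primary with a query such as the following; the PROTECTION_LEVEL column typically shows MAXIMUM AVAILABILITY once the SYNC destination (the far sync instance) is receiving redo:

SQL> SELECT protection_mode, protection_level FROM v$database;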
Optionally, you can also configure an alternate log shipping destination (any remote standby database) to cover far sync instance failure scenarios. Use the following example:
- LOG_ARCHIVE_DEST_STATE_3='ALTERNATE'
- LOG_ARCHIVE_DEST_3='SERVICE=PRIMSDB1 ASYNC ALTERNATE=LOG_ARCHIVE_DEST_2 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMSDB1'
In the event of a far sync failure, the primary database continues sending redo to the alternate destination asynchronously. Once the far sync instance becomes available again, the primary database resumes sending redo to it synchronously.
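One way to watch this behaviour (a sketch; the exact output depends on your configuration) is to check the destination status on the primary. The alternate destination normally shows STATUS = ALTERNATE while the far sync destination is healthy, and becomes VALID after it takes over:

SQL> SELECT dest_id, dest_name, status, destination
     FROM v$archive_dest
     WHERE dest_id IN (2,3);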
Additionally, you can configure a second far sync instance as a high availability option, to retain maximum availability with zero data loss capability even if the first far sync instance fails. To configure another far sync instance, follow the earlier example and make the following changes on the production database:
- LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMDB,PRIMFS1,PRIMFS2,PRIMSDB1,PRIMSDB2,PRIMSDB3)'
- LOG_ARCHIVE_DEST_2='SERVICE=PRIMFS1 SYNC AFFIRM MAX_FAILURE=1 ALTERNATE=LOG_ARCHIVE_DEST_3 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMFS1'
- LOG_ARCHIVE_DEST_STATE_3='ALTERNATE'
- LOG_ARCHIVE_DEST_3='SERVICE=PRIMFS2 SYNC AFFIRM ALTERNATE=LOG_ARCHIVE_DEST_2 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMFS2'
Also, when using the far sync feature, do not forget to set the FAL_SERVER parameter on the standby databases (PRIMSDB1, PRIMSDB2, PRIMSDB3) so that it points to the far sync instances (PRIMFS1 or PRIMFS2). That way, if there is an archive log gap, the standby database(s) can contact the far sync instances to fetch the missing archived logs; a hedged sketch is shown below.
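A minimal sketch of what this could look like when run on each standby database, using the service names from this article (adjust them to your own TNS aliases; some configurations also append the primary as a last-resort fetch source):

SQL> ALTER SYSTEM SET FAL_SERVER='PRIMFS1','PRIMFS2' SCOPE=BOTH;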
A common question is whether MRP (the Managed Recovery Process) should be started on PRIMFS or on PRIMSDB.
When MRP starts, it reads the data file headers, updates checkpoint information and applies recovery to the data files. Since a far sync instance has no data files, there is nothing to recover, so MRP runs only on the standby database(s); a minimal example follows.
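For completeness, a typical way to start managed recovery on each standby database (PRIMSDB1, PRIMSDB2, PRIMSDB3) and then confirm the recovery processes is sketched below:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
SQL> SELECT process, status, sequence# FROM v$managed_standby;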
In this article, the following primary, far sync and standby instance names are used for demonstration purposes:
- Primary database: PRIMDB
- Far sync instances: PRIMFS1, PRIMFS2
- Remote standby databases: PRIMSDB1, PRIMSDB2, PRIMSDB3
In a nutshell:
- A far sync instance is a light-weight, remote Data Guard special instance with no physical data files of its own, acting as a middleman between a primary database and its standby databases configured far away
- It owns and manages only a server parameter file (SPFILE), a standby control file and a set of standby redo logs (SRLs)
- Its purpose is to receive redo synchronously from the primary database and forward it asynchronously, in real time, to the remote destinations (a maximum of 29), providing zero data loss failover capability
- It supports the maximum availability and maximum performance protection modes
- It should be deployed close to the primary database, roughly 30-150 miles away, to avoid network latency while shipping redo
- It is the recommended option when standby databases are located thousands of kilometers away from the primary site
- Its sole purpose is to offload and minimize redo transport overhead on the primary database
- It is part of the Active Data Guard option, which requires a separate license
- Its role cannot be converted to primary or to any other type of standby database
- It consumes minimal server resources (CPU, disk, memory, etc.)
DataGuard – Far Sync – part 1 – simple configuration
Oracle introduced a new Data Guard feature called Far Sync. This configuration is designed to support synchronous redo transport between a primary and a standby database that are far apart. In the usual configuration, synchronous redo transport between data centres in Europe and the US is possible but inefficient from the primary database performance perspective: every commit has to be written to the local redo logs and sent to the standby with confirmation, and a network round trip between Europe and the US adds a significant delay to every commit in synchronous mode. If the database had to run in no data loss mode, the DBA had no choice but to run the primary in SYNC mode and accept the network round trip on every transaction, or to place both databases in data centres relatively close to each other.
To mitigate that problem, Oracle 12c introduced the Data Guard Far Sync instance. A Far Sync instance is a log transport "proxy" and has to be located close to the primary database. As Far Sync is an instance only, there is no need to put a whole database there; a Far Sync instance requires space only for standby redo logs and archived logs.
In that configuration the primary sends transactions to the Far Sync instance in synchronous mode, so every transaction is written to the primary redo logs and to the Far Sync standby logs; commit time is therefore limited to the time required for both writes plus the network round trip between the primary and the Far Sync instance. In the next stage the Far Sync instance sends the redo stream in asynchronous mode to the standby database located in the other data centre.
In my test configuration I set up an OpenVPN connection between a VM running on my home ESX server (located in Europe) and two EC2 instances running in the USA. That configuration allows me to measure commit time in synchronous mode with and without a Far Sync instance.
Configuration details:
Primary database – TEST_PRM
Far sync instance – FARSYNC
Standby database – TEST_STD
Primary database configuration – DataGuard related parameters only
DB_UNIQUE_NAME=TEST_PRM
FAL_SERVER=TEST_STD
LOG_ARCHIVE_CONFIG='DG_CONFIG=(TEST_PRM,FARSYNC,TEST_STD)'
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=TEST_PRM'

Standby configuration:

LOG_ARCHIVE_CONFIG='DG_CONFIG=(TEST_PRM,FARSYNC,TEST_STD)'
LOG_ARCHIVE_DEST_1='location=/u01/app/oracle/oradata/test12c/arch'
First test case – SYNC mode redo transport between primary and standby database
Primary log transport configuration
log_archive_dest_2 = 'service="TEST_STD", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="TEST_STD" net_timeout=10, valid_for=(online_logfile,all_roles)'
log_archive_config = 'dg_config=(TEST_PRM,TEST_STD,FARSYNC)'

In both cases I ran Swingbench against the primary database and monitored the time histogram for the 'log file sync' event (the primary database was restarted before each test to have identical test conditions and reset counters).
SQL> select * from v$event_histogram where EVENT = 'log file sync';

    EVENT# EVENT           WAIT_TIME_MILLI WAIT_COUNT LAST_UPDATE_TIME                     CON_ID
---------- --------------- --------------- ---------- ------------------------------------ ------
       144 log file sync                 1          5 10-JUL-13 08.50.01.857358 PM +01:00       0
       144 log file sync                 2         23 10-JUL-13 08.38.51.202819 PM +01:00       0
       144 log file sync                 4         11 10-JUL-13 08.40.00.723312 PM +01:00       0
       144 log file sync                 8          2 10-JUL-13 08.59.06.068904 PM +01:00       0
       144 log file sync                16          8 10-JUL-13 08.59.22.090580 PM +01:00       0
       144 log file sync                32         18 10-JUL-13 08.59.29.450597 PM +01:00       0
       144 log file sync                64         41 10-JUL-13 08.59.33.983966 PM +01:00       0
       144 log file sync               128         62 10-JUL-13 08.59.39.447536 PM +01:00       0
       144 log file sync               256        326 10-JUL-13 08.59.40.640604 PM +01:00       0
       144 log file sync               512       1423 10-JUL-13 08.59.40.640691 PM +01:00       0
       144 log file sync              1024         37 10-JUL-13 08.59.11.646609 PM +01:00       0
       144 log file sync              2048          1 10-JUL-13 08.53.53.715921 PM +01:00       0
       144 log file sync              4096          1 10-JUL-13 08.56.15.150343 PM +01:00       0

13 rows selected.

As we can see in the histogram, a log file sync time between 256 and 512 ms was the most frequent, and the commit time for most transactions fell in that range. In addition to the Swingbench traffic I ran one test transaction with a simple row insert and traced it; here is the important line from the trace file:

WAIT #140043716402464: nam='log file sync' ela= 184627 buffer#=419 sync scn=987285

As you can see, 184 ms for a commit is not a great result.
Second test case – SYNC mode redo transport between primary and far sync instance
Primary log transport configuration
LOG_ARCHIVE_DEST_2='service="FARSYNC"','SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="FARSYNC" net_timeout=30','valid_for=(online_logfile,all_roles)' SCOPE=BOTH;Far Sync instance required Oracle 12c binaries installed and special version of control file created on primary database using following command
ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/tmp/farsync.ctl';
Instance parameters:

CONTROL_FILES=/u01/app/oracle/oradata/test12c/control01.ctl
DB_UNIQUE_NAME=FARSYNC
FAL_SERVER=TEST_PRM
LOG_FILE_NAME_CONVERT='test','test'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(TEST_PRM,FARSYNC,TEST_STD)'
LOG_ARCHIVE_DEST_1='location=/u01/app/oracle/oradata/test12c/arch'
LOG_ARCHIVE_DEST_2='service="TEST_STD"','ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="TEST_STD" net_timeout=10','valid_for=(standby_logfile,all_roles)' SCOPE=BOTH;

As you can see in the above configuration, LOG_ARCHIVE_DEST_2 is configured as ASYNC and points to the real standby database.
Let's run the test again.
SQL> select * from v$event_histogram where EVENT = 'log file sync';

    EVENT# EVENT           WAIT_TIME_MILLI WAIT_COUNT LAST_UPDATE_TIME                     CON_ID
---------- --------------- --------------- ---------- ------------------------------------ ------
       144 log file sync                 1        254 10-JUL-13 09.23.58.069192 PM +01:00       0
       144 log file sync                 2        364 10-JUL-13 09.23.58.027215 PM +01:00       0
       144 log file sync                 4        338 10-JUL-13 09.23.57.951897 PM +01:00       0
       144 log file sync                 8        278 10-JUL-13 09.23.57.540682 PM +01:00       0
       144 log file sync                16         82 10-JUL-13 09.23.55.700236 PM +01:00       0
       144 log file sync                32         29 10-JUL-13 09.23.56.896548 PM +01:00       0
       144 log file sync                64         29 10-JUL-13 09.23.52.709345 PM +01:00       0
       144 log file sync               128         18 10-JUL-13 09.23.40.675756 PM +01:00       0
       144 log file sync               256          8 10-JUL-13 09.23.32.935084 PM +01:00       0
       144 log file sync               512         18 10-JUL-13 09.23.32.949511 PM +01:00       0
       144 log file sync              1024          7 10-JUL-13 09.22.50.183919 PM +01:00       0

11 rows selected.

Now the results are totally different: most transactions have a commit time between 1 and 8 ms, so transactions are committed after writing data only to the primary redo logs and the Far Sync standby logs. From a performance perspective this is much better, and the primary database is still protected in no data loss mode. The corresponding line from the trace file:
WAIT #140132880983720: nam='log file sync' ela= 1003 buffer#=63 sync scn=1042355
This time the commit was almost 180 times faster and took only about 1 ms.
The above configuration is the simplest one and doesn't include an alternate destination to use if the Far Sync instance is down. Also, when the standby database becomes primary there is no far sync instance close to the new primary, which has to be addressed as well.
==================
DataGuard – Far Sync – part 2 - Data Guard Broker
Oracle introduced the Far Sync Data Guard configuration, which I described briefly in part 1. Now it is time for part two: using Data Guard Broker to add a Far Sync instance.
Assuming that you have a basic Data Guard Broker configuration ready (as described in "How to quickly build standby database and setup DataGuard configuration using Oracle 12c"), adding a new Far Sync instance is quite an easy task.
First we have to set up a new instance which will be used as Far Sync for our primary database.
Instance parameter file - created as a copy of the primary database configuration (not strictly necessary, but convenient):
*.audit_file_dest='/home/oracle/admin/test12c/adump'
*.audit_trail='db'
*.compatible='12.1.0.0.0'
*.control_files='/u01/app/oracle/oradata/test12c/control01.ctl','/u01/app/oracle/oradata/test12c/control02.ctl','/u01/app/oracle/oradata/test12c/control03.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='test12c'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=test12cXDB)'
*.log_archive_dest_1='location=/u01/app/oracle/oradata/test12c/arch'
*.open_cursors=300
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=450m
*.undo_tablespace='UNDOTBS1'
# for far sync
*.db_unique_name='test12c_far_sync'
*.LOG_FILE_NAME_CONVERT='test','test'
*.local_listener='(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.60)(PORT = 1522)))'

There are three parameters which have to be changed:
- db_unique_name
- local_listener - the new instance has to be registered with the proper listener
- log_file_name_convert - it is useful to set this even with dummy values, so Oracle can recreate the standby logs automatically
Listener configuration (listener.ora) on the Far Sync server:

SID_LIST_LISTENER_DG =
  (SID_LIST =
    (SID_DESC=
      (ORACLE_HOME= /u01/app/oracle/product/12.1.0/db1)
      (SID_NAME=test12c)
    )
    (SID_DESC =
      (SID_NAME = test12c)
      (GLOBAL_DBNAME = test12c_far_sync_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/db1)
    )
  )

LISTENER_DG =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROCDG1))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.60)(PORT = 1522))
    )
  )

tnsnames file - the entry test12c_far_sync has to be added on the primary and standby servers as well:
test12c_prim =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.45)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SID = test12c)
      (SERVER=dedicated)
    )
  )

test12c_std =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.1.79)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SID = test12c)
      (SERVER=dedicated)
    )
  )

test12c_far_sync =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.60)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SID = test12c)
      (SERVER=dedicated)
    )
  )

When the instance parameter file and network configuration are ready, we need to create the Far Sync control file on the primary server:
[oracle@ora12c dbs]$ rlwrap sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Fri Jul 26 23:21:59 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/tmp/farsync.ctl';

Database altered.
Copy it to the Far Sync server:
[oracle@ora12c dbs]$ scp /tmp/farsync.ctl 192.168.1.60:/u01/app/oracle/oradata/test12c/control01.ctl
oracle@192.168.1.60's password:
farsync.ctl                                  100%   10MB  10.3MB/s   00:00
[oracle@ora12c dbs]$ scp /tmp/farsync.ctl 192.168.1.60:/u01/app/oracle/oradata/test12c/control02.ctl
oracle@192.168.1.60's password:
farsync.ctl                                  100%   10MB  10.3MB/s   00:01
[oracle@ora12c dbs]$ scp /tmp/farsync.ctl 192.168.1.60:/u01/app/oracle/oradata/test12c/control03.ctl
oracle@192.168.1.60's password:
farsync.ctl                                  100%   10MB  10.3MB/s   00:00
[oracle@ora12c dbs]$

Now the Far Sync instance can be started and the new standby logs can be created. Make sure the Data Guard Broker has been started as well.
[oracle@oraclese dbs]$ rlwrap sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Fri Jul 26 23:24:27 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> create spfile from pfile;
File created.

SQL> startup mount
ORACLE instance started.

Total System Global Area  471830528 bytes
Fixed Size                  2289688 bytes
Variable Size             293605352 bytes
Database Buffers          167772160 bytes
Redo Buffers                8163328 bytes
Database mounted.

SQL> alter system set dg_broker_start = true;
System altered.

SQL> alter database add standby logfile ('/u01/app/oracle/oradata/test12c/standby01.log') size 50M reuse;
Database altered.

SQL> alter database add standby logfile ('/u01/app/oracle/oradata/test12c/standby02.log') size 50M reuse;
Database altered.

SQL> alter database add standby logfile ('/u01/app/oracle/oradata/test12c/standby03.log') size 50M reuse;
Database altered.

Like in the previous post about Data Guard configuration, the Broker will be used to do all the configuration work. You should connect to the Broker from the primary database. This is how the current configuration looks:
[oracle@ora12c ~]$ rlwrap dgmgrl /
DGMGRL for Linux: Version 12.1.0.1.0 - 64bit Production
Copyright (c) 2000, 2012, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> connect /
Connected as SYSDG.
DGMGRL> show configuration verbose;

Configuration - fsc

  Protection Mode: MaxPerformance
  Databases:
    test12c_prim - Primary database
    test12c_std  - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'

  Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Adding the Far Sync instance:
DGMGRL> add far_sync test12c_far_sync as connect identifier is 'test12c_far_sync';
far sync instance "test12c_far_sync" added
DGMGRL> show configuration verbose;

Configuration - fsc

  Protection Mode: MaxPerformance
  Databases:
    test12c_prim     - Primary database
    test12c_std      - Physical standby database
    test12c_far_sync - Far Sync (disabled)

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'

  Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Now the new rules for redo log transport have to be configured:
DGMGRL> edit database 'test12c_prim' set property redoroutes='(local : test12c_far_sync sync)';
Property "redoroutes" updated
DGMGRL> edit far_sync 'test12c_far_sync' set property redoroutes='(test12c_prim : test12c_std)';
Property "redoroutes" updated

The above configuration sends redo data from the primary database to the far sync instance in SYNC mode, and redo data from the far sync instance to the standby database in ASYNC mode. As the primary and the far sync instance are close to each other, you can have no data loss mode without the additional network synchronization overhead.
Redo logs routing rules are described in Oracle documentation.
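If you want to double-check the routing rules that were just set, the broker can display the RedoRoutes property for each member; this is only a sketch, and the exact quoting of the property name may vary between DGMGRL versions:

DGMGRL> show database 'test12c_prim' 'RedoRoutes';
DGMGRL> show far_sync 'test12c_far_sync' 'RedoRoutes';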
Enabling and checking configuration
DGMGRL> enable far_sync test12c_far_sync;
Enabled.
DGMGRL> show configuration verbose;

Configuration - fsc

  Protection Mode: MaxPerformance
  Databases:
    test12c_prim     - Primary database
    test12c_far_sync - Far Sync
    test12c_std      - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'

  Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

If everything is OK we can switch the protection mode from Maximum Performance to Maximum Availability to get all the advantages of our newly configured Far Sync instance.
DGMGRL> edit configuration set protection mode as MaxAvailability;
Succeeded.
DGMGRL> show configuration verbose;

Configuration - fsc

  Protection Mode: MaxAvailability
  Databases:
    test12c_prim     - Primary database
    test12c_far_sync - Far Sync
    test12c_std      - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'

  Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
Now all is done. Happy testing.