RAC One Node

What are the significant features of RAC One Node?

As discussed, similar functionality was available in previous versions as well, but it was quite challenging to implement. RAC One Node, introduced for the first time in 11.2 (and constantly enhanced and made more feature-rich since then), offers the same functionality far more effectively and simply. Some of the features (though not a complete list) that make it very effective are:
  1. Online migration of sessions from the active node to another node
  2. Easy conversion from RAC One Node to full RAC and vice versa
  3. Integration with features like Instance Caging to provide better resource management
  4. Supported on Exadata
  5. Supported on OVM (Oracle VM)
  6. Support for rolling patches, with the same interface as full RAC
  7. Easy creation of a RAC One Node database using DBCA (from 11.2.0.2)
  8. Supported on all platforms where Oracle RAC is supported
Oracle RAC One Node is based on the same model as RAC, but with one major difference. Full RAC works as an Active-Active solution: all the nodes in the cluster are active, can accept connections and workloads, and work together as one single unit. RAC One Node, as the name suggests, works as an Active-Passive solution: only one node is active at any time, with the other nodes available and ready to accept the workload in the case of a planned or an unplanned downtime on the first node.

How does RAC One Node handle Planned Downtime?

For a planned migration, RAC One Node works perfectly because it lets users continue with their work without impacting the business. When the feature was first introduced in 11.2.0.1, the OMOTION utility was needed to move the instance from the source node to the target node. From 11.2.0.2 onwards this is no longer required: the clusterware software itself takes care of everything, including migrating the instance from one node to another and moving the sessions along with it.
Screenshot: three nodes belonging to one cluster, each connected to  a common, shared storage
In the above diagram, we have got three nodes belonging to one cluster, each connected to  a common, shared storage. We also have a user session which is alive while being connected to the first node.
When a planned relocation is initiated, the clusterware starts shifting the session(s) connected to the primary node over to the target node once their current work on it is finished. In the meantime, another instance is started on the second node, so for a small time window an Oracle instance is running on both nodes.
Screenshot: If a planned failover occurs, the clusterware would detect it and would start shifting the sessions
Finally, only one node (in this example, node 2) is active: all the sessions connected to the first node have been migrated to node 2, and the instance on node 1 has been shut down.
Screenshot: Only one node would be active and all the sessions connected over the first node would have been migrated to the node 2
Because RAC One Node offers online migration of the instance from the source node to the target node, the mechanism provides a transparent solution for situations such as a node crash, an instance failure, or a node that no longer has enough resources to handle the incoming workload. Since Oracle 11.2 RAC uses SCAN (Single Client Access Name), if the client is configured to discover the cluster through the SCAN, such a migration is completely transparent to it. The only thing that may take some time is the actual migration from the source node to the target node, and that period is customizable.
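For illustration, a client alias that connects through the SCAN might look like the following tnsnames.ora entry (the SCAN host name here is hypothetical; srvc1 is the service used later in this article):

```
ARISTONE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = srvc1)
    )
  )
```

Because the client resolves the SCAN rather than an individual node name, no client-side change is needed when the instance relocates to another node.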
While RAC One Node supports online migration with minimal impact on the business, it's important to be clear that this possibility of running two instances on two different nodes is strictly temporary, and exists only for planned downtime. The first instance is kept alive alongside the second only until the migration of the sessions is complete. Once the migration of the instance to the other node finishes (along with the sessions connected to the first node), the first instance is shut down and the mode is once again Active-Passive, with only one instance up. This also makes complete sense: if two instances could keep running on two nodes, RAC One Node wouldn't be any different from a normal RAC, would it?
Let's perform the online relocation ourselves and see what happens to the existing instance and to the target instance.
[oracle@host01 ~]$ srvctl config database -d aristone
Database unique name: aristone
Database name: aristone
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +FRA/aristone/spfilearistone.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: aristone
Database instances:
Disk Groups: FRA
Mount point paths:
Services: srvc1
Type: RAC One Node
Online relocation timeout: 30
Instance name prefix: aristone
Candidate servers: host01,host02,host03
Database is administrator managed
Before we start the migration, let's check the current status of the database and its instance.
[oracle@host01 ~]$ srvctl status database -d aristone
 Instance aristone_1 is running on node host01
 Online relocation: INACTIVE
So what we have here is a RAC One Node database named aristone, running on node host01 with the instance aristone_1. We shall now relocate the instance from this node to the target node host02. The output also shows that no online relocation is active at the moment.
It's worth repeating that in version 11.2.0.1 this task was done by the OMOTION utility, but from 11.2.0.2 onwards the utility is no longer required. The release used for this demo is 11.2.0.3, so the utility wasn't needed.
The relocation is done using the command SRVCTL RELOCATE DATABASE, to which we pass the name of the target node and the option for verbose output. Below is the output of this command:
[oracle@host01 ~]$ srvctl relocate database -d aristone -n host02 -w 30 -v
Configuration updated to two instances
Instance aristone_2 started
Services relocated
Waiting for up to 30 minutes for instance aristone_1 to stop ...
Instance aristone_1 stopped
Configuration updated to one instance
And from another session, we can see that the migration is going on.
[oracle@host01 ~]$ srvctl status database -d aristone
Instance aristone_2 is running on node host02
Online relocation: ACTIVE
Source instance: aristone_1 on host01
Destination instance: aristone_2 on host02
We can see that the second instance has come up, and the relocation status is shown as ACTIVE, meaning the relocation is still in progress. We may need to run the command a couple of times, as it can take a while for the source instance to stop.
[oracle@host01 ~]$ srvctl status database -d aristone
Instance aristone_2 is running on node host02
Online relocation: ACTIVE
Source instance: aristone_1 on host01
Destination instance: aristone_2 on host02

[oracle@host01 ~]$ srvctl status database -d aristone
Instance aristone_2 is running on node host02
Online relocation: ACTIVE
Source instance: aristone_1 on host01
Destination instance: aristone_2 on host02

Finally, when the relocation is over, the output looks like this:
[oracle@host01 ~]$ srvctl status database -d aristone
Instance aristone_2 is running on node host02
Online relocation: INACTIVE
[oracle@host01 ~]$
As we can see, once the relocation is complete, only the instance "aristone_2" is running, and the ONLINE RELOCATION status has gone back to INACTIVE.

What about Unplanned disasters?

If it's an unplanned shutdown, an instance is still brought up on the second node, but in this case the online migration of the sessions does not take place.
When the instance on the initial node comes down unexpectedly, there is downtime, and the smooth migration of user sessions from the source node to the target seen with planned downtime isn't possible. But since RAC One Node essentially runs on Grid Infrastructure (or, to use slightly older terminology, the clusterware software) underneath, it is able to detect the failure without a DBA's intervention. As in the normal working of the clusterware, the first attempt is to restart the failed instance on the very node where it was running initially. If for some reason the instance can't be started on the same node or, worse, the node itself has crashed, the instance is automatically started on a target node.
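How client sessions behave after such a failover depends on the service definition. As a sketch (the flag values below are assumptions; check `srvctl add service -h` on your release), a TAF-enabled service for this database might be configured like this:

```shell
# Hypothetical example: a service with TAF (failover type SELECT,
# method BASIC) so that queries can resume on the surviving instance
# after an unplanned failover. -z is the retry count, -w the delay.
srvctl add service -d aristone -s srvc1 -e SELECT -m BASIC -z 180 -w 5

# Start the service and verify where it is running
srvctl start service -d aristone -s srvc1
srvctl status service -d aristone -s srvc1
```

With such a service in place, clients connecting through the SCAN and the service name reconnect to the new instance automatically, although uncommitted transactions are still lost.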

So How Do I Create a RAC One Node Database?

With 11.2.0.2, the option to create a RAC One Node database was added to DBCA itself. In the prior release (11.2.0.1), this wasn't possible unless you applied patch #9004119 to your installation. From 11.2.0.2 onwards, the option is built into DBCA whenever it finds itself running in a clustered environment.
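If you prefer the command line over the wizard, DBCA's silent mode can also create a RAC One Node database. The following is a hedged sketch (flag names as reported by `dbca -help` on 11.2; passwords are placeholders, and the node and disk group names match the environment used in this article):

```shell
dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName aristone -sid aristone \
  -RACOneNode -RACOneNodeServiceName srvc1 \
  -nodelist host01,host02,host03 \
  -sysPassword <sys_password> -systemPassword <system_password> \
  -storageType ASM -diskGroupName FRA
```

The interactive wizard walked through below asks for the same inputs, one screen at a time.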
Screenshot: creating a RAC One Node database
As shown, the second option creates a RAC One Node database. The process is not much different from creating a normal RAC database, as the following screenshots show.
After selecting the option to create the database, the next step is to choose the right template for the database creation.
Screenshot: choose the right template for the database creation
Screenshot:Oracle rac create database template
The next step is to enter the database name and SID, and to define the service that will be used by this database. You can also choose whether the database is Admin-Managed or Policy-Managed.
Screenshot: Oracle RAC one node identification
For our example, we have chosen both the database name and SID to be ARISTONE, and it's an Admin-Managed database. Being one, the list of nodes is shown, from which we select the target node(s) to which the instance can fail over in the event of a crash. For an Admin-Managed database, failover cannot happen to a node that was not selected in this step of the wizard. If you create a Policy-Managed database instead, the node list is not shown; the option to use a server pool appears instead, and you must ensure that a node is available in that server pool so that the failover can happen to it.
The next step is to choose whether to configure Database Control, and to set the passwords for the system accounts.
Screenshot: Oracle RAC One Node management options
Screenshot: Oracle  RAC One Node database credentials
The next option is the storage location, which can be either ASM or a file system. If you choose ASM, a pop-up asks for the password of the ASMSNMP user as well. We have chosen ASM for the storage.
Screenshot: Oracle RAC One Node database file locations
Next is whether we want to use the Fast Recovery Area (FRA) or not.
Screenshot: Oracle RAC One Node database recovery
Next come the options to run any custom scripts and to choose the memory parameters and other database options.
Screenshot: Oracle RAC One Node database content
Finally, on the Create Database and Summary screens, the database creation is started.
Screenshot: Oracle RAC One Node database creation
We can see the progress of the database creation in the progress window.
Screenshot: Oracle RAC One Node database creation progress
Finally, we have the database created and information about it shown in the last page.
Screenshot: Oracle RAC One Node database creation finished
So finally, we have the database ARISTONE created as a RAC One Node database, and we can confirm this from its properties shown by SRVCTL on the command line.
Though it works with one node only, it's still a RAC database in itself:
oracle@host01 ~]$ srvctl config database -d aristone
Database unique name: aristone
Database name: aristone
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +FRA/aristone/spfilearistone.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: aristone
Database instances:
Disk Groups: FRA
Mount point paths:
Services: srvc1
Type: RAC One Node
Online relocation timeout: 30
Instance name prefix: aristone
Candidate servers: host01,host02,host03
Database is administrator managed
It is quite evident that most of the properties of a RAC One Node database are similar to those of a normal RAC database, but a few are specific to RAC One Node. These are as follows:
  1. Database Type: This shows which type of database it is. For RAC One Node, the output is RAC One Node.
  2. Online Relocation Timeout: This is the time given to sessions to complete their transactions and switch over to the target node without any issue. If a transaction fails to complete within this period, it is aborted and the session is switched over to the target node. The unit of this parameter is minutes, the default value is 30, and the maximum allowed is 720 minutes (12 hours).
  3. Candidate Servers: This is the list of those nodes where the relocation can happen.
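The relocation timeout is not fixed at creation time; it can be changed later with SRVCTL MODIFY DATABASE. A sketch, assuming the -w flag of 11.2 srvctl (the same flag used by the relocate command above):

```shell
# Lower the online relocation timeout from the default 30 to 15 minutes
srvctl modify database -d aristone -w 15

# Confirm the change in the database configuration
srvctl config database -d aristone | grep -i "relocation timeout"
```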

I have a single-instance, non-RAC database. Can I convert it to a RAC One Node database?

To convert a single-instance database into a RAC One Node database, one can take the aid of DBCA. All the tasks required for a RAC One Node database to function, like creating the redo threads, undo tablespaces, etc., are done automatically by DBCA, which makes the transition much easier. The only prerequisite is that the underlying environment must support everything that a clustered database needs to run: all the necessary software for RAC services must be in place, along with the required hardware, such as a shared storage on which the database files are placed.
The conversion is done with the help of a template created from the existing single-instance database. Using this template, a new RAC One Node database can be created.
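The template step can itself be scripted. A hedged sketch using DBCA's silent template creation (flag names assumed from `dbca -help` on 11.2; the source database details and password are placeholders):

```shell
# Create a structure template from the existing single-instance database;
# -sourceDB takes the source in host:port:sid form on this release.
dbca -silent -createTemplateFromDB \
  -sourceDB host01:1521:orcl \
  -templateName orcl_seed.dbt \
  -sysDBAUserName sys -sysDBAPassword <sys_password>
```

The resulting template can then be selected in DBCA when creating the new RAC One Node database.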

And what if I have a RAC One Node database and I want to convert it to a complete RAC database?

Yes, you can do it and very easily!
If you already have a RAC One Node database and want to convert it to a complete RAC database, you can do it using the command SRVCTL CONVERT DATABASE. Since our database ARISTONE is a RAC One Node database, we shall convert it to a complete RAC database running over three hosts. So let's shut it down and issue the CONVERT command.
[oracle@host01 ~]$ srvctl stop database -d aristone
[oracle@host01 ~]$ srvctl convert database -d aristone -c RAC
Now, since we already have instance aristone_2 running on one host, we shall add instances for the remaining two hosts.
[oracle@host01 ~]$ srvctl add instance -d aristone -i aristone_3 -n host03
[oracle@host01 ~]$ srvctl add instance -d aristone -i aristone_1 -n host01
Now, let's start the newly converted database and check its status.
[oracle@host01 ~]$ srvctl start database -d aristone
[oracle@host01 ~]$ srvctl status database -d aristone
Instance aristone_1 is running on node host01
Instance aristone_2 is running on node host02
Instance aristone_3 is running on node host03
So, as expected, the database is up and running with 3 instances on all 3 nodes and has been successfully converted. Let's confirm that it is now a RAC database by looking at its properties with the SRVCTL CONFIG command.
[oracle@host01 ~]$ srvctl config database -d aristone
Database unique name: aristone
Database name: aristone
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +FRA/aristone/spfilearistone.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: aristone
Database instances: aristone_1,aristone_2,aristone_3
Disk Groups: FRA
Mount point paths:
Services:
Type: RAC
Database is administrator managed
Since it's a RAC database now, the TYPE is shown as RAC, and parameters like the online relocation TIMEOUT, which were previously visible for the RAC One Node database, no longer appear.



==============================================================================

Oracle RACOne Node -- Changes in 11.2.0.2 (Doc ID 1232802.1)


Oracle Real Application Clusters One Node (Oracle RAC One Node) is a single instance of an Oracle Real Application Clusters (Oracle RAC) database that runs on one node in a cluster. This option adds to the flexibility that Oracle offers for database consolidation. There are significant changes in the way an Oracle RAC One Node database is administered in version 11.2.0.2 compared to 11.2.0.1.

Major Changes include:

  • Oracle Universal Installer (OUI) has a new option to select an Oracle RAC One Node Installation.
  • The DBCA is now capable of creating and configuring an Oracle RAC One Node database.
  • SRVCTL is capable of configuring and administering an Oracle RAC One Node database.
    (In 11.2.0.1 the administration was performed using command line tools and scripts.)
  • Oracle Data Guard and the Oracle Data Guard Broker are Oracle RAC One Node aware.
Creating a new Oracle RAC One Node database in 11.2.0.2 involves the following steps:
  • Install and configure 11.2.0.2 Oracle Grid Infrastructure on all the servers that you want to host the Oracle RAC One Database, either under normal operations or after a failover or relocation.
  • Install the Oracle RDBMS software by selecting Oracle RAC One Node for the respective nodes. 
  • Create an Oracle RAC One Node database using the DBCA.

Administration of an Oracle RAC One Node Database

1). Verifying an existing Oracle RAC One Node Database

command: srvctl config database -d <db_unique_name>

Eg:
[oracle@harac1 bin]$ srvctl config database -d racone
Database unique name: racone
Database name:
Oracle home: /home/oracle/product/11gR2/11.2.0.2_RACOne
Oracle user: oracle
Spfile: +DG2/RacOne/spfileRacOne.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racone
Database instances:
Disk Groups: DG2,DG1
Mount point paths:
Services:
Type: RACOneNode <<<<
Online relocation timeout: 30
Instance name prefix: racone
Candidate servers: harac1,harac2,lfmsx3
Database is administrator managed

command: srvctl status database -d <db_unique_name>
Eg:
[oracle@harac1 bin]$ srvctl status database -d racone
Instance racone_1 is running on node harac1
Online relocation: INACTIVE

2). Performing an online migration

Command: srvctl relocate database -d <db_unique_name> {[-n <target_node>] [-w <timeout>] | -a [-r]} [-v]
-d <db_unique_name>  Unique name of the database to relocate
-n <target_node>     Target node to which to relocate the database
-w <timeout>         Online relocation timeout in minutes
-a                   Abort failed online relocation
-r                   Remove the target node of a failed online relocation request from the candidate server list of an administrator-managed RAC One Node database
-v                   Verbose output
-h                   Print usage

Eg:
[oracle@harac2 dbs]$ srvctl relocate database -d racone -n harac1 -w 15 -v
Configuration updated to two instances
Instance racone_1 started
Waiting for 15 minutes for instance racone_2 to stop.....
Instance racone_2 stopped
Configuration updated to one instance

The default timeout for an online relocation is 30 minutes. This is the time given to the sessions to finish their open transactions and migrate to the new instance. If a session does not finish its in-flight transaction in this amount of time, the transaction is cancelled and the session is removed from the instance as part of the "shutdown abort" operation issued after the timeout has expired. If a different timeout is preferred, the -w option can be used to specify a timeout of up to 720 minutes.
Status during the relocation:

[oracle@harac2 bin]$ srvctl status database -d racone
Instance racone_1 is running on node harac1
Online relocation: ACTIVE
Source instance: racone_2 on harac2
Destination instance: racone_1 on harac1

Once the migration is completed, we should see the sessions moved to the remote instance.
Eg:
[oracle@harac2 orarootagent]$ srvctl status database -d racone
Instance racone_1 is running on node harac1
Online relocation: INACTIVE


3) Converting an Oracle RAC One Node Database to Oracle RAC or vice versa

Converting between an Oracle RAC One Node and Oracle RAC database can be achieved using the "srvctl convert database" command. For example:
To convert a database from Oracle RAC One Node to Oracle RAC:

srvctl convert database -d <db_unique_name> -c RAC [-n <node_name>]

Eg:
srvctl convert database -d racone -c RAC -n harac1

Add more instances on other nodes as required:

[oracle@harac2 bin]$ srvctl add instance -d racone -i racone_1 -n harac1
[oracle@harac2 bin]$ srvctl add instance -d racone -i racone_3 -n lfmsx3

[oracle@harac2 bin]$ srvctl config database -d racone
Database unique name: racone
Database name:
Oracle home: /home/oracle/product/11gR2/11.2.0.2_RACOne
Oracle user: oracle
Spfile: +DG2/RacOne/spfileRacOne.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racone
Database instances: racone_1,racone_2,racone_3
Disk Groups: DG2,DG1
Mount point paths:
Services:
Type: RAC
Database is administrator managed

After starting the database on all the 3 nodes:

[oracle@harac2 bin]$ srvctl status database -d racone
Instance racone_1 is running on node harac1
Instance racone_2 is running on node harac2
Instance racone_3 is running on node lfmsx3

To convert a database from Oracle RAC to Oracle RAC One Node:

srvctl convert database -d <db_unique_name> -c RACONENODE -i <instance_name_prefix> [-w <timeout>]

Eg:  srvctl convert database -d racone -c RACONENODE -w 30 -i racone
[oracle@harac2 bin]$ srvctl config database -d racone
Database unique name: racone
Database name:
Oracle home: /home/oracle/product/11gR2/11.2.0.2_RACOne
Oracle user: oracle
Spfile: +DG2/RacOne/spfileRacOne.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racone
Database instances:
Disk Groups: DG2,DG1
Mount point paths:
Services: racone_taf
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: racone
Candidate servers: harac1,harac2,lfmsx3
Database is administrator managed

After starting the database:
[oracle@harac2 bin]$ srvctl status database -d racone
Instance racone_1 is running on node harac1
Online relocation: INACTIVE
Notes:

1)
Please ensure that at least one service is configured before running the convert command; otherwise the following error will appear:

[oracle@harac2 oswtop]$ srvctl convert database -d racone -c RACONENODE -w 30 -i racone
PRCD-1242 : Unable to convert RAC database racone to RAC One Node database because the database had no service added.

2)
During the RAC to RAC One Node conversion, please ensure that the additional instances are removed using DBCA before running the "srvctl convert database" command.

$ srvctl convert database -d racone -c RACONENODE -w 30 -i racone
PRCD-1214 : Administrator-managed RAC database racone has more than one instance


4) Upgrading an Oracle RAC One Node database from 11.2.0.1 to 11.2.0.2

In order to upgrade an existing 11.2.0.1 Oracle RAC One Node to 11.2.0.2 perform the following steps:
  • Upgrade Oracle Grid Infrastructure to 11.2.0.2 (Only out of place upgrade is possible).
  • Upgrade Oracle RAC RDBMS software to 11.2.0.2 (an out of place upgrade is recommended).
  • The DBUA is not aware of Oracle RAC One Node in 11.2.0.2 and sets the database type to RAC after the upgrade. This is addressed in internal bug 9950713, to be fixed in 11.2.0.3.
  • Development suggests the following workaround.

       (a)  Run racone2rac.sh before the upgrade to convert the RAC One Node database to a one-node RAC database.
       (b)  Run the DBUA to complete the upgrade.
       (c)  After the upgrade, run the "srvctl convert database" command listed above to convert the database back to RAC One Node.

For more details, please refer to Oracle Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2) http://download.oracle.com/docs/cd/E11882_01/rac.112/e16795/onenode.htm#RACAD7894
========================================================================
