Remove a Node from an Existing Oracle RAC 11g R2 Cluster on Linux
Example Configuration
The example configuration used in this guide stores all physical database files (data, online redo logs, control files, archived redo logs) on ASM in an ASM disk group named +RACDB_DATA while the Fast Recovery Area is created in a separate ASM disk group named +FRA.
The existing three-node Oracle RAC and the network storage server are configured as described in the tables below.
Node Name | Instance Name | Database Name | Processor | RAM | Operating System |
---|---|---|---|---|---|
racnode1 | racdb1 | racdb.idevelopment.info | 1 x Dual Core Intel Xeon, 3.00 GHz | 4GB | CentOS 5.5 - (x86_64) |
racnode2 | racdb2 | racdb.idevelopment.info | 1 x Dual Core Intel Xeon, 3.00 GHz | 4GB | CentOS 5.5 - (x86_64) |
racnode3 [Remove] | racdb3 | racdb.idevelopment.info | 1 x Dual Core Intel Xeon, 3.00 GHz | 4GB | CentOS 5.5 - (x86_64) |
openfiler1 | | | 2 x Intel Xeon, 3.00 GHz | 6GB | Openfiler 2.3 - (x86_64) |
Node Name | Public IP | Private IP | Virtual IP | SCAN Name | SCAN IP |
---|---|---|---|---|---|
racnode1 | 192.168.1.151 | 192.168.2.151 | 192.168.1.251 | racnode-cluster-scan | 192.168.1.187, 192.168.1.188, 192.168.1.189 |
racnode2 | 192.168.1.152 | 192.168.2.152 | 192.168.1.252 | | |
racnode3 [Remove] | 192.168.1.153 | 192.168.2.153 | 192.168.1.253 | | |
openfiler1 | 192.168.1.195 | 192.168.2.195 | | | |
Software Component | OS User | Primary Group | Supplementary Groups | Home Directory | Oracle Base / Oracle Home |
---|---|---|---|---|---|
Grid Infrastructure | grid | oinstall | asmadmin, asmdba, asmoper | /home/grid | /u01/app/grid /u01/app/11.2.0/grid |
Oracle RAC | oracle | oinstall | dba, oper, asmdba | /home/oracle | /u01/app/oracle /u01/app/oracle/product/11.2.0/dbhome_1 |
Storage Component | File System | Volume Size | ASM Volume Group Name | ASM Redundancy | Openfiler Volume Name |
---|---|---|---|---|---|
OCR/Voting Disk | ASM | 2GB | +CRS | External | racdb-crs1 |
Database Files | ASM | 32GB | +RACDB_DATA | External | racdb-data1 |
ASM Cluster File System | ASM | 32GB | +DOCS | External | racdb-acfsdocs1 |
Fast Recovery Area | ASM | 32GB | +FRA | External | racdb-fra1 |
The following is a conceptual look at what the environment will look like after removing the third Oracle RAC node (racnode3) from the cluster.
Backup OCR
Back up the OCR using ocrconfig -manualbackup from a node that is to remain a member of the Oracle RAC.
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -manualbackup
Note that voting disks are automatically backed up in OCR after the changes we will be making to the cluster.
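To confirm that the manual backup completed, the available OCR backups can be listed from the same node. This is a minimal sketch; ocrconfig -showbackup reports both automatic and manual backups, and the output is omitted here.

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -showbackup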
Remove Instance from any Services - (if necessary)
The instance racdb3 is hosted on node racnode3, which is part of the existing Oracle RAC and is the node being removed in this guide. The racdb3 instance is in the preferred instance list of the service racdbsvc.idevelopment.info.
[oracle@racnode1 ~]$ srvctl config service -d racdb -s racdbsvc.idevelopment.info -v
Service name: racdbsvc.idevelopment.info
Service is enabled
Server pool: racdb_racdbsvc.idevelopment.info
Cardinality: 3
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: racdb3,racdb1,racdb2
Available instances:
Update the racdbsvc.idevelopment.info service to remove the racdb3 instance.
[oracle@racnode1 ~]$ srvctl modify service -d racdb -s racdbsvc.idevelopment.info -n -i racdb1,racdb2

[oracle@racnode1 ~]$ srvctl config service -d racdb -s racdbsvc.idevelopment.info -v
Service name: racdbsvc.idevelopment.info
Service is enabled
Server pool: racdb_racdbsvc.idevelopment.info
Cardinality: 2
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: racdb1,racdb2
Available instances:

[oracle@racnode1 ~]$ srvctl status service -d racdb -s racdbsvc.idevelopment.info -v
Service racdbsvc.idevelopment.info is running on instance(s) racdb1,racdb2
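If the service happens to be running on the racdb3 instance at this point, it can be relocated to one of the remaining instances before continuing. A minimal sketch, assuming racdb1 as the target instance:

[oracle@racnode1 ~]$ srvctl relocate service -d racdb -s racdbsvc.idevelopment.info -i racdb3 -t racdb1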
Remove Instance from the Cluster Database
As the Oracle software owner, run the Oracle Database Configuration Assistant (DBCA) in silent mode from a node that will remain in the cluster to remove the racdb3 instance from the existing cluster database. The instance that's being removed by DBCA must be up and running.
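Before running DBCA, it can be worth confirming that the racdb3 instance is up. A minimal check from any node as the oracle user (output omitted):

[oracle@racnode1 ~]$ srvctl status database -d racdb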
[oracle@racnode1 ~]$ dbca -silent -deleteInstance -nodeList racnode3 \
      -gdbName racdb.idevelopment.info -instanceName racdb3 \
      -sysDBAUserName sys -sysDBAPassword ******
Deleting instance
20% complete
21% complete
22% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/racdb.log" for further details.
Review the DBCA trace file, which is located at /u01/app/oracle/cfgtoollogs/dbca/trace.log_OraDb11g_home1_ .
Verify that the racdb3 database instance was removed from the cluster database.
[oracle@racnode1 ~]$ srvctl config database -d racdb -v
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +RACDB_DATA/racdb/spfileracdb.ora
Domain: idevelopment.info
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: RACDB_DATA,FRA
Mount point paths:
Services: racdbsvc.idevelopment.info
Type: RAC
Database is administrator managed

[oracle@racnode1 ~]$ sqlplus / as sysdba

SQL> select inst_id, instance_name, status,
  2         to_char(startup_time, 'DD-MON-YYYY HH24:MI:SS') as "START_TIME"
  3  from gv$instance order by inst_id;

   INST_ID INSTANCE_NAME    STATUS       START_TIME
---------- ---------------- ------------ --------------------
         1 racdb1           OPEN         01-MAY-2012 11:30:01
         2 racdb2           OPEN         01-MAY-2012 11:30:00
As seen from the output above, the racdb3 database instance was removed and only racdb1 and racdb2 remain.
When DBCA is used to delete the instance, it also takes care of removing Oracle dependencies such as the public redo log thread, the undo tablespace, and all instance-related parameter entries for the deleted instance. This can be seen in the trace file produced by DBCA (/u01/app/oracle/cfgtoollogs/dbca/trace.log_OraDb11g_home1_):
...
... thread SQL = SELECT THREAD# FROM V$THREAD WHERE UPPER(INSTANCE) = UPPER('racdb3')
... threadNum.length=1
... threadNum=3
... redoLog SQL =SELECT GROUP# FROM V$LOG WHERE THREAD# = 3
... redoLogGrNames length=2
... Group numbers=(5,6)
... logFileName SQL=SELECT MEMBER FROM V$LOGFILE WHERE GROUP# IN (5,6)
... logFiles length=4
... SQL= ALTER DATABASE DISABLE THREAD 3
... archive mode = false
... SQL= ALTER DATABASE DROP LOGFILE GROUP 5
... SQL= ALTER DATABASE DROP LOGFILE GROUP 6
... SQL=DROP TABLESPACE UNDOTBS3 INCLUDING CONTENTS AND DATAFILES
... sidParams.length=2
... SQL=ALTER SYSTEM RESET undo_tablespace SCOPE=SPFILE SID = 'racdb3'
... SQL=ALTER SYSTEM RESET instance_number SCOPE=SPFILE SID = 'racdb3'
...
Check whether the redo log thread and UNDO tablespace for the deleted instance have been removed (in my example, they were successfully removed). If not, remove them manually as shown below.
SQL> select thread# from v$thread where upper(instance) = upper('racdb3');

   THREAD#
----------
         3

SQL> select group# from v$log where thread# = 3;

    GROUP#
----------
         5
         6

SQL> select member from v$logfile where group# in (5,6);

MEMBER
--------------------------------------------------
+RACDB_DATA/racdb/onlinelog/group_5.270.781657813
+FRA/racdb/onlinelog/group_5.281.781657813
+RACDB_DATA/racdb/onlinelog/group_6.271.781657815
+FRA/racdb/onlinelog/group_6.289.781657815

SQL> alter database disable thread 3;

Database altered.

SQL> alter database drop logfile group 5;

Database altered.

SQL> alter database drop logfile group 6;

Database altered.

SQL> drop tablespace undotbs3 including contents and datafiles;

Tablespace dropped.

SQL> alter system reset undo_tablespace scope=spfile sid = 'racdb3';

System altered.

SQL> alter system reset instance_number scope=spfile sid = 'racdb3';

System altered.
Remove Oracle Database Software
In this step, the Oracle Database software will be removed from the node that will be deleted. In addition, the inventories of the remaining nodes will be updated to reflect the removal of the node's Oracle Database software home.
Log in as the Oracle Database software owner when executing the tasks in this section.
Verify Listener Not Running in Oracle Home
If any listeners are running from the Oracle home being removed, they will need to be disabled and stopped.
Check if any listeners are running from the Oracle home to be removed.
[oracle@racnode3 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home:
In Oracle 11g Release 2 (11.2), the default listener runs from the Grid home. Since the listener shown above runs from the Grid home, disabling and stopping it can be skipped.
If any listeners were explicitly created to run from the Oracle home being removed, they would need to be disabled and stopped.
$ srvctl disable listener -l
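For example, a listener that had been created in the Oracle home could be disabled and stopped as follows. This is a minimal sketch; LISTENER_RACDB3 is a hypothetical listener name used only for illustration.

$ srvctl disable listener -l LISTENER_RACDB3 -n racnode3
$ srvctl stop listener -l LISTENER_RACDB3 -n racnode3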
Update Oracle Inventory - (Node Being Removed)
As the Oracle software owner, execute runInstaller from Oracle_home/oui/bin on the node being removed to update the inventory. Set "CLUSTER_NODES={name_of_node_to_delete}".
[oracle@racnode3 ~]$ cd $ORACLE_HOME/oui/bin

[oracle@racnode3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={racnode3}" -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 9983 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
After executing runInstaller on the node to be deleted, the inventory.xml file on that node (/u01/app/oraInventory/ContentsXML/inventory.xml) will show only the node to be deleted under the Oracle home name.
...
The inventory.xml file on the other nodes will still show all of the nodes in the cluster. The inventory on the remaining nodes will be updated after de-installing the Oracle Database software.
...
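One way to check the NODE_LIST recorded for the Oracle home is to search the inventory directly on each node. A minimal sketch follows; the home name OraDb11g_home1 is an assumption based on the default naming convention, and the -A context size is arbitrary.

[oracle@racnode1 ~]$ grep -A 6 "OraDb11g_home1" /u01/app/oraInventory/ContentsXML/inventory.xml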
De-install Oracle Home
Before attempting to de-install the Oracle Database software, review the /etc/oratab file on the node to be deleted and remove any entries that contain a database instance running out of the Oracle home being de-installed. Do not remove any +ASM entries.
...
#
+ASM3:/u01/app/11.2.0/grid:N # line added by Agent
If a rogue entry exists in the /etc/oratab file that contains the Oracle home being deleted, then the deinstall described in the next step will fail:
ERROR: The option -local will not modify any database configuration for this Oracle home.
Following databases have instances configured on local node : 'racdb3'. Remove these
database instances using dbca before de-installing the local Oracle home.
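If such an entry is found, simply delete it from /etc/oratab on racnode3 before running the deinstall. For reference, a leftover entry for the deleted instance would typically look like racdb3:/u01/app/oracle/product/11.2.0/dbhome_1:N. A minimal sketch of removing it (the sed pattern is only an illustration; back up the file first):

[root@racnode3 ~]# cp -p /etc/oratab /etc/oratab.bak
[root@racnode3 ~]# sed -i '/^racdb3:/d' /etc/oratab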
When using a non-shared Oracle home (as is the case in this example guide), run deinstall as the Oracle Database software owner from the node to be removed in order to delete the Oracle Database software.
[oracle@racnode3 ~]$ cd $ORACLE_HOME/deinstall

[oracle@racnode3 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
...
Review the complete deinstall output.
After the de-install completes, the Oracle home will have been removed from the inventory.xml file on the local node.
Update Oracle Inventory - (All Remaining Nodes)
From one of the nodes that is to remain part of the cluster, execute runInstaller (without the -local option) as the Oracle software owner to update the inventories with a list of the nodes that are to remain in the cluster. Use the CLUSTER_NODES option to specify the nodes that will remain in the cluster.
[oracle@racnode1 ~]$ cd $ORACLE_HOME/oui/bin

[oracle@racnode1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={racnode1,racnode2}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 9521 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
Review the inventory.xml file on each remaining node in the cluster to verify that the node list for the Oracle home no longer includes the node being removed.
...
Remove Node from Clusterware
This section describes the steps to remove a node from Oracle Clusterware.
Verify Grid_home
Most of the commands in this section will be run as root. Ensure that Grid_home correctly specifies the full directory path for the Oracle Clusterware home on each node, where Grid_home is the location of the installed Oracle Clusterware software.
[root@racnode1 ~]# GRID_HOME=/u01/app/11.2.0/grid
[root@racnode1 ~]# export GRID_HOME
Unpin Node
Run the following command as root to determine whether the node you want to delete is active and whether it is pinned.
[root@racnode1 ~]# $GRID_HOME/bin/olsnodes -s -t
racnode1 Active Unpinned
racnode2 Active Unpinned
racnode3 Active Unpinned
If the node being removed is already unpinned then you do not need to run the crsctl unpin css command below and can proceed to the next step.
If the node being removed is pinned with a fixed node number, then run the crsctl unpin css command as root from a node that is to remain a member of the Oracle RAC in order to unpin the node and expire the CSS lease on the node you are deleting.
[root@racnode1 ~]# $GRID_HOME/bin/crsctl unpin css -n racnode3
CRS-4667: Node racnode3 successfully unpinned.
If Cluster Synchronization Services (CSS) is not running on the node you are deleting, then the crsctl unpin css command in this step fails.
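To confirm that CSS is online on the node being deleted before attempting the unpin, a quick check can be run on that node. This is a minimal sketch; crsctl check css is a standard Clusterware command and its output is omitted here.

[root@racnode3 ~]# $GRID_HOME/bin/crsctl check css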
Disable Oracle Clusterware
Before executing the rootcrs.pl script described in this section, you must ensure EMAGENT is not running on the node being deleted.
[oracle@racnode3 ~]$ emctl stop dbconsole
If you have been following along in this guide, the EMAGENT should not be running since the instance on the node being deleted was removed from OEM Database Control Monitoring.
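If in doubt, the status of Database Control and its agent can be checked on the node being deleted before proceeding. A minimal sketch; both are standard emctl commands and the output is omitted here.

[oracle@racnode3 ~]$ emctl status dbconsole
[oracle@racnode3 ~]$ emctl status agent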
Next, disable the Oracle Clusterware applications and daemons running on the node to be deleted from the cluster. Run the rootcrs.pl script as root from the Grid_home/crs/install directory on the node to be deleted.
[root@racnode3 ~]# GRID_HOME=/u01/app/11.2.0/grid
[root@racnode3 ~]# export GRID_HOME

[root@racnode3 ~]# cd $GRID_HOME/crs/install
[root@racnode3 ~]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.1.0/255.255.255.0/eth0, type static
VIP exists: /racnode1-vip/192.168.1.251/192.168.1.0/255.255.255.0/eth0, hosting node racnode1
VIP exists: /racnode2-vip/192.168.1.252/192.168.1.0/255.255.255.0/eth0, hosting node racnode2
VIP exists: /racnode3-vip/192.168.1.253/192.168.1.0/255.255.255.0/eth0, hosting node racnode3
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'racnode3'
CRS-2677: Stop of 'ora.registry.acfs' on 'racnode3' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode3'
CRS-2673: Attempting to stop 'ora.crsd' on 'racnode3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racnode3'
CRS-2673: Attempting to stop 'ora.oc4j' on 'racnode3'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'racnode3'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'racnode3'
CRS-2673: Attempting to stop 'ora.RACDB_DATA.dg' on 'racnode3'
CRS-2673: Attempting to stop 'ora.DOCS.dg' on 'racnode3'
CRS-2677: Stop of 'ora.FRA.dg' on 'racnode3' succeeded
CRS-2677: Stop of 'ora.RACDB_DATA.dg' on 'racnode3' succeeded
CRS-2677: Stop of 'ora.DOCS.dg' on 'racnode3' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'racnode2'
CRS-2676: Start of 'ora.oc4j' on 'racnode2' succeeded
CRS-2677: Stop of 'ora.CRS.dg' on 'racnode3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racnode3'
CRS-2677: Stop of 'ora.asm' on 'racnode3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racnode3' has completed
CRS-2677: Stop of 'ora.crsd' on 'racnode3' succeeded
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'racnode3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'racnode3'
CRS-2673: Attempting to stop 'ora.crf' on 'racnode3'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racnode3'
CRS-2673: Attempting to stop 'ora.evmd' on 'racnode3'
CRS-2673: Attempting to stop 'ora.asm' on 'racnode3'
CRS-2677: Stop of 'ora.mdnsd' on 'racnode3' succeeded
CRS-2677: Stop of 'ora.crf' on 'racnode3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'racnode3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'racnode3' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'racnode3' succeeded
CRS-2677: Stop of 'ora.asm' on 'racnode3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'racnode3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'racnode3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racnode3'
CRS-2677: Stop of 'ora.cssd' on 'racnode3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'racnode3'
CRS-2677: Stop of 'ora.gipcd' on 'racnode3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'racnode3'
CRS-2677: Stop of 'ora.gpnpd' on 'racnode3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
Delete Node from Clusterware Configuration
From a node that is to remain a member of the Oracle RAC, run the following command from the Grid_home/bin directory as root to update the Clusterware configuration to delete the node from the cluster.
[root@racnode1 ~]# $GRID_HOME/bin/crsctl delete node -n racnode3
CRS-4661: Node racnode3 successfully deleted.

[root@racnode1 ~]# $GRID_HOME/bin/olsnodes -t -s
racnode1        Active  Unpinned
racnode2        Active  Unpinned
where racnode3 is the node to be deleted.
Update Oracle Inventory - (Node Being Removed)
As the Oracle Grid Infrastructure owner, execute runInstaller from Grid_home/oui/bin on the node being removed to update the inventory. Set "CLUSTER_NODES={name_of_node_to_delete}". Note that this step is missing in the official Oracle Documentation (Oracle Clusterware Administration and Deployment Guide 11g Release 2 (11.2) E10717-11 April 2010).
[grid@racnode3 ~]$ cd $GRID_HOME/oui/bin

[grid@racnode3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={racnode3}" CRS=TRUE -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 9983 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
After executing runInstaller on the node to be deleted, the inventory.xml file on that node (/u01/app/oraInventory/ContentsXML/inventory.xml) will show only the node to be deleted under the Grid home name.
...
The inventory.xml file on the other nodes will still show all of the nodes in the cluster. The inventory on the remaining nodes will be updated after de-installing the Oracle Grid Infrastructure software.
...
De-install Oracle Grid Infrastructure Software
When using a non-shared Grid home (as is the case in this example guide), run deinstall as the Grid Infrastructure software owner from the node to be removed in order to delete the Oracle Grid Infrastructure software.
[grid@racnode3 ~]$ cd $GRID_HOME/deinstall

[grid@racnode3 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2012-05-07_01-21-53PM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: racnode3
Checking for sufficient temp space availability on node(s) : 'racnode3'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2012-05-07_01-21-53PM/logs//crsdc.log

Enter an address or the name of the virtual IP used on node "racnode3"[racnode3-vip] > [ENTER]

The following information can be collected by running "/sbin/ifconfig -a" on node "racnode3"
Enter the IP netmask of Virtual IP "192.168.1.253" on node "racnode3"[255.255.255.0] > [ENTER]

Enter the network interface name on which the virtual IP address "192.168.1.253" is active > [ENTER]

Enter an address or the name of the virtual IP[] > [ENTER]

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2012-05-07_01-21-53PM/logs/netdc_check2012-05-07_01-22-16-PM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1]:LISTENER

At least one listener from the discovered listener list [LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1] is missing in the specified listener list [LISTENER]. The Oracle home will be cleaned up, so all the listeners will not be available after deinstall. If you want to remove a specific listener, please use Oracle Net Configuration Assistant instead. Do you want to continue? (y|n) [n]: y

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2012-05-07_01-21-53PM/logs/asmcadc_check2012-05-07_01-25-11-PM.log

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:racnode3
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'racnode3', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2012-05-07_01-21-53PM/logs/deinstall_deconfig2012-05-07_01-21-56-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2012-05-07_01-21-53PM/logs/deinstall_deconfig2012-05-07_01-21-56-PM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2012-05-07_01-21-53PM/logs/asmcadc_clean2012-05-07_01-25-16-PM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2012-05-07_01-21-53PM/logs/netdc_clean2012-05-07_01-25-16-PM.log

De-configuring RAC listener(s): LISTENER

De-configuring listener: LISTENER
    Stopping listener on node "racnode3": LISTENER
    Warning: Failed to stop listener. Listener may not be running.
    Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "racnode3".

/tmp/deinstall2012-05-07_01-21-53PM/perl/bin/perl \
 -I/tmp/deinstall2012-05-07_01-21-53PM/perl/lib \
 -I/tmp/deinstall2012-05-07_01-21-53PM/crs/install \
 /tmp/deinstall2012-05-07_01-21-53PM/crs/install/rootcrs.pl \
 -force \
 -deconfig \
 -paramfile "/tmp/deinstall2012-05-07_01-21-53PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------
Review the complete deinstall output.
After the de-install completes, verify that the /etc/inittab file does not start Oracle Clusterware.
[root@racnode3 ~]# diff /etc/inittab /etc/inittab.no_crs
[root@racnode3 ~]#
Update Oracle Inventory - (All Remaining Nodes)
From one of the nodes that is to remain part of the cluster, execute runInstaller (without the -local option) as the Grid Infrastructure software owner to update the inventories with a list of the nodes that are to remain in the cluster. Use the CLUSTER_NODES option to specify the nodes that will remain in the cluster.
[grid@racnode1 ~]$ cd $GRID_HOME/oui/bin

[grid@racnode1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={racnode1,racnode2}" CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 9559 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
Review the inventory.xml file on each remaining node in the cluster to verify that the node list for the Grid home no longer includes the node being removed.
...
Verify New Cluster Configuration
Run the following CVU command to verify that the specified node has been successfully deleted from the cluster.
[grid@racnode1 ~]$ cluvfy stage -post nodedel -n racnode3 -verbose

Performing post-checks for node removal

Checking CRS integrity...

Clusterware version consistency passed
The Oracle Clusterware is healthy on node "racnode2"
The Oracle Clusterware is healthy on node "racnode1"

CRS integrity check passed
Result: Node removal check passed

Post-check for node removal was successful.
At this point, racnode3 is no longer a member of the cluster. However, if an OCR dump is taken from one of the remaining nodes, information about the deleted node is still contained in the OCRDUMPFILE.
[grid@racnode1 ~]$ ocrdump
[SYSTEM.crs.e2eport.racnode3]
ORATEXT : (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.3.153)(PORT=50989))
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_NONE, OTHER_PERMISSION : PROCR_NONE, USER_NAME : grid, GROUP_NAME : oinstall}

...

[SYSTEM.OCR.BACKUP.2.NODENAME]
ORATEXT : racnode3
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : root, GROUP_NAME : root}

...

[SYSTEM.OCR.BACKUP.DAY.NODENAME]
ORATEXT : racnode3
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : root, GROUP_NAME : root}

...

[SYSTEM.OCR.BACKUP.DAY_.NODENAME]
ORATEXT : racnode3
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : root, GROUP_NAME : root}

...

[SYSTEM.OCR.BACKUP.WEEK_.NODENAME]
ORATEXT : racnode3
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : root, GROUP_NAME : root}

...

[DATABASE.ASM.racnode3]
ORATEXT : racnode3
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}

[DATABASE.ASM.racnode3.+asm3]
ORATEXT : +ASM3
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}

[DATABASE.ASM.racnode3.+asm3.ORACLE_HOME]
ORATEXT : /u01/app/11.2.0/grid
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}

[DATABASE.ASM.racnode3.+asm3.ENABLED]
ORATEXT : true
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}

[DATABASE.ASM.racnode3.+asm3.VERSION]
ORATEXT : 11.2.0.3.0
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}
This does not mean that the node wasn't removed properly. It is still possible to add the node again with the same hostname, IP address, VIP, etc. anytime in the future.
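A quick way to see how much residual information remains is to dump the OCR to a named file and search it for the deleted node name. This is a minimal sketch; the file name under /tmp is arbitrary.

[grid@racnode1 ~]$ ocrdump /tmp/ocrdump_racdb.txt
[grid@racnode1 ~]$ grep -c racnode3 /tmp/ocrdump_racdb.txt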
Remove Remaining Components
This section provides instructions for deleting any remaining components from the node being removed.
Remove ASMLib
Remove the ASMLib kernel driver, supporting software, and associated directories from racnode3.
[root@racnode3 ~]# /usr/sbin/oracleasm exit
Unmounting ASMlib driver filesystem: /dev/oracleasm
Unloading module "oracleasm": oracleasm

[root@racnode3 ~]# rpm -qa | grep oracleasm
oracleasmlib-2.0.4-1.el5
oracleasm-2.6.18-274.el5-2.0.5-1.el5
oracleasm-support-2.1.7-1.el5

[root@racnode3 ~]# rpm -ev oracleasmlib-2.0.4-1.el5 oracleasm-2.6.18-274.el5-2.0.5-1.el5 oracleasm-support-2.1.7-1.el5
warning: /etc/sysconfig/oracleasm saved as /etc/sysconfig/oracleasm.rpmsave

[root@racnode3 ~]# rm -f /etc/sysconfig/oracleasm.rpmsave
[root@racnode3 ~]# rm -f /etc/sysconfig/oracleasm-_dev_oracleasm
[root@racnode3 ~]# rm -f /etc/rc.d/rc2.d/S29oracleasm
[root@racnode3 ~]# rm -f /etc/rc.d/rc0.d/K20oracleasm
[root@racnode3 ~]# rm -f /etc/rc.d/rc5.d/S29oracleasm
[root@racnode3 ~]# rm -f /etc/rc.d/rc4.d/S29oracleasm
[root@racnode3 ~]# rm -f /etc/rc.d/rc1.d/K20oracleasm
[root@racnode3 ~]# rm -f /etc/rc.d/rc3.d/S29oracleasm
[root@racnode3 ~]# rm -f /etc/rc.d/rc6.d/K20oracleasm
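To confirm that ASMLib is fully removed from racnode3, verify that no oracleasm packages or kernel module remain. A minimal sketch; both commands should return no output once the cleanup is complete.

[root@racnode3 ~]# rpm -qa | grep oracleasm
[root@racnode3 ~]# lsmod | grep oracleasm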