How to backup or restore OLR in 11.2 Grid Infrastructure (Doc ID 1193643.1)
http://docs.oracle.com/cd/E18283_01/rac.112/e16794/votocr.htm (11Gr2)
APPLIES TO:
Oracle Database - Enterprise Edition - Version 11.2.0.1.0 and later
Information in this document applies to any platform.
GOAL
Oracle Local Registry (OLR) was introduced in 11gR2 Grid Infrastructure. It contains node-specific configuration required by OHASD and is not shared between nodes; in other words, every node has its own OLR.
This note provides steps to backup or restore OLR.
SOLUTION
OLR location
The OLR location pointer file is '/etc/oracle/olr.loc' or '/var/opt/oracle/olr.loc' depending on platform. The default location after installing Oracle Clusterware is:
GI Cluster: $GRID_HOME/cdata/<hostname>.olr
GI Standalone (Oracle Restart): $GRID_HOME/cdata/localhost/<hostname>.olr
To backup
OLR is backed up automatically during GI configuration (installation or upgrade). In contrast to the OCR, the OLR is NOT backed up again automatically after GI is configured; only manual backups can be taken. If a further backup is required, back up the OLR manually with the following command, as root:
# $GRID_HOME/bin/ocrconfig -local -manualbackup
To list backups
To list the backups currently available:
# $GRID_HOME/bin/ocrconfig -local -showbackup
node1 2010/12/14 14:33:20 /opt/app/oracle/grid/11.2.0.1/cdata/node1/backup_20101214_143320.olr
node1 2010/12/14 14:33:17 /opt/app/oracle/grid/11.2.0.1/cdata/node1/backup_20101214_143317.olr
Clusterware maintains a history of the five most recent manual backups and will not update or delete a manual backup file after it has been created.
Note that 'ocrconfig -local -showbackup' still lists manual backups recorded in the registry even after the backup files have been removed or archived in the OS file system by OS commands, as the following example shows:
# ocrconfig -local -showbackup
node1 2014/02/21 08:02:57 /opt/app/oracle/grid/11.2.0.1/cdata/node1/backup_20140221_080257.olr
node1 2014/02/21 08:02:56 /opt/app/oracle/grid/11.2.0.1/cdata/node1/backup_20140221_080256.olr
node1 2014/02/21 08:02:54 /opt/app/oracle/grid/11.2.0.1/cdata/node1/backup_20140221_080254.olr
node1 2014/02/21 08:02:51 /opt/app/oracle/grid/11.2.0.1/cdata/node1/backup_20140221_080251.olr
node1 2014/02/21 08:02:39 /opt/app/oracle/grid/11.2.0.1/cdata/node1/backup_20140221_080239.olr
#ls -ltr /opt/app/oracle/grid/11.2.0.1/cdata/node1
total 38896
-rw------- 1 root root 6635520 Feb 21 08:02 backup_20140221_080256.olr
-rw------- 1 root root 6635520 Feb 21 08:02 backup_20140221_080257.olr
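Since the registry listing and the file system can disagree, a small cross-check helps spot stale entries. A minimal sketch, where the showbackup_output function simulates the command output with demo paths; in real use, pipe the actual ocrconfig command (as root) instead:

```shell
#!/bin/sh
# Sketch: cross-check the paths reported by "ocrconfig -local -showbackup"
# against the files actually on disk. Demo data only: two of the three
# listed backup files exist, one has been removed at the OS level.
DEMO=/tmp/olr_check_demo
rm -rf "$DEMO"
mkdir -p "$DEMO"
touch "$DEMO/backup_20140221_080256.olr" "$DEMO/backup_20140221_080257.olr"

showbackup_output() {
cat <<EOF
node1 2014/02/21 08:02:57 $DEMO/backup_20140221_080257.olr
node1 2014/02/21 08:02:56 $DEMO/backup_20140221_080256.olr
node1 2014/02/21 08:02:39 $DEMO/backup_20140221_080239.olr
EOF
}

# the fourth field of each line is the backup path; flag registry entries
# whose file is gone from the file system
showbackup_output | awk '{print $4}' | while read -r f; do
    [ -f "$f" ] || echo "MISSING: $f"
done > /tmp/olr_check_missing.txt
cat /tmp/olr_check_missing.txt
```

Any file flagged as MISSING is a registry entry you can no longer restore from.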
To restore
Make sure the GI stack is completely down and ohasd.bin is not running; use the following command to confirm:
ps -ef| grep ohasd.bin
This should return no process. If ohasd.bin is still up and running, stop it on the local node:
# $GRID_HOME/bin/crsctl stop crs -f <========= for GI Cluster
OR
# $GRID_HOME/bin/crsctl stop has <========= for GI Standalone
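The pre-restore check can be wrapped in a small guard. This is a hypothetical helper, not part of the MOS note; ensure_down and the demo process name are illustrative (in real use you would call it as ensure_down ohasd.bin):

```shell
#!/bin/sh
# Hypothetical helper: refuse to proceed while a given daemon is running.
ensure_down() {
    first=$(printf '%.1s' "$1")   # first character of the name
    rest=${1#?}                   # remainder of the name
    # "[o]hasd.bin" style pattern so grep does not match its own process
    if ps -ef | grep "[$first]$rest" >/dev/null 2>&1; then
        echo "ERROR: $1 is still running; stop the GI stack first" >&2
        return 1
    fi
    echo "OK: $1 is not running"
}

# demo with a name that is certainly not running on this machine
ensure_down "demo_absent_$$"
```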
Once it's down, restore with the following command:
# $GRID_HOME/bin/ocrconfig -local -restore <olr_backup_file>
If the command fails, create a dummy OLR, set the correct ownership and permissions, and retry the restore command:
# cd <OLR location>
# touch <hostname>.olr
# chmod 600 <hostname>.olr
# chown <grid_owner>:<group> <hostname>.olr
Once it's restored, GI can be brought up:
# $GRID_HOME/bin/crsctl start crs <========= for GI Cluster
OR
$ $GRID_HOME/bin/crsctl start has <========= for GI Standalone; this must be done as the grid user.
================================================================================================================================================
Crsctl start HAS failed with CRS-4563: Insufficient user privileges in Oracle Restart environment (Doc ID 1333606.1) (OR) Recreate missing OLR file
APPLIES TO:
Oracle Database - Enterprise Edition - Version 11.2.0.1 and later
Information in this document applies to any platform.
SYMPTOMS
1] Grid Infrastructure installation finished successfully:
2011-06-14 17:44:23: ohasd is starting
2011-06-14 17:44:23: Checking ohasd
2011-06-14 17:44:23: ohasd started successfully
2011-06-14 17:44:23: Creating HA resources and dependencies
2011-06-14 17:44:23: Registering type ora.daemon.type
2011-06-14 17:44:25: Registering type ora.cssd.type
2011-06-14 17:44:27: Registering type ora.crs.type
2011-06-14 17:44:28: Registering type ora.evm.type
2011-06-14 17:44:29: Registering type ora.ctss.type
2011-06-14 17:44:30: Registering type ora.diskmon.type
2011-06-14 17:44:32: Successfully created HA resources for HAS daemon and ASM
2011-06-14 17:44:32: ADVM/ACFS is not configured
2011-06-14 17:44:33: /oracle_grid/11.2.0/grid/bin/ocrconfig -local -manualbackup ... passed
2011-06-14 17:44:33: Successfully configured Oracle Grid Infrastructure for a Standalone Server
2] ASM instance started and diskgroups are mounted.
3] RDBMS software installation succeeded.
4] Database creation succeeded.
After server reboot/crash, HAS does not start and the following error is raised:
CRS-4563: Insufficient user privileges.
CRS-4000: Command Start failed, or completed with errors.
Consequently, the ASM instance will not start.
crsctl.log shows the following errors:
2011-06-22 17:36:44.708: [ OCROSD][1]utopen:7:failed to open any OCR file/disk, errno=2, os err string=No such file or directory
2011-06-22 17:36:44.708: [ OCRRAW][1]proprinit: Could not open raw device
2011-06-22 17:36:44.708: [ default][1]a_init:7!: Backend init unsuccessful : [26]
......
CHANGES
The server was rebooted or crashed.
CAUSE
The OLR file under $GRID_HOME/cdata/localhost/ is missing.
Therefore, HAS fails to start with error CRS-4563.
$ crsctl start has
CRS-4563: Insufficient user privileges.
CRS-4000: Command Start failed, or completed with errors
SOLUTION
Recreate missing OLR file
1] Run the ocrcheck -config command to determine the OLR location, as Oracle grid owner:
$ ocrcheck -config -local
2] Create an empty (0 byte) OLR file with appropriate permissions in that location:
$ touch $GRID_HOME/cdata/localhost/<hostname>.olr
3] Restore OLR by running the following command as ROOT:
# $GRID_HOME/bin/ocrconfig -local -restore $GRID_HOME/cdata/<hostname>/backup_20110614_174432.olr
If the OCR file is missing too, you can recreate it as follows:
1] Run the ocrcheck -config command to determine the OCR location, as the grid owner:
$ ocrcheck -config
2] Create an empty (0 byte) OCR file with appropriate permissions in that location (for Oracle Restart this is typically $GRID_HOME/cdata/localhost/local.ocr):
$ touch $GRID_HOME/cdata/localhost/local.ocr
3] Restore the OCR by running the following command as root:
# $GRID_HOME/bin/ocrconfig -restore <ocr_backup>
Refer also: Oracle Grid Infrastructure Installation Guide for information about creating OCRs
Start HAS again:
$ crsctl start has
NOTE: You must register all resources (e.g. database, listener, ASM) with Oracle Restart again.
To add the resources back to the configuration, follow these steps:
1. srvctl add listener
2. srvctl add asm
3. srvctl start asm
Add the database as the oracle database owner:
4. srvctl add database -d <db_unique_name> -o <oracle_home> -a "<diskgroup_list>"
Check CRS functionality:
crsctl stop res -all
crsctl start res -all
crsctl status res -t
Take a backup of the newly created file (as there is no automatic backup after install), as root:
# $GRID_HOME/bin/ocrconfig -local -manualbackup
If there is no additional corruption, the solution provided above should solve the problem.
If there is further corruption (CRS does not start), follow Note 887658.1, Reconfigure HAS and CSS for non-RAC ASM on 11.2, or reinstall the grid software.
=====================================================
Grid infrastructure ocssd.log is not immediately recreated after being accidentally deleted (Doc ID 1508918.1)
APPLIES TO:
Oracle Server - Enterprise Edition - Version 11.2.0.1 to 11.2.0.4 [Release 11.2]
Information in this document applies to any platform.
GOAL
The log file $GRID_HOME/log/<node>/cssd/ocssd.log was accidentally deleted. After the deletion, the log file was NOT immediately recreated by the clusterware; instead, there is no ocssd.log file under the $GRID_HOME/log/<node>/cssd directory.
This is expected behaviour.
The files under $GRID_HOME/log/* are not supposed to be deleted. All the log files, including ocssd.log, are rotated once they reach a certain size, so these files do not create a space issue. The file size limit depends on the log type and version; for ocssd.log, it is 50MB.
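To illustrate the rotation behaviour described here, below is a minimal size-based rotation sketch. The directory, the .l01 suffix and the 1KB threshold are demo values, not Oracle's internals (clusterware rotates ocssd.log at about 50MB):

```shell
#!/bin/sh
# Demo: rotate a log file once it crosses a size threshold.
DIR=/tmp/rotate_demo
LOG=$DIR/ocssd.log
LIMIT=1024                       # demo threshold; clusterware uses 50MB

mkdir -p "$DIR"
head -c 2048 /dev/zero > "$LOG"  # simulate a log that crossed the limit

size=$(wc -c < "$LOG")
if [ "$size" -ge "$LIMIT" ]; then
    mv "$LOG" "$LOG.l01"         # archive the full log
    : > "$LOG"                   # start a fresh, empty log
fi
ls "$DIR"
```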
FIX
1. Do not delete ocssd.log, or any other log files under $GRID_HOME/log/
2. If ocssd.log or any other log file is accidentally deleted:
(i) Although the deleted log is no longer available, the clusterware still keeps track of log size written while it is up. Once the default size of log data has been generated, another log rotation will be triggered, and then the log file will be recreated.
For example, if ocssd.log was deleted, a new ocssd.log will be generated once the default 50MB of log data has been written.
To speed up the process, you can increase the logging level for CSS until the rotation occurs, and then reset the logging level back to defaults:
# crsctl set log css "ALL=5" ==> This will set logging for all ocssd components to 5
# crsctl set log css "CLSF=0,CSSD=2,GIPCCM=2,GIPCGM=2,GIPCNM=2,GPNP=1,OLR=0,SKGFD=0" ==> reset logging back to default 11.2 levels for ocssd
(ii) Restart the clusterware. The missing log file will be recreated on clusterware restart.
Note: the log data which would have gone to the deleted log file cannot be recaptured. For example, if ocssd.log is deleted, and you choose to wait until the next log rotation, there is no way to recapture the log data which would have been written to the deleted file.
REFERENCES
NOTE:1368695.1 - Oracle Clusterware logs and RDBMS logs rotation policy
==========================================================================
How to backup a Grid Infrastructure installation (Doc ID 1482803.1)
APPLIES TO:
Oracle Database - Enterprise Edition - Version 11.2.0.1 to 11.2.0.4 [Release 11.2]
Information in this document applies to any platform.
GOAL
The purpose of this document is to help identify the files to be backed up in Oracle Grid Infrastructure before you apply any patch/patchset at CRS or OS level. This document covers all the files required to be backed up at the GI level, including when third-party cluster software is present.
This document will be useful to DBAs in situations that require restoring software files from backup for a backout operation.
It is recommended you consider using operating system (OS) level backup tools and strategies whenever possible to back up the whole node, for faster restore and recovery of the Oracle installation to a previous consistent state in a contingency.
When an OS-level backup of the whole node is not an option due to time or space constraints or otherwise, you can back up the following list of files.
Important Note :
You must use the appropriate OS backup command options for cp/tar in order to achieve an exact clone of the files, preserving file attributes such as mode, ownership, permissions, timestamps, and links.
SOLUTION
Note: These files must be backed up SEPARATELY on EACH node which you plan to modify.
The grid home directory and the files outside the grid home all contain node-specific information.
You CANNOT restore files from one node onto a different node.
Commands to use
a) Use the cp command with the below options to copy the files.
-p will preserve the file mode, ownership, and timestamps
-r will copy files recursively
b) To save space on the backup of the grid home, tar can be used with "-p" option to preserve ownership, "-z" to compress using gzip. Not all distributions of tar support the "-z" option. Check your tar man page.
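As a quick sanity check of the cp -p behaviour described above, the following sketch copies a demo file and compares mode and modification time. It assumes GNU stat (Linux); the file names are illustrative:

```shell
#!/bin/sh
# Demo: confirm that cp -p preserved mode and mtime on a copy.
SRC=/tmp/cp_demo_src.txt
DST=/tmp/cp_demo_dst.txt

echo "grid config" > "$SRC"
chmod 640 "$SRC"
cp -p "$SRC" "$DST"

# mode (%a) and mtime (%Y) must match between source and copy
if [ "$(stat -c '%a %Y' "$SRC")" = "$(stat -c '%a %Y' "$DST")" ]; then
    echo "attributes preserved"
else
    echo "attributes differ"
fi
```

Ownership preservation additionally requires running the copy as root.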
I. Take the backup of OLR.
Only manual backups can be taken of the OLR; there is no automated backup process in the release documented in this note. Backups are taken by the Oracle software only during the install and upgrade process, so it is recommended to take a backup of the OLR manually. To take a backup of the OLR, use the following command:
[root@vmrac1 bin]# ./ocrconfig -local -manualbackup
To List the backups currently available for restoration of the OLR
[root@vmrac1 bin]# ./ocrconfig -local -showbackup
vmrac1 2012/08/13 17:57:49 /u01/app/11.2.0/grid/cdata/vmrac1/backup_20120813_175749.olr
vmrac1 2011/10/07 06:00:48 /u01/app/11.2.0/grid/cdata/vmrac1/backup_20111007_060048.olr
II. Take the backup of OCR.
Oracle Clusterware automatically creates OCR backups every four hours. At any one time, Oracle Database always retains the last three backup copies of the OCR. The CRSD process that creates the backups also creates and retains an OCR backup for each full day and at the end of each week. It is recommended to take the backup of OCR as part of backing up the files required for restore.
[root@vmrac1 bin]# ./ocrconfig -manualbackup
[root@vmrac1 bin]# ./ocrconfig -showbackup
III. Take the backup of Grid Infrastructure home.
This should be done as the root user.
a) Use cp -pr, eg.
# cp -pr $GRID_HOME $backup_location
b) Use tar, eg.
# tar cpfz $BKUPDIR/gridhome.tgz $GRID_HOME
or if -z option is not supported by your tar distribution
# tar cpf - $GRID_HOME | gzip -c > $BKUPDIR/gridhome.tar.gzip
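The tar round trip can be verified the same way. This sketch (demo paths standing in for $GRID_HOME and $BKUPDIR, GNU tar assumed) archives a directory with -p/-z, extracts it elsewhere, and checks that permissions survive:

```shell
#!/bin/sh
# Demo: permissions survive a tar -p/-z archive and extract round trip.
DEMO=/tmp/tar_demo
rm -rf "$DEMO"
mkdir -p "$DEMO/home/bin" "$DEMO/backup" "$DEMO/restore"

echo '#!/bin/sh' > "$DEMO/home/bin/tool.sh"
chmod 750 "$DEMO/home/bin/tool.sh"

# create the compressed archive, preserving permissions
tar -cpzf "$DEMO/backup/home.tgz" -C "$DEMO" home

# extract into a different directory, again preserving permissions
tar -xpzf "$DEMO/backup/home.tgz" -C "$DEMO/restore"

ls -l "$DEMO/restore/home/bin/tool.sh"
```

Restoring ownership (not just mode) requires extracting as root.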
IV. Backup the scripts which are required for proper functioning of the CRS.
Init scripts
----------------
These files are used to start the ohasd daemon. This daemon will further spawn other clusterware processes.
Inittab file
---------------
The inittab file describes which processes are started at bootup and during normal operation. Init.ohasd script is registered in this file.
SCRBASE files (also known as Control files)
-------------------------------------------------
These files are used to control some aspects of ohasd and clusterware like autostart of ohasd.
Central Inventory Location
-------
The location is defined by the parameter inventory_loc in /etc/oraInst.loc or /var/opt/oracle/oraInst.loc, depending on the platform.
oratab
----------
The directory where oratab is located.
Please note: the init script, inittab and SCRBASE files can be recreated by running the Grid Infrastructure rootcrs.pl -deconfig script and then root.sh.
Solaris:
1. Init scripts and inittab:
cp -p /etc/init.d/init.ohasd $BKUPDIR
cp -p /etc/init.d/ohasd $BKUPDIR
cp -p /etc/inittab $BKUPDIR
Identify the soft links pointing to /etc/init.d/ohasd:
# cd /etc
# find . | grep ohas
eg.
./init.d/init.ohasd
./init.d/ohasd
./rc0.d/K19ohasd
./rc1.d/K19ohasd
./rc2.d/K19ohasd
./rc3.d/S96ohasd
./rcS.d/K19ohasd
the soft links need not be copied, but in case of restore, may need to be restored, so keep a record
# find /etc | grep ohas > $BKUPDIR/initlinks.out
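One way to keep that record restorable is to save each link together with its target, not just the path. A hypothetical sketch on demo paths (the real case would scan /etc for '*ohas*' as root):

```shell
#!/bin/sh
# Demo: record symlinks as "<link path> <target>" pairs, then recreate
# a lost link from the record.
DEMO=/tmp/link_demo
rm -rf "$DEMO"
mkdir -p "$DEMO/etc/rc3.d" "$DEMO/etc/init.d"
echo "init script" > "$DEMO/etc/init.d/ohasd"
ln -s ../init.d/ohasd "$DEMO/etc/rc3.d/S96ohasd"

# record: one "path target" pair per line (instead of paths alone)
find "$DEMO/etc" -type l -name '*ohas*' | while read -r l; do
    printf '%s %s\n' "$l" "$(readlink "$l")"
done > "$DEMO/initlinks.out"

# simulate losing the link, then recreate it from the record
rm "$DEMO/etc/rc3.d/S96ohasd"
while read -r path target; do
    ln -s "$target" "$path"
done < "$DEMO/initlinks.out"

ls -l "$DEMO/etc/rc3.d"
```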
2. SCRBASE, oratab and setasmgid:
mkdir -p $BKUPDIR/varoptoracle
mkdir -p $BKUPDIR/optoracle
cp -pr /var/opt/oracle $BKUPDIR/varoptoracle
cp -pr /opt/oracle $BKUPDIR/optoracle
3. Central Inventory:
cat /var/opt/oracle/oraInst.loc | grep inventory_loc
inventory_loc=/home/ogrid/app/oraInventory
cp -pr /home/ogrid/app/oraInventory $BKUPDIR
4. Files to be copied when Veritas or Oracle Sun Cluster software is installed along with CRS:
cp -pr /opt/ORCLcluster $BKUPDIR
Linux:
1. Init scripts:-
cp -p /etc/init.d/init.ohasd $BKUPDIR
cp -p /etc/init.d/ohasd $BKUPDIR
cp -p /etc/inittab $BKUPDIR
2. SCRBASE:-
cp -p /etc/oracle/setasmgid $BKUPDIR
cp -pr /opt/oracle $BKUPDIR
cp -pr /etc/oracle/lastgasp $BKUPDIR
cp -pr /etc/oracle/oprocd $BKUPDIR
cp -pr /etc/oracle/scls_scr $BKUPDIR
cp -p /etc/oracle/ocr.loc $BKUPDIR
cp -p /etc/oracle/olr.loc $BKUPDIR
Below are the soft links pointing to /etc/init.d/ohasd
/etc/rc2.d/K96init.crs
/etc/rc2.d/S96init.crs
/etc/rc3.d/K96init.crs
/etc/rc3.d/S96init.crs
/etc/rc5.d/K96init.crs
/etc/rc5.d/S96init.crs
If you are using ASMLib, then the below files also need to be backed up:
/etc/rc.d/init.d/oracleasm
/usr/lib/oracleasm
/opt/oracle/extapi/64/asm/orcl/1/libasm.so
3. Central Inventory:-
cp -p /etc/oraInst.loc $BKUPDIR
cat /etc/oraInst.loc | grep inventory_loc
inventory_loc=/home/ogrid/app/oraInventory
cp -pr /home/ogrid/app/oraInventory $BKUPDIR
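The two steps above (read inventory_loc, then copy that directory) can be combined into one sketch. A temporary oraInst.loc is used here for demonstration; on Linux the real file is /etc/oraInst.loc:

```shell
#!/bin/sh
# Demo: parse inventory_loc from oraInst.loc and back up that directory.
DEMO=/tmp/inv_demo
rm -rf "$DEMO"
mkdir -p "$DEMO/oraInventory" "$DEMO/bkup"
printf 'inventory_loc=%s/oraInventory\ninst_group=oinstall\n' "$DEMO" \
    > "$DEMO/oraInst.loc"
echo "<inventory/>" > "$DEMO/oraInventory/inventory.xml"

# take only the value after "inventory_loc="
inv_loc=$(sed -n 's/^inventory_loc=//p' "$DEMO/oraInst.loc")
cp -pr "$inv_loc" "$DEMO/bkup"
ls "$DEMO/bkup"
```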
4. oratab file:-
cp -p /etc/oratab $BKUPDIR
HP-UX:
1. Init scripts:-
cp -p /sbin/init.d/init.ohasd $BKUPDIR
cp -p /sbin/init.d/ohasd $BKUPDIR
cp -p /etc/inittab $BKUPDIR
2. SCRBASE:-
cp -p /var/opt/oracle/setasmgid $BKUPDIR
cp -pr /opt/oracle $BKUPDIR
cp -pr /var/opt/oracle/lastgasp $BKUPDIR
cp -pr /var/opt/oracle/oprocd $BKUPDIR
cp -pr /var/opt/oracle/scls_scr $BKUPDIR
cp -p /var/opt/oracle/ocr.loc $BKUPDIR
cp -p /var/opt/oracle/olr.loc $BKUPDIR
Below are the soft links pointing to /sbin/init.d/ohasd
/sbin/rc2.d/K960init.crs
/sbin/rc2.d/K001init.crs
/sbin/rc3.d/K960init.crs
/sbin/rc3.d/S960init.crs
3. Central Inventory:-
cp -p /var/opt/oracle/oraInst.loc $BKUPDIR
cat /var/opt/oracle/oraInst.loc | grep inventory_loc
inventory_loc=/home/ogrid/app/oraInventory
cp -pr /home/ogrid/app/oraInventory $BKUPDIR
4. oratab file:-
cp -p /var/opt/oracle/oratab $BKUPDIR
5. Files to be copied when HP Serviceguard is installed along with Oracle CRS:
cp -pr /opt/nmapi $BKUPDIR
AIX:
1. Init scripts:-
cp -p /etc/init.ohasd $BKUPDIR
cp -p /etc/ohasd $BKUPDIR
cp -p /etc/inittab $BKUPDIR
2. SCRBASE:-
cp -p /etc/oracle/lastgasp $BKUPDIR
cp -p /etc/oracle/oprocd $BKUPDIR
cp -p /etc/oracle/scls_scr $BKUPDIR
cp -p /etc/oracle/setasmgid $BKUPDIR
cp -pr /opt/oracle $BKUPDIR
cp -p /etc/oracle/ocr.loc $BKUPDIR
cp -p /etc/oracle/olr.loc $BKUPDIR
Below are the soft links pointing to /etc/ohasd
/etc/rc.d/rc2.d/K96init.ohasd
/etc/rc.d/rc2.d/S96init.ohasd
3. Central Inventory:-
cp -p /etc/oracle/oraInst.loc $BKUPDIR
cat /etc/oracle/oraInst.loc | grep inventory_loc
inventory_loc=/home/ogrid/app/oraInventory
cp -pr /home/ogrid/app/oraInventory $BKUPDIR
4. oratab file:-
cp -p /etc/oratab $BKUPDIR
5. Files to be copied when HACMP is installed along with CRS:
cp -pr /opt/ORCLcluster $BKUPDIR
====================================================================