How To Apply A Rolling OPatch

An Oracle OPatch for a RAC configuration can take one of two forms: a Minimum Downtime OPatch or a Rolling OPatch. The instructions for applying the former can be found here. Instructions for performing the latter can be found below.
In addition to being informed by Oracle Support, the type of OPatch can be determined by checking the <Patch No>/etc/config/inventory file. If the variable online_rac_installable is set to true, the patch is a rolling patch.
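For example, the check can be done with a quick grep. The sketch below uses a mock inventory file under /tmp purely for illustration; the path and the exact serialization of the variable are assumptions, as the real file ships inside the patch at <Patch No>/etc/config/inventory:

```shell
# Mock setup (illustrative only): the real inventory file comes with the patch
mkdir -p /tmp/patch_demo/123456/etc/config
echo 'online_rac_installable="true"' > /tmp/patch_demo/123456/etc/config/inventory

# The actual check: a rolling patch has online_rac_installable set to true
grep online_rac_installable /tmp/patch_demo/123456/etc/config/inventory
# prints: online_rac_installable="true"
```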

PATCH INSTALLATION INSTRUCTIONS

  • The worked example below will use the following settings:
    • CRS_HOME = /u01/crs/oracle/product/10/crs
    • RDBMS_HOME = /u01/app/oracle/product/db10.2.0
  • The patch being applied is p123456
  • The example output from the various commands has been abridged for readability.
1. Make sure all instances running under the ORACLE_HOME being patched are cleanly shut down before installing this patch. Also ensure that the tool used to terminate the instance(s) has exited cleanly.
2. Ensure that the directory containing the opatch script appears in your $PATH. Execute which opatch to confirm.
oracle:> which opatch
no opatch in /u01/app/oracle/product/db10.2.0/bin /bin /usr/sbin /usr/bin /usr/local/bin /usr/ccs/bin 
/usr/bin /usr/local/bin /u01/crs/oracle/product/10/crs/bin /u01/crs/oracle/product/10/crs/bin

oracle:> export PATH=$PATH:$ORACLE_HOME/OPatch

oracle:> which opatch
/u01/crs/oracle/product/10/crs/OPatch/opatch
3. Each node of the cluster has its own CRS home, so the patch is applied as a rolling upgrade: all of the following steps must be completed on one node before moving on to the next.
  • Do not patch two nodes at once.
4. As the Oracle Clusterware (CRS) software owner, check the CRS_HOME inventory.
oracle:> opatch lsinventory -detail -oh /u01/crs/oracle/product/10/crs
Invoking OPatch 10.2.0.3.0
...
  Remote node = <node x>
--------------------------------------------------------------------------------

OPatch succeeded.
5. As the RDBMS software owner, check the ORACLE_HOME inventory.
oracle:> opatch lsinventory -detail -oh /u01/app/oracle/product/db10.2.0/
Invoking OPatch 10.2.0.3.0
...
  Remote node = <node x>
--------------------------------------------------------------------------------

OPatch succeeded.
  • The above should list the components and the list of nodes. If the Oracle inventory is not set up correctly, the OPatch utility will fail.
6. Unzip the patch set container file; this will create one or more sub-directories.
% unzip p123456.zip

Archive:  p123456.zip
   creating: 123456/
...
7. Shut down the RDBMS and ASM instances, listeners and nodeapps followed by CRS daemons on the local node.
  • To shut down the RDBMS instance on the local node run the following command:
% $ORACLE_HOME/bin/srvctl stop instance -d dbname -i instance_name
  • To shut down the ASM instance on the local node run the following command:
% $ORACLE_HOME/bin/srvctl stop asm -n <node_name>
  • To shut down nodeapps on the local node run the following command:
% $ORACLE_HOME/bin/srvctl stop nodeapps -n <node_name>
8. Now shut down the CRS daemons on the local node by running as root:
root # $CRS_HOME/bin/crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
9. Prior to applying this part of the fix, invoke the unlock script as root to unlock protected files.
su -

root # cd <patch directory>/123456
root # custom/scripts/prerootpatch.sh -crshome /u01/crs/oracle/product/10/crs -crsuser oracle
root # exit
10. Now invoke an additional script as the CRS software installer/owner. This script will save important configuration settings.
oracle:> cd <patch directory>/123456
custom/scripts/prepatch.sh -crshome /u01/crs/oracle/product/10/crs
custom/scripts/prepatch.sh completed successfully.
  • Note: Make sure the RDBMS portion is only applied to an RDBMS home that meets all the prerequisite versions.
oracle:> cd <patch directory>/123456
oracle:> custom/server/123456/custom/scripts/prepatch.sh -dbhome /u01/app/oracle/product/db10.2.0
custom/server/123456/custom/scripts/prepatch.sh completed successfully.
11. After unlocking any protected files and saving configuration settings, run opatch as the Oracle Clusterware (CRS) software owner.
cd <patch directory>/123456
oracle:> opatch apply -local -oh /u01/crs/oracle/product/10/crs
Invoking OPatch 10.2.0.3.0

Oracle interim Patch Installer version 10.2.0.3.0
Copyright (c) 2005, Oracle Corporation.  All rights reserved..

Oracle Home       : /u01/crs/oracle/product/10/crs
Central Inventory : /u01/app/oracle/oraInventory
   from           : /var/opt/oracle/oraInst.loc
OPatch version    : 10.2.0.3.0
OUI version       : 10.2.0.3.0
OUI location      : /u01/crs/oracle/product/10/crs/oui
Log file location : /u01/crs/oracle/product/10/crs/cfgtoollogs/opatch/opatch2008-02-28_11-51-38AM.log

ApplySession applying interim patch '123456' to OH '/u01/crs/oracle/product/10/crs'
ApplySession: Optional component(s) [ oracle.rdbms, 10.2.0.3.0 ]  
 not present in the Oracle Home or a higher version is found.
Invoking fuser to check for active processes.

You selected -local option, hence OPatch will patch the local system only.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/crs/oracle/product/10/crs')

Is the local system ready for patching?

Do you want to proceed? [y|n]
y
User Responded with: Y
Backing up files and inventory (not for auto-rollback) for the Oracle Home
Backing up files affected by the patch '123456' for restore. This might take a while...
Backing up files affected by the patch '123456' for rollback. This might take a while...

Patching component oracle.rdbms.rsf, 10.2.0.3.0...
Updating archive file "/u01/crs/oracle/product/10/crs/lib/libgeneric10.a"  with "lib/libgeneric10.a/skgfr.o"
Updating archive file "/u01/crs/oracle/product/10/crs/lib32/libgeneric10.a"  with "lib32/libgeneric10.a/skgfr.o"
ApplySession adding interim patch '123456' to inventory

Verifying the update...
Inventory check OK: Patch ID 123456 is registered in Oracle Home inventory with proper meta-data.
Files check OK: Files from Patch ID 123456 are present in Oracle Home.
Running make for target client_sharedlib

The local system has been patched and can be restarted.

OPatch succeeded.
12. Use opatch to patch the RDBMS software.
oracle:> opatch apply -local -oh /u01/app/oracle/product/db10.2.0
Invoking OPatch 10.2.0.3.0

Oracle interim Patch Installer version 10.2.0.3.0
Copyright (c) 2005, Oracle Corporation.  All rights reserved..

Oracle Home       : /u01/app/oracle/product/db10.2.0
Central Inventory : /u01/app/oracle/oraInventory
   from           : /var/opt/oracle/oraInst.loc
OPatch version    : 10.2.0.3.0
OUI version       : 10.2.0.3.0
OUI location      : /u01/app/oracle/product/db10.2.0/oui
Log file location : /u01/app/oracle/product/db10.2.0/cfgtoollogs/opatch/opatch2008-02-28_11-53-12AM.log

ApplySession applying interim patch '123456' to OH '/u01/app/oracle/product/db10.2.0'
Invoking fuser to check for active processes.
Invoking fuser on "/u01/app/oracle/product/db10.2.0/bin/oracle"

You selected -local option, hence OPatch will patch the local system only.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/db10.2.0')

Is the local system ready for patching?

Do you want to proceed? [y|n]
y
User Responded with: Y
Backing up files and inventory (not for auto-rollback) for the Oracle Home
Backing up files affected by the patch '123456' for restore. This might take a while...
Backing up files affected by the patch '123456' for rollback. This might take a while...

Patching component oracle.rdbms.rsf, 10.2.0.3.0...
Updating archive file "/u01/app/oracle/product/db10.2.0/lib/libgeneric10.a"  with "lib/libgeneric10.a/skgfr.o"
Updating archive file "/u01/app/oracle/product/db10.2.0/lib32/libgeneric10.a"  with "lib32/libgeneric10.a/skgfr.o"

Patching component oracle.rdbms, 10.2.0.3.0...
ApplySession adding interim patch '123456' to inventory

Verifying the update...
Inventory check OK: Patch ID 123456 is registered in Oracle Home inventory with proper meta-data.
Files check OK: Files from Patch ID 123456 are present in Oracle Home.
Running make for target client_sharedlib
Running make for target ioracle

The local system has been patched and can be restarted.

OPatch succeeded.
  • After opatch completes, some configuration settings need to be applied to the patched files.
13. As the RDBMS software owner execute the following:
% <patch directory>/123456/custom/server/123456/custom/scripts/postpatch.sh -dbhome <RDBMS_HOME>

oracle:> <patch directory>/123456/custom/server/123456/custom/scripts/postpatch.sh  \
> -dbhome /u01/app/oracle/product/db10.2.0
Reading ...
...
Reapplying file permissions on /u01/app/oracle/product/db10.2.0/lib/libsrvmhas10.so
14. Restore the security settings and restart CRS by running the following as root:
su -

Sourcing /root/.profile-EIS.....
root # cd <patch directory>/123456
root # custom/scripts/postrootpatch.sh -crshome /u01/crs/oracle/product/10/crs
Checking to see if Oracle CRS stack is already up...
Checking to see if Oracle CRS stack is already starting
WARNING: directory '/u01/crs/oracle/product/10' is not owned by root
WARNING: directory '/u01/crs/oracle/product' is not owned by root
WARNING: directory '/u01/crs/oracle' is not owned by root
Startup will be queued to init within 30 seconds.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)

PATCH DEINSTALLATION INSTRUCTIONS

To roll back the patch, run the following opatch commands against each patched home.
% opatch rollback -id 123456 -local -oh <CRS_HOME>

% opatch rollback -id 123456 -local -oh <RDBMS_HOME>

1 - RAC Patching methods

OPatch supports three different patching methods in a RAC environment:
  • Patching RAC as a single instance (All-Node Patch)
In this mode, OPatch applies the patch to the local node first, then propagates the patch to all the other nodes, and finally updates the inventory. All instances must be down during the whole patching process.
  • Patching RAC using a minimum down-time strategy (Min. Downtime Patch)
In this mode, OPatch patches the local node first, then asks the user for a subset of nodes, which will be the first subset of nodes to be patched. After the initial subset has been patched, OPatch propagates the patch to the remaining nodes and finally updates the inventory. The downtime occurs between the shutdown of the second subset of nodes and the startup of the initial, already patched, subset of nodes.
  • Patching RAC using a rolling strategy - No down time (Rolling Patch)
With this method there is no downtime: each node is patched and brought up while all the other nodes are up and running, so the system remains available throughout. Note, however, that some rolling patches may still incur downtime due to post-installation steps, e.g. running SQL scripts to patch the actual database. Refer to the patch README to find out whether the post-installation steps require downtime.

2 - Flow diagrams

  • All-Node Patch
    1. Shut down all Oracle instances on all nodes
    2. Apply the patch to the RAC home on all nodes
    3. Bring all instances up
  • Minimum downtime
    1. Shut down the Oracle instance on node 1
    2. Apply the patch to the RAC home on node 1
    3. Shut down the Oracle instance on node 2
    4. Apply the patch to the RAC home on node 2
    5. Shut down the Oracle instance on node 3
    6. At this point, instances on nodes 1 and 2 can be brought up
    7. Apply the patch to the RAC home on node 3
    8. Start up the Oracle instance on node 3
  • Rolling patch (no downtime)
    1. Shut down the Oracle instance on node 1
    2. Apply the patch to the RAC home on node 1
    3. Start the Oracle instance on node 1
    4. Shut down the Oracle instance on node 2
    5. Apply the patch to the RAC home on node 2
    6. Start the Oracle instance on node 2
    7. Shut down the Oracle instance on node 3
    8. Apply the patch to the RAC home on node 3
    9. Start the Oracle instance on node 3

3 - How does OPatch select which method to use?

To be eligible as a rolling patch, the patch needs to meet certain criteria, which are determined by Oracle developers. In order to be applied in a "rolling fashion", the patch must be designated as a "rolling updatable patch", or simply "rolling patch". The algorithm used to decide which method will be used is the following:

    if (user specifies minimize_downtime)
        patching mechanism = Min. Downtime
    else if (patch is a rolling patch)
        patching mechanism = Rolling
    else
        patching mechanism = All-Node
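The selection logic above can be sketched as a small shell function. The function name and yes/no argument convention are hypothetical, purely to make the decision order concrete:

```shell
# Sketch of OPatch's method selection (select_method is a made-up name):
# arg 1 = user specified minimize_downtime?  arg 2 = patch tagged as rolling?
select_method() {
  local minimize_downtime=$1 rolling_patch=$2
  if [ "$minimize_downtime" = "yes" ]; then
    echo "Min. Downtime"          # user choice always wins
  elif [ "$rolling_patch" = "yes" ]; then
    echo "Rolling"                # otherwise rolling patches roll
  else
    echo "All-Node"               # fallback: whole-cluster outage
  fi
}

select_method no yes    # prints: Rolling
select_method yes yes   # prints: Min. Downtime
select_method no no     # prints: All-Node
```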

4 - Availability of rolling patches

When patches are released, they are tagged as "rolling" or "not rolling". While most patches can be applied in a rolling fashion, some cannot. Patches that can potentially be installed in a rolling fashion include:
  • Patches that do not affect the contents of the database.
  • Patches that are not related to the RAC internode communication infrastructure.
  • Patches that change procedural logic and do not modify common header definitions of kernel modules. This includes client-side patches that only affect utilities like export, import, SQL*Plus, SQL*Loader, etc.
Only individual patches -- not patch sets -- will be "rollable". It should also be noted that a merge of a "rolling patch" and an ordinary patch will not be a "rolling patch". From 9.2.0.4 onwards, all patches released are marked as "rolling" or "not rolling", based on a defined set of rules; patches released earlier are packaged as "not rolling". Because the set of rules currently defined is very conservative, patches released as "not rolling", whether before or after 9.2.0.4, may be eligible to be re-released as "rolling patches" after analysis by Oracle Development. If you plan to apply a patch that is marked "not rolling" and want to check whether it is possible to take advantage of the rolling patch strategy, please contact Oracle Support.

5 - How to determine if a patch is a "rolling patch" or not?

As the database user, execute the following:
  • 9i or 10gR1: opatch query -is_rolling
  • 10gR2: opatch query -all [unzipped patch location] | grep rolling
  • 10gR2 on Windows: opatch query -all [unzipped patch location] | findstr rolling
  • Later 10gR2 or 11g: opatch query -is_rolling_patch [unzipped patch location]
The command may not work if the unzipped patch location has more than one patch sub-directory. Example output while checking CPU patches:
Failed to load the patch object.  Possible causes are:
   The specified path is not an interim Patch shiphome
   Meta-data files are missing from the patch area
   Patch location = /home/oracle/stage/8836308
   Details = Input metadata files are missing.
Patch Location "/home/oracle/stage/8836308" doesn't point to a valid patch area.
OPatch failed with error code 75
Refer to the patch README to find out whether the patch is a rolling patch or not.
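Since the query syntax differs per release, a small wrapper can map a release line to the appropriate command from the list above. The helper name and version labels are hypothetical, and it only echoes the command to run rather than invoking opatch:

```shell
# Hypothetical helper: echo the right "is it rolling?" query per release line.
# The opatch invocations themselves are the ones documented above.
rolling_check_cmd() {
  case "$1" in
    9i|10gR1)      echo "opatch query -is_rolling" ;;
    10gR2)         echo "opatch query -all [unzipped patch location] | grep rolling" ;;
    10gR2-Windows) echo "opatch query -all [unzipped patch location] | findstr rolling" ;;
    *)             echo "opatch query -is_rolling_patch [unzipped patch location]" ;;
  esac
}

rolling_check_cmd 10gR1   # prints: opatch query -is_rolling
rolling_check_cmd 11g     # prints: opatch query -is_rolling_patch [unzipped patch location]
```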

6 - Current Limitations

  • Patching with Shared File System
Currently OPatch treats a Shared File System, such as CFS, as a single-instance installation. This means that OPatch patches the files under a given ORACLE_HOME once, knowing that the other nodes will pick up the changes via the shared file system. Unfortunately, this also means that OPatch cannot take advantage of a rolling patch in a Shared File System environment; all nodes must be down throughout the patching process.
  • Patching one node at time
The OPatch strategies discussed above (All-Node, Min. Downtime, and Rolling) presume that all nodes are patched in the same patching session. Alternatively, each node can be patched individually, at different times, using the "-local" option, which patches only the local node.


 OPatch/Patch Questions/Issues for Oracle Clusterware (Grid Infrastructure or CRS) and RAC Environments (Doc ID 1339140.1)



Oracle Home Type

What type of home do I have?

In a pre-11.2 Clusterware environment you will have:
  •  A single Clusterware (CRS) home
  •  One or more Database (RDBMS) homes at the same or at a lower release level than the Clusterware home (to the 4th dot in the release)
  •  Optionally a separate dedicated RDBMS home to run ASM (Automatic Storage Management) at the same or at a lower release level than the Clusterware home (to the 4th dot in the release)
    • In pre-11.2 installations, ASM was implemented using the RDBMS binary installation so this should be treated as a typical RDBMS home
In a 11.2 Grid Infrastructure environment you will have:
  •  A single Grid Infrastructure (GI) Home that implements both ASM and the Clusterware
    Note: During an upgrade from pre-11gR2 to 11gR2 GI, the ASM upgrade can optionally be performed independently of the Clusterware (not recommended). In this case the pre-11gR2 ASM home would still be active after the GI upgrade. This is NOT the recommended method of upgrading to Grid Infrastructure and will therefore not be discussed in this note.
    Note: Starting with 11gR2, all upgrades of and to Grid Infrastructure MUST be performed "out-of-place" (into a new software home). For this reason there may be more than one clusterware home, but only one will be active. The "inactive" Clusterware or GI home should be removed once you are satisfied that the upgrade was successful.
  •  One or more database (RDBMS) homes at the same or at a lower release level than the GI home (to the 4th dot in the release)

Types of Patches

What Types of Patches are available for Oracle Clusterware, Grid Infrastructure and/or the Database?

Generally, patches for Clusterware, Grid Infrastructure and/or the Database are categorized as follows:
  • Patchsets
  • Patchset Updates (PSUs)
  • Critical Patch Updates (CPUs)
  • Bundle Patches
  • Interim (one-off) Patches

Patchsets Q&A

What's a Patchset?
Compared to all other patch types, a Patchset is released the least frequently. It contains fixes for most known issues in the release and may also introduce new features. A patchset is cumulative, and when applied it changes the fourth digit of the product release banner - for example, 10.2.0.5 is the 4th patchset for 10.2, and 11.2.0.2 is the 1st patchset for 11.2.
A patchset must be installed via the Oracle Universal Installer (OUI) and is generally considered an "upgrade". Prior to 11gR2, a base release had to be installed before a patchset could be applied. For example, to install 10.2.0.5 on Linux, the 10.2.0.1 base release has to be installed first and then upgraded to 10.2.0.5. Starting with 11gR2, patchset releases are full releases and no longer require a "base" release; e.g. 11.2.0.2 can be installed directly without having to install 11.2.0.1 first.

Prior to 11gR2 - even though the CRS and RDBMS base releases were provided on separate media (downloadable zip file or separate DVD/CD) - the same patchset download is used to patch the Clusterware, ASM and Database homes. For example, Patch 8202632 is the 10.2.0.5 patchset; this same patch will be used to patch the 10.2.0.x Clusterware, 10.2.0.x ASM and 10.2.0.x Database to 10.2.0.5. Starting with 11gR2 the patchsets for Grid Infrastructure and RDBMS are delivered separately (as they are full releases).

Clusterware patchsets can be applied in a rolling fashion, while database patchsets cannot. For example, you can upgrade the clusterware to 11.2.0.2 in a rolling fashion, but you have to shut down the database on all nodes to upgrade the database to 11.2.0.2.

Patch Set Updates (PSUs) Q&A

What's a Patch Set Update (PSU)?
As the name implies, PSUs are patches that are applied on top of a given patchset release. They are released on a quarterly basis and contain fixes for known critical issues in the patchset. PSUs are subject to thorough testing and do not include changes that would alter the functionality of the software. With this in mind, PSUs are designed to be "low risk" and "high value", allowing customers to more easily adopt proactive patching strategies. Consider the following PSU facts:
  • All PSUs are installed via "opatch" and are not considered an "upgrade".
  • Database PSUs always contain the CPU for the respective quarter that the PSU is released in.  PSUs and CPUs are NOT compatible meaning if you apply the 11.2.0.2.2 Database PSU and then want to apply the 11.2.0.2 July CPU this would result in the rollback of the 11.2.0.2.2 Database PSU.  That said, once a PSU patching strategy is adopted it must be maintained.
  • Independent PSUs are released for both the Database and Clusterware or Grid Infrastructure installations.
    • Clusterware PSUs (pre-11.2) are referred to as CRS PSUs
    • Grid Infrastructure PSUs are referred to as GI PSUs
      • GI PSUs do contain the Database PSU for the corresponding release, e.g.  11.2.0.2.3 GI PSU contains the 11.2.0.2.3 Database PSU
    • Database PSUs hold true to their name
  • Both Clusterware/Grid Infrastructure and Database PSU patches are cumulative. Clusterware PSU refers to CRS PSU for pre-11gR2 and GI PSU for 11gR2.
  • GI PSUs are always cumulative meaning that you can apply higher version GI PSU directly without having to apply a lower version one first. For example, the 11.2.0.2.2 GI PSU can be applied to a 11.2.0.2 home without having to apply GI PSU 11.2.0.2.1 first.
  • Database PSUs can be subject to overlay PSU packaging.  In these cases, the PSUs are still cumulative, but a higher PSU may require a lower PSU to be applied first; for example, to apply database PSU 10.2.0.4.7, you must apply database PSU 10.2.0.4.4 first.  If a previous PSU is a prerequisite to a later PSU the requirement will be clearly documented in the PSU readme.
  • For more information on PSUs please review Document 854428.1.
What's the PSU release schedule?
Generally speaking, PSUs are released on a quarterly basis for both Clusterware/Grid Infrastructure and the Database. There are cases where a Clusterware PSU is not released for the corresponding Database PSU. For example, there is a database PSU 10.2.0.5.4 but no CRS PSU 10.2.0.5.4.
Will the 5th digit of the version be changed after PSU is applied?
A PSU does not physically update the 5th digit of the release information; the updates to the 5th digit are for documentation purposes only. So the third GI PSU released for 11.2.0.2 has a documentation version of 11.2.0.2.3. You will NOT see this change reflected in the actual software version if you query it from the inventory, clusterware or database.
What's included in a GI PSU ?
Unlike other Grid Infrastructure patches (discussed later), 11gR2 GI PSUs contain both the GI PSU and the Database PSU (YES, both GI and DB PSU) for a particular quarter. For example, the 11.2.0.2.2 GI PSU contains both the 11.2.0.2.2 GI PSU and the 11.2.0.2.2 Database PSU. You can see this when you extract a GI PSU: there are two directories (named with the patch number), one for the GI PSU and one for the RDBMS PSU.
How do I find out whether a bug is fixed in a Clusterware or Grid Infrastructure PSU?
To find out, check the patch readme and the following notes:
Document 405820.1 - 10.2 CRS PSU Known issues
Document 810663.1 - 11.1 CRS PSU Known issues
Document 1082394.1 - 11.2.0.1 GI PSU Known issues
Document 1272288.1 - 11.2.0.2 GI PSU Known Issues
Document 1508641.1 - 11.2.0.3.x Grid Infrastructure Bundle/PSU Known Issues

Once the GI PSU is applied, "opatch lsinventory" will show that both GI PSU and DB PSU are applied, i.e.:
Interim patches (2) :
Patch  9654983      : applied on Thu Feb 02 20:36:47 PST 2012
Patch  9655006      : applied on Thu Feb 02 20:35:53 PST 2012
And "opatch lsinventory -bugs_fixed" will list each individual bug fixed by all installed patches, i.e.:
List of Bugs fixed by Installed Patches:

Bug        Fixed by  Installed at                   Description
           Patch
---        --------  ------------                   -----------
7519406    9654983   Thu Feb 02 20:36:47 PST 2012   'J000' TRACE FILE REGARDING GATHER_STATS_JOB INTER ..
9783609    9655006   Thu Feb 02 20:35:53 PST 2012   CRS 11.2.0.1.0 Bundle

Can a Database PSU be applied to a clusterware home?
No, only CRS PSUs, GI PSUs or other Clusterware/GI patches can be applied to a Clusterware/GI home.

Critical Patch Updates (CPUs) Q&A

What are Critical Patch Updates (CPUs)?
CPU patches are collections of high-priority fixes for security-related issues and are only applicable to the Database home (and pre-11.2 ASM home(s)). CPUs are released quarterly on the same cycle as PSUs and are cumulative with respect to prior security fixes, but may contain other fixes in order to address patch conflicts with non-security patches. PSUs always contain the CPU for that respective quarter. PSUs and CPUs are NOT compatible, meaning that if you apply the 11.2.0.2.2 Database PSU and then want to apply the 11.2.0.2 July CPU, this would result in the rollback of the 11.2.0.2.2 Database PSU. That said, once a PSU patching strategy is adopted it must be maintained. PSU patching is preferred over CPU patching.

Bundle Patches Q&A

What's the difference between a Clusterware/Grid Infrastructure bundle patch and a PSU?
A Clusterware or Grid Infrastructure (GI) patch can be in the form of bundle or Patchset Update (PSU).  The biggest difference between a GI/Clusterware bundle and a PSU is that PSUs are bound to a quarterly release schedule while a bundle may be released at any time throughout the course of a given quarter.  If a GI/Clusterware bundle is released in a given quarter, the fixes in that bundle will be included in the PSU that will be released for that quarter.  This concept allows for development to provide critical bug fixes in a more granular timeline if necessary.

Interim (one-off) Patch Q&A

What's an interim patch (one-off patch)?
An interim patch contains fixes for one or, in some cases, several bugs (a merge patch). Clusterware interim patches are rare; they are usually built on top of the latest PSU (at the time) and include the entire PSU they were built on. The same does not hold true for database interim patches, which usually do not include a PSU. For example, clusterware interim patch 12647618 (11.2.0.2.2) includes 11.2.0.2 GI PSU2 (11.2.0.2.2), but database interim patch 11890804 (11.2.0.2.2) can only be applied on top of an 11.2.0.2.2 database home.

General Patch Q&A

What's the difference between Clusterware/Grid Infrastructure patches and Database patches?
Generally speaking Clusterware/Grid Infrastructure patches modify files that belong to the Clusterware or Grid Infrastructure product, while Database patches change files that belong to the database product.  As they apply to different sets of files, they do not conflict with each other. Please note:
  •  "files" in this context can refer to binaries/executables, scripts, libraries etc
  •  Clusterware files can reside in all types of Oracle software homes like clusterware home, database home and ASM home
  •  Prior to 11gR2, RDBMS files reside in DB/ASM homes only, while with 11gR2 RDBMS files will also reside in the GI home (as ASM is now part of GI)
  •  A GI PSU is a special type of clusterware patch as it also includes a database PSU and modifies database binaries.
What does a Clusterware/Grid Infrastructure patch contain?
Clusterware and Grid Infrastructure patches have at least two portions: one for the clusterware home and one for the Database home(s). Both portions have the same version number. The clusterware home portion is in the top-level directory of the unzipped location, while the other portion is in custom/server/. For example, if the 11.2.0.2.2 GI PSU is unzipped to /op/db/11.2.0.2/GIPSU2, the Grid Infrastructure home portion will be in /op/db/11.2.0.2/GIPSU2/12311357 and the database home specific portion will be in /op/db/11.2.0.2/GIPSU2/12311357/custom/server/12311357. This is just an example; full details will be in the README for the patch being applied. ALWAYS consult the patch README prior to applying any patch.
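The two-portion layout described above can be illustrated with mock directories. The /tmp base is an assumption for illustration; the 12311357 patch number comes from the example above, and the actual patch README always governs the real layout:

```shell
# Mock of an unzipped GI PSU layout (illustrative paths only)
BASE=/tmp/gipsu_demo/12311357
mkdir -p "$BASE/custom/server/12311357"

ls -d "$BASE"                          # clusterware (GI) home portion: top level
ls -d "$BASE/custom/server/12311357"   # database home portion: custom/server/<patch no>
```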

Which Patch Applies to Which Home

What's Oracle's patching recommendation?

Oracle recommends applying the latest patchset and the latest PSU as a general best practice.

Which oracle home does a patch apply to?

A home must meet the patch version specification for the patch to be applicable. In CRS environments, a Clusterware patch (interim, MLR, bundle or PSU) applies to all homes (CRS, ASM and database), while a Database patch applies to the ASM and Database homes only. In Grid Infrastructure environments, a GI patch (interim, bundle or PSU) applies to all homes (GI and Database), while a Database patch may also be applicable to the GI/ASM home if the fix applies to ASM (in this case the patch README will state clearly that it applies to the GI/ASM home).

How do I tell from the patch readme which home the patch applies to?

The patch README usually states which home it applies to. Common identifiers in patch READMEs:
# Product Patched : ORCL_PF ==> Applies to database and pre-11.2 ASM homes
# Product Patched : RDBMS ==> Applies to database and pre-11.2 ASM homes
# Product Patched : CRS/RDBMS ==> Applies to both clusterware and database homes
Oracle Database 11g Release 11.2.0.2.3 ==> Applies to database home
Oracle Clusterware 11g Release 2 (11.2.0.2.2) ==> Applies to both clusterware and database homes

Do Exadata patches apply to GI or RAC homes?

Generally speaking, Exadata patches should only be applied to Exadata environments.

Can I upgrade the Clusterware or Grid Infrastructure to a higher version while leaving database at a lower version?

Yes. As long as the Clusterware/Grid Infrastructure is at a higher version than the RAC database homes on the cluster, this is perfectly acceptable; refer to Document 337737.1 for details.

Do I need downtime to apply a Clusterware or Grid Infrastructure patch?

If the Clusterware/Grid Infrastructure home is not shared (common), Clusterware/Grid Infrastructure patches can be applied in a rolling fashion, so no downtime of the entire cluster is needed.

Which PSU patch applies to what home in mixed environments (clusterware and database at different version)?

Example 1:
11.2.0.2 GI + 11.2.0.2 DB, 11.1.0.7 DB and 10.2.0.5 DB
In this environment, the 11.2.0.2 GI PSU applies to both the 11.2.0.2 GI and 11.2.0.2 DB homes (the DB PSU does not apply to any home on its own), the 11.1.0.7 CRS PSU and 11.1.0.7 database PSU apply to the 11.1.0.7 DB home, and the 10.2.0.5 CRS PSU and 10.2.0.5 database PSU apply to the 10.2.0.5 DB home.
Example 2:
11.1.0.7 CRS + 11.1.0.7 DB, 10.2.0.5 DB
In this environment, the 11.1.0.7 CRS PSU applies to the 11.1.0.7 CRS home, the 11.1.0.7 CRS PSU and 11.1.0.7 database PSU apply to the 11.1.0.7 database home, and the 10.2.0.5 CRS PSU and 10.2.0.5 DB PSU apply to the 10.2.0.5 DB home.

OPatch Q&A

How do I find out the opatch version?

To find out the opatch version perform the following as the Oracle software owner:
% export ORACLE_HOME=
% $ORACLE_HOME/OPatch/opatch version

How do I install the latest OPatch release?

Prior to applying a patch to a system it is HIGHLY recommended (often required) to download and install the latest version of the OPatch utility into every ORACLE_HOME on every cluster node that is to be patched.  The latest version of OPatch can be found on MOS under Patch 6880880.  Be sure to download the release of OPatch that is applicable for your platform and major release (e.g. Linux-x86-64 11.2.0.0.0).  Once OPatch has been downloaded and staged on your target system(s) it can be installed by executing the following as the Oracle Software installation owner:
% export ORACLE_HOME=<home to be patched>
% unzip -d $ORACLE_HOME p6880880_112000_Linux-x86-64.zip
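Since every home on every node must be checked, a loop helps. The sketch below (the function and home list are illustrative, not an Oracle tool) reports the OPatch version found in each home passed to it:

```shell
#!/bin/sh
# Report the OPatch version installed in each given home.  The homes in the
# example call are from this note's worked examples; substitute your own.
check_opatch_versions() {   # usage: check_opatch_versions HOME [HOME ...]
    for home in "$@"; do
        if [ -x "$home/OPatch/opatch" ]; then
            ver="$("$home/OPatch/opatch" version 2>/dev/null \
                   | awk '/Version/ {print $NF}')"
            echo "$home: OPatch ${ver:-unknown}"
        else
            echo "$home: no OPatch installed"
        fi
    done
}

check_opatch_versions /ocw/grid /db/11.2/db1 /db/11.2/db2
```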

Why is "opatch auto" not patching my RAC database home?

Refer to Document 1479651.1 for details.

What's the difference between manual opatch apply and opatch auto?

In a single-instance environment, applying a database patch is simple: shut down all processes running from the home to be patched and invoke "opatch apply" or "opatch napply". To apply a Clusterware patch to a Clusterware home or a GI patch to a Grid Infrastructure home, however, the clusterware stack must be down, the clusterware home must be unlocked and the configuration saved before the patch can be applied; once the patch is applied, the configuration must be restored, the clusterware home re-locked and the clusterware stack restarted. After the Clusterware/GI portion of the patch has been applied, the database components of that patch must then be applied to the Database Home with a whole other list of steps. This is a complex process that has proven to be error prone when the instructions are not followed carefully. This is where "opatch auto" comes in: its goal is to let an administrator apply a patch to a RAC system with a single command, simplifying the whole process.

How do I apply a Grid Infrastructure patch before the root script (root.sh or rootupgrade.sh) is executed?

Refer to note 1410202.1 for details.

How to apply a patch after the root script (root.sh or rootupgrade.sh) has failed?

In some cases the root script fails because of a missing patch (e.g. Document 1212703.1: Oracle Grid Infrastructure 11.2.0.2 Installation or Upgrade may fail due to Multicasting Requirement). When this happens, we must first determine whether the root script got to the point at which the ownership/permissions of the GI software were changed. This allows us to determine whether unlocking the GI software is necessary prior to installing the patch. To determine if unlocking is required, go to the GI home and execute:
# find $GRID_HOME -user root
If the above command returns no files, unlocking the software is not necessary.  Should the command return root owned files you must unlock the GI Home by executing the following as the root user:
# $GI_HOME/perl/bin/perl $GI_HOME/crs/install/rootcrs.pl -unlock
Once the GI home is unlocked (or it was determined that unlocking was not necessary), refer to the answer to "How do I apply a Grid Infrastructure patch before the root script (root.sh or rootupgrade.sh) is executed?" to install the required patch.
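The root-owned-file check above can be scripted. A minimal sketch (the `owned_count` helper and the `/ocw/grid` default are illustrative, not Oracle-supplied) that counts entries under a home owned by a given user:

```shell
#!/bin/sh
# Count filesystem entries under DIR owned by USER (default root).  A zero
# count for root means the root script never locked the GI home, so
# "rootcrs.pl -unlock" is not needed before patching.
owned_count() {   # usage: owned_count DIR [USER]
    find "$1" -mindepth 1 -user "${2:-root}" 2>/dev/null | wc -l
}

GRID_HOME="${GRID_HOME:-/ocw/grid}"   # illustrative GI home
if [ "$(owned_count "$GRID_HOME" root)" -eq 0 ]; then
    echo "no root-owned files: unlocking is not necessary"
else
    echo "root-owned files found: unlock the GI home first"
fi
```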

How to apply a Clusterware or Grid Infrastructure patch manually?

It is recommended to use "opatch auto" when applicable, but if "opatch auto" does not apply or fails to apply the patch, refer to the patch README for manual patch instructions. The following notes may also be of use in this situation:
Document 1089476.1 - Patch 11gR2 Grid Infrastructure Standalone (Oracle Restart)
Document 1064804.1 - Apply Grid Infrastructure/CRS Patch in Mixed environment

Examples

OPatch Auto Example to Apply a GI PSU (includes Database PSU)

  • Platform is Linux 64-bit
  • All Homes (GI and Database) are not shared
  • All Homes are 11.2.0.2
  • The Grid Infrastructure owner is grid
  • The Database owner is oracle
  • The Grid Infrastructure Home is /ocw/grid
  • Database Home1 is /db/11.2/db1
  • Database Home2 is /db/11.2/db2
  • The 11.2.0.2.3 GI PSU will be applied in this example
  • ACFS is NOT in use on this cluster
Note: This example only covers the application of the patch itself. It does NOT cover the additional database, ACFS or any other component-specific steps required for the PSU installation. You must therefore ALWAYS consult the patch README prior to attempting to install any patch.
  1. Create an EMPTY directory to stage the GI PSU as the GI software owner (our example uses a directory named gipsu3):
% mkdir /u01/stage/gipsu3

Note: The directory must be readable and writable by root, grid and all database users.
2. Extract the GI PSU into the empty stage directory as the GI software owner:
% unzip -d /u01/stage/gipsu3 p12419353_112020_Linux-x86-64.zip
3. Verify that the OPatch installation in ALL 11.2.0.2 homes that will be patched meets the minimum version requirement documented in the PSU README (see "How to find out the opatch version?"). If the version of OPatch in any one (or all) of the homes does not meet the minimum version required for the patch, OPatch must be upgraded in those homes prior to continuing (see "How do I install the latest OPatch release?").
4. As the grid user, repeat the following to validate the inventory for ALL applicable homes on ALL nodes:
% $GI_HOME/OPatch/opatch lsinventory -detail -oh <home to be validated>

Note: If any errors or inconsistencies are returned corrective action MUST be taken prior to applying the patch.
5. As the user root, execute OPatch auto to apply GI PSU 11.2.0.2.3 to all 11.2.0.2 Grid Infrastructure and Database homes:
# cd /u01/stage/gipsu3
# export GI_HOME=/ocw/grid
# $GI_HOME/OPatch/opatch auto /u01/stage/gipsu3

Note: You can optionally apply the patch to an individual 11.2.0.2 home by passing the -oh option to the opatch auto command:
# $GI_HOME/OPatch/opatch auto /u01/stage/gipsu3 -oh /ocw/grid
This would apply the 11.2.0.2.3 GI PSU to ONLY the 11.2.0.2 GI Home.
6. As the grid user, repeat the inventory validation for all patched homes:
% $GI_HOME/OPatch/opatch lsinventory -detail -oh <home to be validated>
7. Repeat steps 1-6 on each of the remaining cluster nodes, one node at a time.
8. If you applied the Database PSU to the Database Homes (as shown in this example), you must now load the modified SQL files into the database(s) as follows:
Note:  The patch README should be consulted for additional instructions!

% cd $ORACLE_HOME/rdbms/admin
% sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> QUIT
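When this step has to be repeated across several databases, the SQL*Plus input can be generated from a script. This is a sketch only (the `catbundle_commands` function is illustrative; always defer to the patch README for the exact commands):

```shell
#!/bin/sh
# Emit the SQL*Plus commands that load the PSU's modified SQL files.
# Usage:  cd $ORACLE_HOME/rdbms/admin && catbundle_commands | sqlplus /nolog
catbundle_commands() {
    cat <<'EOF'
CONNECT / AS SYSDBA
STARTUP
@catbundle.sql psu apply
QUIT
EOF
}

catbundle_commands
```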

EXAMPLE:  Apply a CRS patch manually

The example assumes the following:
  • Platform is Solaris SPARC 64-bit
  • All homes (CRS, ASM and DB) are non-shared
  • All homes are version 11.1.0.7
  • The Clusterware Home is /ocw/crs
  • The Clusterware, ASM and Database owner is oracle
  • The ASM Home is /db/11.1/asm
  • Database Home 1 is /db/11.1/db1
  • Database Home 2 is /db/11.1/db2

Note: This example only covers the application of the patch itself. It does NOT cover the additional database or any other component-specific steps required for the patch installation. You must therefore ALWAYS consult the patch README prior to attempting to install any patch.
 1. Create an EMPTY directory to stage the 11.1.0.7.7 CRS PSU as the CRS software owner (our example uses a directory named crspsu7):
% mkdir /u01/stage/crspsu7
2. Extract the CRS PSU into the empty stage directory as the CRS software owner:
% unzip -d /u01/stage/crspsu7 p11724953_11107_Solaris-64.zip
3. Verify that the OPatch installation in ALL 11.1.0.7 homes (for the Database homes there is a Database component to CRS patches) meets the minimum version requirement documented in the PSU README (see "How to find out the opatch version?"). If the version of OPatch in any one (or all) of the homes does not meet the minimum version required for the patch, OPatch must be upgraded in those homes prior to continuing (see "How do I install the latest OPatch release?").
4. As the oracle user, repeat the following to validate the inventory for ALL applicable homes:
% $CRS_HOME/OPatch/opatch lsinventory -detail -oh <home to be validated>
5. On the local node, stop all instances, listeners, ASM and CRS.
As oracle, from the ORACLE_HOME in which the resource is running:
% $ORACLE_HOME/bin/srvctl stop instance -i <instance> -d <database>
% $ASM_HOME/bin/srvctl stop asm -n <node>
% $CRS_HOME/bin/srvctl stop nodeapps -n <node>
As root:
# $CRS_HOME/bin/crsctl stop crs
6. As root, execute the preroot patch script:
# /u01/stage/crspsu7/11724953/custom/scripts/prerootpatch.sh -crshome /ocw/crs -crsuser oracle

Note: the crsuser is not "root"; it is the software install owner. This is a common mistake.
7. As the oracle user execute the prepatch script for the CRS Home:
% /u01/stage/crspsu7/11724953/custom/scripts/prepatch.sh -crshome /ocw/crs
8. As the oracle user execute the prepatch script for the Database/ASM Homes:
% /u01/stage/crspsu7/11724953/custom/server/11724953/custom/scripts/prepatch.sh -dbhome /db/11.1/asm
% /u01/stage/crspsu7/11724953/custom/server/11724953/custom/scripts/prepatch.sh -dbhome /db/11.1/db1
% /u01/stage/crspsu7/11724953/custom/server/11724953/custom/scripts/prepatch.sh -dbhome /db/11.1/db2
9. As the oracle user, apply the CRS PSU to the CRS Home using opatch napply:
% export ORACLE_HOME=/ocw/crs
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/crspsu7/11724953
10. As the oracle user, apply the database component of the CRS PSU to the Database/ASM Homes using opatch napply:
% export ORACLE_HOME=/db/11.1/asm
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/crspsu7/11724953/custom/server/11724953/
% export ORACLE_HOME=/db/11.1/db1
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/crspsu7/11724953/custom/server/11724953/
% export ORACLE_HOME=/db/11.1/db2
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/crspsu7/11724953/custom/server/11724953/
11. As the oracle user execute the postpatch script for the CRS Home:
% /u01/stage/crspsu7/11724953/custom/scripts/postpatch.sh -crshome /ocw/crs
12. As the oracle user execute the postpatch script for the Database/ASM Homes:
% /u01/stage/crspsu7/11724953/custom/server/11724953/custom/scripts/postpatch.sh -dbhome /db/11.1/asm
% /u01/stage/crspsu7/11724953/custom/server/11724953/custom/scripts/postpatch.sh -dbhome /db/11.1/db1
% /u01/stage/crspsu7/11724953/custom/server/11724953/custom/scripts/postpatch.sh -dbhome /db/11.1/db2
13. As root execute the postrootpatch script (this script will start the CRS stack):
# /u01/stage/crspsu7/11724953/custom/scripts/postrootpatch.sh -crshome /ocw/crs
14. As the oracle user repeat the following to validate inventory for ALL applicable homes:
% $CRS_HOME/OPatch/opatch lsinventory -detail -oh <home to be validated>
15. Start all instances, listeners and ASM on local node (CRS was started with the postrootpatch script):
% $ORACLE_HOME/bin/srvctl start instance -i <instance> -d <database>
% $ASM_HOME/bin/srvctl start asm -n <node>
% $CRS_HOME/bin/srvctl start nodeapps -n <node>
16. Repeat steps 1-15 on each node in the cluster, one node at a time.
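Because the manual sequence above is long and order-sensitive, it can help to drive the per-home steps from a small script with a dry-run mode. The sketch below is illustrative only (it abbreviates the sequence to the local CRS-home steps, reuses the example's paths, and ignores that the real steps alternate between the root and oracle users); it is not an Oracle-supplied tool:

```shell
#!/bin/sh
# run CMD... -- echo the command when DRYRUN is set, execute it otherwise.
run() {
    if [ -n "$DRYRUN" ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}

STAGE=/u01/stage/crspsu7/11724953   # staged patch from the example above
CRS_HOME=/ocw/crs

DRYRUN=1   # print the plan instead of executing it
run "$STAGE/custom/scripts/prerootpatch.sh" -crshome "$CRS_HOME" -crsuser oracle
run "$STAGE/custom/scripts/prepatch.sh" -crshome "$CRS_HOME"
run "$CRS_HOME/OPatch/opatch" napply -oh "$CRS_HOME" -local "$STAGE"
run "$STAGE/custom/scripts/postpatch.sh" -crshome "$CRS_HOME"
run "$STAGE/custom/scripts/postrootpatch.sh" -crshome "$CRS_HOME"
```

Reviewing the printed plan before unsetting DRYRUN catches ordering mistakes (e.g. postrootpatch.sh before postpatch.sh) at no cost.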

EXAMPLE: Applying a GI PSU patch manually

The example assumes the following:
  • Platform is Linux 64-bit
  • All Homes (GI and Database) are not shared
  • The Grid Infrastructure owner is grid
  • The Database owner is oracle
  • The Grid Infrastructure Home is /ocw/grid
  • Database Home1 is /db/11.2/db1
  • Database Home2 is /db/11.2/db2
  • All Homes are 11.2.0.2
  • ACFS is NOT in use on this cluster
Note: This example only covers the application of the patch itself. It does NOT cover the additional database or any other component-specific steps required for the patch installation. You must therefore ALWAYS consult the patch README prior to attempting to install any patch.
1. Create an EMPTY directory to stage the GI PSU as the GI software owner (our example uses a directory named gipsu3):
% mkdir /u01/stage/gipsu3

Note: The directory must be readable and writable by root, grid and all database users.
2. Extract the GI PSU into the empty stage directory as the GI software owner:
% unzip -d /u01/stage/gipsu3 p12419353_112020_Linux-x86-64.zip
3. Verify that the OPatch installation in ALL 11.2.0.2 homes that will be patched meets the minimum version requirement documented in the PSU README (see "How to find out the opatch version?"). If the version of OPatch in any one (or all) of the homes does not meet the minimum version required for the patch, OPatch must be upgraded in those homes prior to continuing (see "How do I install the latest OPatch release?").
4. As the grid user, repeat the following to validate the inventory for ALL applicable homes on ALL nodes:
% $GI_HOME/OPatch/opatch lsinventory -detail -oh <home to be validated>

Note: If any errors or inconsistencies are returned corrective action MUST be taken prior to applying the patch.
5. On the local node, use the "srvctl stop home" command to stop the database resources:
% $GI_HOME/bin/srvctl stop home -o /db/11.2/db1 -s /tmp/statefile_db1.out -n <node>
% $GI_HOME/bin/srvctl stop home -o /db/11.2/db2 -s /tmp/statefile_db2.out -n <node>
6. As root unlock the Grid Infrastructure Home as follows:
# export ORACLE_HOME=/ocw/grid
# $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/crs/install/rootcrs.pl -unlock   ## in a Grid Infrastructure cluster; for Oracle Restart see the note below

Note: If you are in an Oracle Restart environment, use the roothas.pl script instead of rootcrs.pl:
# $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/crs/install/roothas.pl -unlock
7. Execute the prepatch script for the Database Homes as the oracle user:
% /u01/stage/gipsu3/12419353/custom/server/12419353/custom/scripts/prepatch.sh -dbhome /db/11.2/db1
% /u01/stage/gipsu3/12419353/custom/server/12419353/custom/scripts/prepatch.sh -dbhome /db/11.2/db2
8. As the grid user apply the patch to the local GI Home using opatch napply:
% export ORACLE_HOME=/ocw/grid
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/gipsu3/12419353
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/gipsu3/12419331
9. As the oracle user apply the patch to the local Database Homes using opatch napply:
% export ORACLE_HOME=/db/11.2/db1
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/gipsu3/12419353/custom/server/12419353
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/gipsu3/12419331
% export ORACLE_HOME=/db/11.2/db2
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/gipsu3/12419353/custom/server/12419353
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/gipsu3/12419331
10. Execute the postpatch script for the Database Homes as the oracle user:
% /u01/stage/gipsu3/12419353/custom/server/12419353/custom/scripts/postpatch.sh -dbhome /db/11.2/db1
% /u01/stage/gipsu3/12419353/custom/server/12419353/custom/scripts/postpatch.sh -dbhome /db/11.2/db2
11. As the root user, execute the rootadd_rdbms script from the GI Home:
# export ORACLE_HOME=/ocw/grid
# $ORACLE_HOME/rdbms/install/rootadd_rdbms.sh
12. As root relock the Grid Infrastructure Home as follows (this will also start the GI stack):
# export ORACLE_HOME=/ocw/grid
# $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/crs/install/rootcrs.pl -patch   ## in a Grid Infrastructure cluster; for Oracle Restart see the note below

Note: If you are in an Oracle Restart environment, use the roothas.pl script instead of rootcrs.pl:
# $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/crs/install/roothas.pl -patch
13. Restart the database resources on the local node using the "srvctl start home" command:
# $GI_HOME/bin/srvctl start home -o /db/11.2/db1 -s /tmp/statefile_db1.out -n <node>
# $GI_HOME/bin/srvctl start home -o /db/11.2/db2 -s /tmp/statefile_db2.out -n <node>
14. As the grid user, repeat the following to validate the inventory for ALL patched homes:
% $GI_HOME/OPatch/opatch lsinventory -detail -oh <home to be validated>
15. Repeat steps 1-14 on each node in the cluster, one node at a time.
16. If you applied the Database PSU to the Database Homes (as shown in this example), you must now load the modified SQL files into the database(s) as follows:
Note: The patch README should be consulted for additional instructions!

% cd $ORACLE_HOME/rdbms/admin
% sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> QUIT

Common OPatch Failures and Troubleshooting

What files to review if opatch auto fails?

The key to "opatch auto" troubleshooting is understanding where and how its logs are generated. When "opatch auto" is invoked, it generates an opatchauto.log in the $ORACLE_HOME/cfgtoollogs directory of the ORACLE_HOME from which it was invoked. This log contains information about the execution of "opatch auto" itself and is the starting point for troubleshooting.

"opatch auto" also invokes additional OPatch sessions, using the OPatch executable from the home being patched, to perform the prerequisite checks and actually apply the patch. Each of these subsequent OPatch invocations generates a new opatch.log under $ORACLE_HOME/cfgtoollogs/opatch within the ORACLE_HOME being patched. These logs are often where the root cause of an "opatch auto" failure is found.

Should "opatch auto" fail to restart the stack (noted in the error message), consult the rootcrs or roothas log (11.2 only) in $GRID_HOME/cfgtoollogs/crsconfig, as "opatch auto" actually makes calls to rootcrs.pl or roothas.pl. Should an SR need to be opened in the event of an "opatch auto" failure, it is recommended to collect the OPatch logs discussed above, as well as the output of the diagcollection script as described in Document 330358.1, and provide this diagnostic data at the time of SR creation.
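When digging through these directories, a helper like the following (an illustration, not part of OPatch) prints the most recently written OPatch log for a given home, which is usually the right place to start:

```shell
#!/bin/sh
# Print the newest OPatch log for a home -- typically the first file to read
# after an "opatch auto" failure against that home.
latest_opatch_log() {   # usage: latest_opatch_log ORACLE_HOME
    ls -1t "$1"/cfgtoollogs/opatch/opatch*.log 2>/dev/null | head -n 1
}
```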

"opatch apply" or "opatch napply" failure if clusterware home is not unlocked

If "opatch napply" or "opatch apply" is executed without unlocking clusterware home first, the following will be reported:
ApplySession failed: ApplySession failed to prepare the system.
ApplySession was not able to create the patch_storage area: /ocw/grid/.patch_storage/
..
OPatch failed with error code 73
OR
chmod: changing permissions of `/ocw/grid/bin': Operation not permitted
..
mv: cannot move `/ocw/grid/bin/oracle' to `/ocw/grid/bin/oracleO': Permission denied
..
The solution is to carefully follow the instructions in the README packaged with the patch, which will direct you to use "opatch auto" if applicable or the manual opatch process.

OPatch reports: "Prerequisite check CheckSystemSpace failed"

On AIX, due to unpublished bug 9780505, opatch needs about 22GB of free disk space in $GRID_HOME to apply a GI PSU/Bundle, compared to about 4.4GB on other platforms. "opatch util cleanup -oh <home>" can be used to reclaim space in .patch_storage; refer to Document 550522.1 and Document 1088455.1 for a full explanation of this issue as well as other tips to work around it.
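Free space in the home's filesystem can be checked before patching. A portable sketch using `df -Pk` (the directory and the 5 GB threshold in the example are assumptions; substitute your GRID_HOME and the README's requirement):

```shell
#!/bin/sh
# Whole gigabytes free on the filesystem that holds DIR.
free_gb() {   # usage: free_gb DIR
    df -Pk "$1" | awk 'NR == 2 { printf "%d\n", $4 / 1048576 }'
}

# Illustrative pre-check against a hypothetical 5 GB requirement:
need_gb=5
dir=/tmp
if [ "$(free_gb "$dir")" -lt "$need_gb" ]; then
    echo "fewer than ${need_gb} GB free under $dir -- consider: opatch util cleanup -oh <home>"
fi
```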

Common causes of "The opatch version check failed"

ZOP-49: Not able to execute the prereq. OPatch cannot inform if the patch satisfies minimum version requirement.
  • Incorrect patch location specified
  • OPatch version does not meet the requirements for the patch (specified in the README), you must upgrade OPatch.
  • The grid or database user cannot read the unzipped patch directory/files; check directory permissions

OPatch reports: "Patch nnn requires component(s) that are not installed in OracleHome"

The not-installed components are typically oracle.crs or oracle.usm. The patch is not applicable to the target home; e.g. if the 11.2.0.3 GI PSU1 is applied to an 11.2.0.2 GI home, this error will be reported. Refer to Document 763680.1 for troubleshooting instructions.

Common causes of "The opatch Component check failed"

  • Incorrect patch location specified
  • The patch was unzipped into a non-empty directory
  • The proper version of the OPatch executable is not installed in the ORACLE_HOME being patched, and OPatch was not launched from that ORACLE_HOME
  • The current working directory is $ORACLE_HOME/OPatch of the ORACLE_HOME that is being patched (invoke OPatch from a different directory)
  • The $ORACLE_HOME/.patch_storage directory cannot be created; the solution is to create the directory manually, as the corresponding grid or database user, with permissions of 750
  • A listener is running from the database home; the solution is to shut down listeners running from the database home
  • The grid and database users are different; the workaround is to apply the patch manually
  • The patch is intended for another version, e.g. applying 11.2.0.2 GI PSU5 to an 11.2.0.3 GI home
Refer to Document 1169036.1 for troubleshooting instructions.
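For the .patch_storage case above, the manual fix can be scripted. A minimal sketch (the function name is illustrative; run it as the grid or database owner of the home):

```shell
#!/bin/sh
# Pre-create ORACLE_HOME/.patch_storage with mode 750 so OPatch does not
# fail creating it.  Run as the grid or database owner of the home.
ensure_patch_storage() {   # usage: ensure_patch_storage ORACLE_HOME
    mkdir -p "$1/.patch_storage" && chmod 750 "$1/.patch_storage"
}
```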

Common causes of "The opatch Conflict check failed"

  • A true patch conflict, the solution is to open an SR and request merge patch
  • A Clusterware patch may report a conflict even though it is a superset of the currently installed patch. Normally OPatch should roll back the installed patch in favor of the superset, but in some rare cases OPatch errors out; the solution is to manually roll back the installed patch and apply the new one.

Common causes of "The opatch Applicable check failed"

ZOP-46: The patch(es) are not applicable on the Oracle Home because some patch actions are not applicable. All required components, however, are installed.
Prereq "checkApplicable" for patch 13653086 failed.
..
Copy Action: Destination File "/u02/grid/11.2.0/bin/crsd.bin" is not writeable.
'oracle.crs, 11.2.0.2.0': Cannot copy file from 'crsd.bin' to '/u02/grid/11.2.0/bin/crsd.bin'

  • Some files in the target home are still locked - some processes are still running, or "rootcrs.pl -unlock" / "roothas.pl -unlock" has not been executed
  • Unzipped patch files in the stage area are unreadable by the grid or database user: "Copy Action: Source File "/patches/B202GIPSU4/12827731/files/bin/appvipcfg.pl" does not exists or is not readable". The solution is to unzip the patch as an OS user whose primary group is oinstall, for example the grid user.
  • Database Control (Enterprise Manager)/EM agent is not stopped: "Copy Action: Destination File "/ocw/grid/11.2/bin/emcrsp.bin" is not writeable.". To stop it, as the ORACLE_HOME owner execute: $ <ORACLE_HOME>/bin/emctl stop dbconsole
  • A directory is provided when prompted "OPatch is bundled with OCM, Enter the absolute OCM response file path"; the solution is to enter the absolute OCM response file directory and filename
For other potential causes, review opatch logfile.  The OPatch log file can be found in $ORACLE_HOME/cfgtoollogs/opatch.
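Several of the causes above boil down to files in the target home not being writable by the patching user. A quick precheck sketch (the file names in the test of concept are illustrative; the real list would come from the patch contents):

```shell
#!/bin/sh
# Report any of the given files the current user cannot write to; a
# non-writable crsd.bin usually means the GI home is still locked.
check_writable() {   # usage: check_writable FILE ...
    rc=0
    for f in "$@"; do
        if [ ! -w "$f" ]; then
            echo "not writable: $f"
            rc=1
        fi
    done
    return $rc
}
```

Running this over the binaries the patch replaces, before invoking OPatch, catches the "is not writeable" failure early instead of mid-apply.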

Common causes of "patch <patch id> apply failed for home <home>"


OUI-67124:Copy failed from '/ocw/grid/.patch_storage/13540563_Jan_16_2012_03_31_27/files/bin/crsctl.bin' to '/ocw/grid/bin/crsctl.bin'...
OUI-67124:ApplySession failed in system modification phase...
'Copy failed from '/ocw/grid/.patch_storage/13540563_Jan_16_2012_03_31_27/files/bin/crsctl.bin' to '/ocw/grid/bin/crsctl.bin'...
OUI-67124:NApply restored the home. Please check your ORACLE_HOME to make sure:

  • dbconsole is still running - it needs to be stopped prior to patching; as the database home owner, execute "<ORACLE_HOME>/bin/emctl stop dbconsole"
  • opatch bug 12732495; refer to Document 1472242.1 for more details
  • opatch bug 14308442 (closed as a duplicate of bug 14358407), affects AIX only; refer to Document 1518022.1 for more details
  • other opatch bugs - use the latest opatch to avoid known issues
For other potential causes, review opatch logfile.  The OPatch log file can be found in $ORACLE_HOME/cfgtoollogs/opatch.

When applying online patch in RAC: Syntax Error... Unrecognized Option for Apply .. OPatch failed with error code 14

The earlier patch README template was wrong for online patches in RAC; refer to Document 1356093.1 for details.

opatch Fails to Rollback Online(Hot) Patch in RAC With oracle/ops/mgmt/cluster/NoSuchNodeException and error code 30

Refer to note 1416823.1 for details.

opatch Fails to Rollback Online(Hot) Patch With Prerequisite check "CheckRollbackSid" and error code 30

Prerequisite check "CheckRollbackSid" failed. Patch ID: 13413533 The details are: There are no SIDs defined to check with the input SIDs.
Refer to note 1422190.1 for details.

Common causes of "defined(@array) is deprecated at crs/crsconfig_lib.pm"

"opatch auto" reports the following warning message:
defined(@array) is deprecated at crs/crsconfig_lib.pm line 2149
The issue happens when applying 11.2.0.3.1 (patch 13348650) with the latest opatch. It is being tracked via internal bug 13556626 and bug 13555176; the message can be safely ignored.

opatch auto Reports: The path "/bin/acfsdriverstate" does not exist

"opatch auto" reports the following message:
The path "/ocw/grid/bin/acfsdriverstate" does not exist
The message can be ignored; refer to Document 1469969.1 for details.

Applying [PSU] patch takes very long time (hours) after "verifying the update"

opatch or "opatch auto" takes hours to finish after the following message is printed:
[Feb 7, 2013 9:27:18 PM] verifying 1009 copy files. << no further output; it can stall here for hours but eventually completes
This is due to bug 13963741, which at the time of this writing is still being worked on by Development. The workaround is to unset the environment variable JAVA_COMPILER (i.e. remove the JAVA_COMPILER=NONE setting); refer to Document 1541186.1 for more details.

 

New OPatch Features

To find out about new features in opatch 11.2.0.3.2, refer to Document 1497639.1.

For searchability: database home; RAC home; RDBMS home; GI home; DB home; ASM home RDBMS_HOME; ASM_HOME; DB_HOME; ASM_HOME GI user; ASM user; DB user; database user

REFERENCES

NOTE:763680.1 - Opatch Error "UtilSession failed: Patch nnn requires component(s) that are not installed" When Applying CRS or Grid Infrastructure Merge, Bundle or PSU Patches
NOTE:1416823.1 - opatch Fails to Rollback Online(Hot) Patch in RAC With oracle/ops/mgmt/cluster/NoSuchNodeException and error code 30
NOTE:1410202.1 - How to Apply a Grid Infrastructure Patch Before root script (root.sh or rootupgrade.sh) is Executed?
NOTE:1417268.1 - opatch report "ERROR: Prereq checkApplicable failed." when Applying Grid Infrastructure patch
NOTE:1422190.1 - opatch Fails to Rollback Online(Hot) Patch With Prerequisite check "CheckRollbackSid" failed and error code 30
NOTE:293369.1 - Master Note For OPatch
NOTE:810663.1 - 11.1.0.X CRS Bundle Patch Information
NOTE:854428.1 - Patch Set Updates for Oracle Products
NOTE:330358.1 - Oracle Clusterware 10gR2/ 11gR1/ 11gR2/ 12cR1 Diagnostic Collection Guide
NOTE:1212703.1 - Grid Infrastructure Startup During Patching, Install or Upgrade May Fail Due to Multicasting Requirement
NOTE:1088455.1 - opatch CheckSystemSpace Fails With Error Code 73 While Applying GI PSU
NOTE:337737.1 - Oracle Clusterware (CRS/GI) - ASM - Database Version Compatibility
NOTE:1089476.1 - Patch 11gR2 Grid Infrastructure Standalone (Oracle Restart)
NOTE:1272288.1 - 11.2.0.2.X Grid Infrastructure Bundle/PSU Known Issues
NOTE:1169036.1 - Applying GI PSU using "opatch auto" fails with "The opatch Component check failed"
NOTE:1356093.1 - Syntax error while applying and rolling back an online patch In RAC environment
NOTE:1064804.1 - Apply Grid Infrastructure/CRS Patch in Mixed Version RAC Database Environment
NOTE:1082394.1 - 11.2.0.1.X Grid Infrastructure PSU Known Issues
NOTE:405820.1 - 10.2.0.X CRS Bundle Patch Information
NOTE:1668630.1 - AIX: Apply PSU or Interim patch Fails with Copy Failed as TFA was Running
