Thursday, October 31, 2019

FAQ: OPatch/Patch Questions/Issues for Oracle Clusterware (Grid Infrastructure or CRS) and RAC Environments (Doc ID 1339140.1)

In this Document
Purpose
Questions and Answers
 Oracle Home Type
 What type of home do I have?
 Types of Patches
 What Types of Patches are available for Oracle Clusterware, Grid Infrastructure and/or the Database?
 Patchsets Q&A
 What's a Patchset?
 Patch Set Updates (PSUs) Q&A
 What's a Patch Set Update (PSU)?
 What's the PSU release schedule?
 Will the 5th digit of the version be changed after PSU is applied?
 What's included in a GI PSU ?
 Can a Database PSU be applied to a clusterware home?
 Critical Patch Updates (CPUs) Q&A
 What are Critical Patch Updates (CPUs)?
 Bundle Patches Q&A
 What's the difference between a Clusterware/Grid Infrastructure bundle patch and a PSU?
 Interim (one-off) Patch Q&A
 What's an interim patch (one-off patch)?
 General Patch Q&A
 What's the difference between Clusterware/Grid Infrastructure patches and Database patches?
 What does a Clusterware/Grid Infrastructure patch contain?
 Which Patch Applies to Which Home
 What's Oracle's patching recommendation?
 Which oracle home does a patch apply to?
 How do I tell from the patch readme which home the patch applies to?
 Do Exadata patches apply to GI or RAC homes?
 Can I upgrade the Clusterware or Grid Infrastructure to a higher version while leaving database at a lower version?
 Do I need downtime to apply a Clusterware or Grid Infrastructure patch?
 Which PSU patch applies to what home in mixed environments (clusterware and database at different version)?
 OPatch Q&A
 How to find out the opatch version?
 How do I install the latest OPatch release?
 Why is "opatch auto" not patching my RAC database home?
 What's the difference between manual opatch apply and opatch auto?
 How do I apply a Grid Infrastructure patch before the root script (root.sh or rootupgrade.sh) is executed?
 How to apply a patch after the root script (root.sh or rootupgrade.sh) has failed?
 How to apply a Clusterware or Grid Infrastructure patch manually?
 Examples
 OPatch Auto Example to Apply a GI PSU (includes Database PSU)
 EXAMPLE:  Apply a CRS patch manually
 EXAMPLE: Applying a GI PSU patch manually
 Common OPatch Failures and Troubleshooting
 What files to review if opatch auto fails?
 "opatch apply" or "opatch napply" failure if clusterware home is not unlocked
 OPatch reports: "Prerequisite check CheckSystemSpace failed"
 Common causes of "The opatch version check failed"
 OPatch reports: "Patch nnn requires component(s) that are not installed in OracleHome"
 Common causes of "The opatch Component check failed"
 Common causes of "The opatch Conflict check failed"
 Common causes of "The opatch Applicable check failed"
 Common causes of "patch <patch-loc>  apply  failed  for home <ORACLE_HOME>"
 When applying online patch in RAC: Syntax Error... Unrecognized Option for Apply .. OPatch failed with error code 14
 opatch Fails to Rollback Online(Hot) Patch in RAC With oracle/ops/mgmt/cluster/NoSuchNodeException and error code 30
 opatch Fails to Rollback Online(Hot) Patch With Prerequisite check "CheckRollbackSid" and error code 30
 Common causes of "defined(@array) is deprecated at crs/crsconfig_lib.pm"
 opatch auto Reports: The path "<GRID_HOME>/bin/acfsdriverstate" does not exist
 Applying [PSU] patch takes very long time (hours) after "verifying the update"
 opatch auto Reports: Not able to retreive database home information
  
 New OPatch Features
References

APPLIES TO:

Oracle Database - Enterprise Edition - Version 10.1.0.2 to 11.2.0.4 [Release 10.1 to 11.2]
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Backup Service - Version N/A and later
Information in this document applies to any platform.

PURPOSE

This note lists the top OPatch or patch-related questions and problems in Oracle Clusterware (11gR2 Grid Infrastructure or pre-11.2 CRS) and RAC environments.

It is not intended to replace the readme that comes with a patch; rather, it is recommended to read the patch readme thoroughly and follow it. Occasionally a patch readme may contain incorrect information; in that case, please engage Oracle Support for clarification.

For general opatch questions or problems, refer to Document 293369.1

QUESTIONS AND ANSWERS


Oracle Home Type

What type of home do I have?

In a pre-11.2 Clusterware environment you will have:
  •  A single Clusterware (CRS) home
  •  One or more Database (RDBMS) homes at the same or at a lower release level than the Clusterware home (to the 4th dot in the release)
  •  Optionally a separate dedicated RDBMS home to run ASM (Automatic Storage Management) at the same or at a lower release level than the Clusterware home (to the 4th dot in the release)
    • In pre-11.2 installations, ASM was implemented using the RDBMS binary installation so this should be treated as a typical RDBMS home
In a 11.2 Grid Infrastructure environment you will have:
  •  A single Grid Infrastructure (GI) Home that implements both ASM and the Clusterware
    Note: During upgrade from pre-11gR2 to 11gR2 GI, the ASM upgrade can optionally be performed independently of the Clusterware (not recommended).  In this case the pre-11gR2 ASM Home would still be active after the GI Upgrade.  This is NOT the recommended method of upgrading to Grid Infrastructure and will therefore not be discussed in this note.

    Note: Starting with 11gR2, all upgrades of and to Grid Infrastructure MUST be performed "out-of-place" (into a new software home). For this reason there may be more than one clusterware home, however only one will be active.  This "inactive" Clusterware or GI home should be removed once you are satisfied that the upgrade was successful.
  •  One or more database (RDBMS) homes at the same or at a lower release level than the GI home (to the 4th dot in the release)

Types of Patches

What Types of Patches are available for Oracle Clusterware, Grid Infrastructure and/or the Database?

Generally, patches for Clusterware, Grid Infrastructure and/or the Database are categorized into the following:
  • Patchsets
  • Patchset Updates (PSUs)
  • Critical Patch Updates (CPUs)
  • Bundle Patches
  • Interim (one-off) Patches

Patchsets Q&A

What's a Patchset?
Compared to all other patch types, a Patchset is released the least frequently. It contains fixes for most known issues for the release and may also introduce new features. A patchset is cumulative and when applied it changes the fourth digit of the product release banner - for example, 10.2.0.5 is the 4th patchset for 10.2, and 11.2.0.2 is the 1st patchset for 11.2.
A patchset must be installed via the Oracle Universal Installer (OUI) and is generally considered an "upgrade".

Prior to 11gR2, a base release had to be installed before a patchset could be applied. For example, to install 10.2.0.5 on Linux, the 10.2.0.1 base release had to be installed first and then upgraded to 10.2.0.5.

Prior to 11gR2, the same patchset download is used to patch the Clusterware, ASM and Database homes.  For example, Patch 8202632 is the 10.2.0.5 patchset; this same patch (Patch 8202632) will be used to patch the 10.2.0.x Clusterware, 10.2.0.x ASM and 10.2.0.x Database to 10.2.0.5.

Starting with 11gR2, patchset releases are now full releases and no longer require a "base" release, e.g. 11.2.0.2 can be installed directly without having to install 11.2.0.1 first.

Prior to 11gR2 - even though the CRS and RDBMS base releases were provided on separate media (downloadable zip file or separate DVD/CD) - the patchsets for both products were delivered as one i.e. the same patchset could be applied to the CRS as well as the RDBMS home.

Starting with 11gR2 the patchsets for Grid Infrastructure and RDBMS are delivered separately (as they are full releases).

Clusterware patchsets can be applied in a rolling fashion, while database patchsets cannot. For example, you can rolling-upgrade the clusterware to 11.2.0.2, but you have to shut down the database on all nodes to upgrade the database to 11.2.0.2.

Patch Set Updates (PSUs) Q&A

What's a Patch Set Update (PSU)?
As the name implies, PSUs are patches that are applied on top of a given patchset release. They are released on a quarterly basis and contain fixes for known critical issues for the patchset. PSUs are subject to thorough testing and do not include changes that would alter the functionality of the software.  With this in mind, PSUs are designed to be "low risk" and "high value", allowing customers to more easily adopt proactive patching strategies. Consider the following PSU facts:
  • All PSUs are installed via "opatch" and are not considered an "upgrade".
  • Database PSUs always contain the CPU for the respective quarter that the PSU is released in.  PSUs and CPUs are NOT compatible, meaning that if you apply the 11.2.0.2.2 Database PSU and then want to apply the 11.2.0.2 July CPU, this would result in the rollback of the 11.2.0.2.2 Database PSU.  Therefore, once a PSU patching strategy is adopted it must be maintained.
  • Independent PSUs are released for both the Database and Clusterware or Grid Infrastructure installations.
    • Clusterware PSUs (pre-11.2) are referred to as CRS PSUs
    • Grid Infrastructure PSUs are referred to as GI PSUs
      • GI PSUs do contain the Database PSU for the corresponding release, e.g.  11.2.0.2.3 GI PSU contains the 11.2.0.2.3 Database PSU
    • Database PSUs hold true to their name
  • Both Clusterware/Grid Infrastructure and Database PSU patches are cumulative. Clusterware PSU refers to CRS PSU for pre-11gR2 and GI PSU for 11gR2.
  • GI PSUs are always cumulative, meaning that you can apply a higher-version GI PSU directly without having to apply a lower-version one first. For example, the 11.2.0.2.2 GI PSU can be applied to a 11.2.0.2 home without having to apply GI PSU 11.2.0.2.1 first.
  • Database PSUs can be subject to overlay PSU packaging.  In these cases, the PSUs are still cumulative, but a higher PSU may require a lower PSU to be applied first; for example, to apply database PSU 10.2.0.4.7, you must apply database PSU 10.2.0.4.4 first.  If a previous PSU is a prerequisite to a later PSU the requirement will be clearly documented in the PSU readme.
  • For more information on PSUs please review Document 854428.1.
What's the PSU release schedule?
Generally speaking, PSUs are released on a quarterly basis for both Clusterware/Grid Infrastructure and Database.  There are cases where a Clusterware PSU is not released for a corresponding Database PSU. For example, there is a database PSU 10.2.0.5.4 but no CRS PSU 10.2.0.5.4.
Will the 5th digit of the version be changed after PSU is applied?
A PSU will not physically update the 5th digit of the release information; the updates to the 5th digit are for documentation purposes only. So the third GI PSU that was released for 11.2.0.2 will have a documentation version of 11.2.0.2.3. You will NOT see this change reflected in the actual software version if you query it from the inventory, clusterware or database.
What's included in a GI PSU ?
Unlike other Grid Infrastructure patches (discussed later), 11gR2 GI PSUs contain both the GI PSU and the Database PSU (YES, both GI and DB PSU) for a particular quarter. For example, the 11.2.0.2.2 GI PSU contains both the 11.2.0.2.2 GI PSU and the 11.2.0.2.2 Database PSU.

You can see this when you extract a GI PSU: there will be 2 directories (named with the patch number), one for the GI PSU and one for the RDBMS PSU.

How do I find out whether a bug is fixed in a Clusterware or Grid Infrastructure PSU? Check the patch readme and the following notes:
Document:405820.1 - 10.2 CRS PSU Known issues
Document:810663.1 - 11.1 CRS PSU Known issues
Document:1082394.1 - 11.2.0.1 GI PSU Known issues
Document:1272288.1 - 11.2.0.2 GI PSU Known Issues
Document:1508641.1 - 11.2.0.3.x Grid Infrastructure Bundle/PSU Known Issues



Once the GI PSU is applied, "opatch lsinventory" will show that both the GI PSU and the DB PSU are applied, e.g.:
Interim patches (2) :

Patch  9654983      : applied on Thu Feb 02 20:36:47 PST 2012
Patch  9655006      : applied on Thu Feb 02 20:35:53 PST 2012

And "opatch lsinventory -bugs_fixed" will list each individual bug that has been fixed by all installed patches, e.g.:
List of Bugs fixed by Installed Patches:

Bug        Fixed by  Installed at                   Description
            Patch
---        --------  ------------                   -----------

7519406    9654983   Thu Feb 02 20:36:47 PST 2012   'J000' TRACE FILE REGARDING GATHER_STATS_JOB INTER
..
9783609    9655006   Thu Feb 02 20:35:53 PST 2012   CRS 11.2.0.1.0 Bundle

Can a Database PSU be applied to a clusterware home?
No, only CRS PSUs, GI PSUs or other Clusterware/GI patches can be applied to a Clusterware/GI home.

Critical Patch Updates (CPUs) Q&A

What are Critical Patch Updates (CPUs)?
CPU Patches are collections of high-priority fixes for security-related issues and are only applicable to the Database Home (and pre-11.2 ASM Home(s)).  CPUs are released quarterly on the same cycle as PSUs and are cumulative with respect to prior security fixes, but may contain other fixes in order to address patch conflicts with non-security patches.  PSUs always contain the CPU for that respective quarter.  PSUs and CPUs are NOT compatible, meaning that if you apply the 11.2.0.2.2 Database PSU and then want to apply the 11.2.0.2 July CPU, this would result in the rollback of the 11.2.0.2.2 Database PSU. Therefore, once a PSU patching strategy is adopted it must be maintained.  PSU patching is preferred over CPU patching.

Bundle Patches Q&A

What's the difference between a Clusterware/Grid Infrastructure bundle patch and a PSU?
A Clusterware or Grid Infrastructure (GI) patch can be in the form of bundle or Patchset Update (PSU).  The biggest difference between a GI/Clusterware bundle and a PSU is that PSUs are bound to a quarterly release schedule while a bundle may be released at any time throughout the course of a given quarter.  If a GI/Clusterware bundle is released in a given quarter, the fixes in that bundle will be included in the PSU that will be released for that quarter.  This concept allows for development to provide critical bug fixes in a more granular timeline if necessary.

Interim (one-off) Patch Q&A

What's an interim patch (one-off patch)?
An interim patch contains fixes for one or in some cases several bugs (merge patch).

Clusterware interim patches are rare; they are usually built on top of the latest PSU (at the time) and include the entire PSU they were built on.

The same does not hold true for database interim patches, they usually do not include a PSU.

An interim clusterware patch on top of a PSU includes the PSU, for example, 11.2.0.2.2 patch 12647618 includes 11.2.0.2 GI PSU2 (11.2.0.2.2). But the same is not true for database interim patch, for example, 11.2.0.2.2 database patch 11890804 can only be applied on top of a 11.2.0.2.2 database home.

General Patch Q&A

What's the difference between Clusterware/Grid Infrastructure patches and Database patches?
Generally speaking Clusterware/Grid Infrastructure patches modify files that belong to the Clusterware or Grid Infrastructure product, while Database patches change files that belong to the database product.  As they apply to different sets of files, they do not conflict with each other.

Please note:
  •  "files" in this context can refer to binaries/executables, scripts, libraries etc
  •  Clusterware files can reside in all types of Oracle software homes like clusterware home, database home and ASM home
  •  Prior to 11gR2, RDBMS files reside in DB/ASM homes only, while with 11gR2 RDBMS files will also reside in the GI home (as ASM is now part of GI)
  •  A GI PSU is a special type of clusterware patch as it also includes a database PSU and modifies database binaries.
What does a Clusterware/Grid Infrastructure patch contain?
Clusterware and Grid Infrastructure patches have at least two portions, one for the clusterware home and the other for the Database home(s). The two portions have the same version number. The clusterware home portion is in the top-level directory of the unzipped location, while the other portion is in custom/server/<bug#>.  For example, if the 11.2.0.2.2 GI PSU is unzipped to /op/db/11.2.0.2/GIPSU2, the Grid Infrastructure home portion will be in /op/db/11.2.0.2/GIPSU2/12311357 and the database home specific portion will be in /op/db/11.2.0.2/GIPSU2/12311357/custom/server/12311357.  This is just an example; full details will be in the README for the patch that is being applied.  ALWAYS consult the patch README prior to applying any patch.
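As a rough illustration of that layout, the database-home portion of an unzipped patch can be located with find. The staging path and bug number in the commented example are hypothetical; always confirm the actual layout against the patch README.

```shell
# Sketch: locate the database-home portion of an unzipped Clusterware/GI
# patch, which sits under custom/server/<bug#> of the unzipped patch.
find_db_portion() {
  # $1 = top-level directory of the unzipped patch
  # -prune stops descent so only the custom/server/<bug#> directory
  # itself is printed, not its subdirectories
  find "$1" -type d -path '*/custom/server/*' -prune 2>/dev/null | head -1
}

# Example (paths are illustrative):
#   find_db_portion /op/db/11.2.0.2/GIPSU2/12311357
#   would print .../12311357/custom/server/12311357
```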

Which Patch Applies to Which Home

What's Oracle's patching recommendation?

Oracle recommends to apply the latest patchset and latest PSU as a general best practice.

Which oracle home does a patch apply to?

A home must meet the patch version specification to be applicable.

In CRS environments, a Clusterware patch (interim, MLR, bundle or PSU) applies to all homes (CRS, ASM and database), while a Database patch applies to the ASM and Database homes only.

In Grid Infrastructure environments, a GI patch (interim, bundle or PSU) applies to all homes (GI and Database), while a Database patch could be applicable to GI/ASM home if the fix applies to ASM (in this case patch README will tell clearly that it applies to GI/ASM home).

How do I tell from the patch readme which home the patch applies to?

The patch README usually tells which home it applies to.

Common identifier in patch README:
# Product Patched : ORCL_PF ==> Applies to database and pre-11.2 ASM home
# Product Patched : RDBMS ==> Applies to database and pre-11.2 ASM home
# Product Patched : CRS/RDBMS ==> Applies to both clusterware and database homes

Oracle Database 11g Release 11.2.0.2.3 ==> Applies to database home.
Oracle Clusterware 11g Release 2 (11.2.0.2.2) ==> Applies to both clusterware and database homes
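These identifiers can also be checked mechanically. The sketch below greps a README for the "Product Patched" line and maps the tags listed above to target homes; the README path in the example is hypothetical, and the full README should still be read in every case.

```shell
# Sketch: map the "Product Patched" identifier in a patch README to the
# homes the patch applies to, using the tags documented in this note.
check_product_patched() {
  # $1 = path to the patch README
  product=$(grep -i 'Product Patched' "$1" | head -1 |
            awk -F: '{gsub(/ /, "", $2); print $2}')
  case "$product" in
    CRS/RDBMS)     echo "Applies to both clusterware and database homes" ;;
    ORCL_PF|RDBMS) echo "Applies to database and pre-11.2 ASM homes" ;;
    *)             echo "Identifier not found - read the full README" ;;
  esac
}

# Example (path is illustrative):
#   check_product_patched /u01/stage/gipsu3/12419353/README.txt
```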

Do Exadata patches apply to GI or RAC homes?

Generally speaking, Exadata patches should only be applied to Exadata environments.

Can I upgrade the Clusterware or Grid Infrastructure to a higher version while leaving database at a lower version?


Yes, as long as the Clusterware/Grid Infrastructure is at the same or a higher version than the RAC database homes on the cluster, this is perfectly acceptable; refer to Document 337737.1 for details

Do I need downtime to apply a Clusterware or Grid Infrastructure patch?

If the Clusterware/Grid Infrastructure home is not shared (common), Clusterware/Grid Infrastructure patches can be applied in rolling fashion so no downtime (of the entire cluster) is needed.

Which PSU patch applies to what home in mixed environments (clusterware and database at different version)?

Example 1:
11.2.0.2 GI + 11.2.0.2 DB, 11.1.0.7 DB and 10.2.0.5 DB

In this environment, the 11.2.0.2 GI PSU applies to both the 11.2.0.2 GI and 11.2.0.2 DB homes (the standalone 11.2.0.2 DB PSU does not need to be applied to any home), the 11.1.0.7 CRS PSU and 11.1.0.7 database PSU apply to the 11.1.0.7 DB home, and the 10.2.0.5 CRS PSU and 10.2.0.5 database PSU apply to the 10.2.0.5 DB home.

Example 2:
11.1.0.7 CRS + 11.1.0.7 DB, 10.2.0.5 DB

In this environment, the 11.1.0.7 CRS PSU applies to the 11.1.0.7 CRS home, the 11.1.0.7 CRS PSU and 11.1.0.7 database PSU apply to the 11.1.0.7 database home, and the 10.2.0.5 CRS PSU and 10.2.0.5 DB PSU apply to the 10.2.0.5 DB home.

OPatch Q&A

How to find out the opatch version?

To find out the opatch version perform the following as the Oracle software owner:
% export ORACLE_HOME=<home-path>
% $ORACLE_HOME/OPatch/opatch version

How do I install the latest OPatch release?

Prior to applying a patch to a system it is HIGHLY recommended (often required) to download and install the latest version of the OPatch utility into every ORACLE_HOME (including Grid Infrastructure Oracle homes) on every cluster node that is to be patched.  The latest version of OPatch can be found on MOS under Patch 6880880.  Be sure to download the release of OPatch that is applicable for your platform and major release (e.g. Linux-x86-64 11.2.0.0.0).  Once OPatch has been downloaded and staged on your target system(s) it can be installed by executing the following as the Oracle Software installation owner:
% export ORACLE_HOME=<home-path>
% unzip -d $ORACLE_HOME p6880880_112000_Linux-x86-64.zip
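Once installed, a quick check that OPatch meets the minimum version stated in a patch README can be sketched as follows. The 11.2.0.1.5 minimum shown is purely illustrative, and GNU "sort -V" is assumed to be available:

```shell
# Sketch: compare the installed OPatch version against a README-stated
# minimum. Version numbers below are illustrative examples only.
version_ge() {
  # true if $1 >= $2 when compared as dotted version strings
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -1)" = "$2" ]
}

# Example with a hypothetical installed version:
#   current=$($ORACLE_HOME/OPatch/opatch version | awk '/Version/ {print $NF}')
#   version_ge "$current" 11.2.0.1.5 || echo "Upgrade OPatch (Patch 6880880)"
```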

Why is "opatch auto" not patching my RAC database home?

Refer to Document 1479651.1.

What's the difference between manual opatch apply and opatch auto?

In a single-instance environment, applying a database patch is very simple: shut down all processes running from the home to be patched and invoke "opatch apply" or "opatch napply".

Now, to apply a Clusterware patch to a Clusterware home or a GI patch to a Grid Infrastructure home, the clusterware stack needs to be down, the clusterware home needs to be unlocked, and the configuration needs to be saved before the patch can be applied; once the patch is applied, the configuration needs to be restored, the clusterware home needs to be re-locked, and then the clusterware stack can be started.  Once the Clusterware or GI patch has been applied to the Clusterware/GI home, you then have to apply the database components of that patch to the Database Home with a whole other list of steps.  This is a complex process that has proven to be error prone when the instructions are not carefully followed.

This is where OPatch Auto comes in.  The whole goal of OPatch auto is for an administrator to only have to execute a single command to apply a patch to a RAC system, thus simplifying the process.

How do I apply a Grid Infrastructure patch before the root script (root.sh or rootupgrade.sh) is executed?

Refer to Document:1410202.1 for details.

How to apply a patch after the root script (root.sh or rootupgrade.sh) has failed?

In some cases the root script fails because of a missing patch (e.g. Document 1212703.1 - Oracle Grid Infrastructure 11.2.0.2 Installation or Upgrade may fail due to Multicasting Requirement).

When this happens, we must first determine whether the root script made it to the point at which the ownership/permissions of the GI software are changed.  This determines whether unlocking the GI software is necessary prior to installing the patch.  To determine if unlocking is required, execute the following against the GI home:
# find $GRID_HOME -user root

If the above command returns no files, unlocking the software is not necessary.  Should the command return root owned files you must unlock the GI Home by executing the following as the root user:
# $GI_HOME/perl/bin/perl $GI_HOME/crs/install/rootcrs.pl -unlock

Once the GI home is unlocked (or it was determined that unlocking was not necessary), refer to the answer to the question "How do I apply a Grid Infrastructure patch before the root script (root.sh or rootupgrade.sh) is executed?" to install the required patch.
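The decision above can be sketched as a small helper; GRID_HOME is a placeholder, and this only mirrors the find check already shown:

```shell
# Sketch: decide whether the GI home must be unlocked before patching.
# If no files under the home are owned by root, no unlock is needed.
needs_unlock() {
  # $1 = GI home; true if any file under it is owned by root
  [ -n "$(find "$1" -user root 2>/dev/null | head -1)" ]
}

# Example (illustrative):
#   if needs_unlock "$GRID_HOME"; then
#     echo "Run rootcrs.pl -unlock as root before patching"
#   else
#     echo "Unlock not required"
#   fi
```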

How to apply a Clusterware or Grid Infrastructure patch manually?

It's recommended to use opatch auto when applicable; but in cases where opatch auto is not applicable or fails to apply the patch, refer to the patch README and the following notes for manual patch instructions:
Document 1089476.1 - Patch 11gR2 Grid Infrastructure Standalone (Oracle Restart)

Document 1064804.1 - Apply Grid Infrastructure/CRS Patch in Mixed environment

Examples

OPatch Auto Example to Apply a GI PSU (includes Database PSU)

  • Platform is Linux 64-bit
  • All Homes (GI and Database) are not shared
  • All Homes are 11.2.0.2
  • The Grid Infrastructure owner is grid
  • The Database owner is oracle
  • The Grid Infrastructure Home is /ocw/grid
  • Database Home1 is /db/11.2/db1
  • Database Home2 is /db/11.2/db2
  •  The 11.2.0.2.3 GI PSU will be applied in this example
  • ACFS is NOT in use on this cluster
Note: This example only covers the application of the patch itself.  It does NOT cover the additional database, ACFS or any other component-specific steps required for the PSU installation.  You must ALWAYS consult the patch README prior to attempting to install any patch.


1. Create an EMPTY directory to stage the GI PSU as the GI software owner (our example uses a directory named gipsu3):
% mkdir /u01/stage/gipsu3

Note: The directory must be readable and writable by root, grid and all database users.

2. Extract the GI PSU into the empty stage directory as the GI software owner:
% unzip -d /u01/stage/gipsu3 p12419353_112020_Linux-x86-64.zip

3. Verify that opatch in ALL 11.2.0.2 homes that will be patched meet minimum version requirement documented in the PSU README (see "How to find out the opatch version?").  If the version of OPatch in any one (or all) of the homes does not meet the minimum version required for the patch, OPatch must be upgraded in this/these homes prior to continuing (see "How do I install the latest OPatch release?").

4. As grid user repeat the following to validate inventory for ALL applicable homes on ALL nodes:
% $GI_HOME/OPatch/opatch lsinventory -detail -oh <home-path>

Note: If any errors or inconsistencies are returned corrective action MUST be taken prior to applying the patch.

5. As the user root, execute OPatch auto to apply GI PSU 11.2.0.2.3 to all 11.2.0.2 Grid Infrastructure and Database homes:
# cd /u01/stage/gipsu3
# export GI_HOME=/ocw/grid
# $GI_HOME/OPatch/opatch auto /u01/stage/gipsu3

Note:  You can optionally apply the patch to an individual 11.2.0.2 home by specifying the -oh <home path> to the opatch auto command:

# $GI_HOME/OPatch/opatch auto /u01/stage/gipsu3 -oh /ocw/grid

This would apply the 11.2.0.2.3 GI PSU to ONLY the 11.2.0.2 GI Home.

6. As the grid user, repeat the following to validate the inventory for all patched homes:
% $GI_HOME/OPatch/opatch lsinventory -detail -oh <home-path>

7. Repeat steps 1-6 on each of the remaining cluster nodes, 1 node at a time.

8. If you applied the Database PSU to the Database Homes (as shown in this example), you must now load the Modified SQL Files into the Database(s) as follows:
Note:  The patch README should be consulted for additional instructions!

% cd $ORACLE_HOME/rdbms/admin
% sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> QUIT

EXAMPLE:  Apply a CRS patch manually

The example assumes the following:
  • Platform is Solaris SPARC 64-bit
  • All homes (CRS, ASM and DB) are non-shared
  • All homes are version 11.1.0.7
  • The Clusterware Home is /ocw/crs
  • The Clusterware, ASM and Database owner is oracle
  • The ASM Home is /db/11.1/asm
  • Database Home 1 is /db/11.1/db1
  • Database Home 2 is /db/11.1/db2

Note: This example only covers the application of the patch itself. It does NOT cover the additional database or any other component-specific steps required for the patch installation. You must ALWAYS consult the patch README prior to attempting to install any patch.

 1. Create an EMPTY directory to stage the 11.1.0.7.7 CRS PSU as the CRS software owner (our example uses a directory named crspsu7):
% mkdir /u01/stage/crspsu7

2. Extract the CRS PSU into the empty stage directory as the CRS software owner:
% unzip -d /u01/stage/crspsu7 p11724953_11107_Solaris-64.zip

3. Verify that opatch in ALL 11.1.0.7 homes (for the Database homes there is a Database component to CRS patches) meets the minimum version requirement documented in the PSU README (see "How to find out the opatch version?"). If the version of OPatch in any one (or all) of the homes does not meet the minimum version required for the patch, OPatch must be upgraded in this/these homes prior to continuing (see "How do I install the latest OPatch release?").

4. As oracle repeat the following to validate inventory for ALL applicable homes:
% $CRS_HOME/OPatch/opatch lsinventory -detail -oh <home-path>

5. On the local node, stop all instances, listeners, ASM and CRS:
As oracle from the ORACLE_HOME in which the resource is running:
% $ORACLE_HOME/bin/srvctl stop instance -i <local instance name> -d <db name>
% $ASM_HOME/bin/srvctl stop asm -n <local node>
% $CRS_HOME/bin/srvctl stop nodeapps -n <local node>

As root:
# $CRS_HOME/bin/crsctl stop crs

6. As root execute the preroot patch script:
# /u01/stage/crspsu7/11724953/custom/scripts/prerootpatch.sh -crshome /ocw/crs -crsuser oracle

Note: the crsuser is not "root"; it is the software install owner. This is a common mistake.

7. As the oracle user execute the prepatch script for the CRS Home:
% /u01/stage/crspsu7/11724953/custom/scripts/prepatch.sh -crshome /ocw/crs

8. As the oracle user execute the prepatch script for the Database/ASM Homes:
% /u01/stage/crspsu7/11724953/custom/server/11724953/custom/scripts/prepatch.sh -dbhome /db/11.1/asm
% /u01/stage/crspsu7/11724953/custom/server/11724953/custom/scripts/prepatch.sh -dbhome /db/11.1/db1
% /u01/stage/crspsu7/11724953/custom/server/11724953/custom/scripts/prepatch.sh -dbhome /db/11.1/db2

9. As oracle apply the CRS PSU to the CRS Home using opatch napply:
% export ORACLE_HOME=/ocw/crs
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/crspsu7/11724953

10. As the oracle user apply the database component of the CRS PSU to the Database/ASM Homes using opatch napply:
% export ORACLE_HOME=/db/11.1/asm
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/crspsu7/11724953/custom/server/11724953
% export ORACLE_HOME=/db/11.1/db1
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/crspsu7/11724953/custom/server/11724953
% export ORACLE_HOME=/db/11.1/db2
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/crspsu7/11724953/custom/server/11724953

11. As the oracle user execute the postpatch script for the CRS Home:
% /u01/stage/crspsu7/11724953/custom/scripts/postpatch.sh -crshome /ocw/crs

12. As the oracle user execute the postpatch script for the Database/ASM Homes:
% /u01/stage/crspsu7/11724953/custom/server/11724953/custom/scripts/postpatch.sh -dbhome /db/11.1/asm
% /u01/stage/crspsu7/11724953/custom/server/11724953/custom/scripts/postpatch.sh -dbhome /db/11.1/db1
% /u01/stage/crspsu7/11724953/custom/server/11724953/custom/scripts/postpatch.sh -dbhome /db/11.1/db2

13. As root execute the postrootpatch script (this script will start the CRS stack):
# /u01/stage/crspsu7/11724953/custom/scripts/postrootpatch.sh -crshome /ocw/crs

14. As the oracle user repeat the following to validate inventory for ALL applicable homes:
% $CRS_HOME/OPatch/opatch lsinventory -detail -oh <home-path>

15. Start all instances, listeners and ASM on local node (CRS was started with the postrootpatch script):
% $ORACLE_HOME/bin/srvctl start instance -i <local instance name> -d <db name>
% $ASM_HOME/bin/srvctl start asm -n <local node>
% $CRS_HOME/bin/srvctl start nodeapps -n <local node>

16. Repeat steps 1-15 on each node in the cluster, one node at a time.

EXAMPLE: Applying a GI PSU patch manually


The example assumes the following:
  • Platform is Linux 64-bit
  • All Homes (GI and Database) are not shared
  • The Grid Infrastructure owner is grid
  • The Database owner is oracle
  • The Grid Infrastructure Home is /ocw/grid
  • Database Home1 is /db/11.2/db1
  • Database Home2 is /db/11.2/db2
  • All Homes are 11.2.0.2
  • ACFS is NOT in use on this cluster
Note: This example only covers the application of the patch itself. It does NOT cover the additional database or any other component-specific steps required for the patch installation. You must ALWAYS consult the patch README prior to attempting to install any patch.

1. Create an EMPTY directory to stage the GI PSU as the GI software owner (our example uses a directory named gipsu3):
% mkdir /u01/stage/gipsu3

Note: The directory must be readable and writable by root, grid and all database users.

2. Extract the GI PSU into the empty stage directory as the GI software owner:
% unzip -d /u01/stage/gipsu3 p12419353_112020_Linux-x86-64.zip

3. Verify that OPatch in ALL 11.2.0.2 homes that will be patched meets the minimum version requirement documented in the PSU README (see "How to find out the opatch version?"). If the version of OPatch in any one (or all) of the homes does not meet the minimum version required for the patch, OPatch must be upgraded in those homes before continuing (see "How do I install the latest OPatch release?").
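The README comparison in this step can be scripted. The sketch below is illustrative only: the helper and the example version numbers are assumptions, and the actual minimum version must always be taken from the patch README.

```shell
#!/bin/sh
# Hypothetical helper to compare an installed OPatch version against the
# minimum documented in the PSU README.

# Returns 0 (true) when version $1 is greater than or equal to version $2,
# comparing each dotted field numerically (e.g. 11.2.0.1.5 vs 11.2.0.1.3).
version_ge() {
    # Sort the two versions field-by-field; the smaller of the two must be
    # the required minimum ($2) for the check to pass.
    lowest=$(printf '%s\n%s\n' "$1" "$2" | sort -t. -n -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 | head -1)
    [ "$lowest" = "$2" ]
}

# Example usage (paths and the minimum version are assumptions):
# installed=$("$GI_HOME"/OPatch/opatch version | awk '/Version/ {print $3}')
# version_ge "$installed" 11.2.0.1.5 || echo "OPatch in $GI_HOME is too old"
```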

4. As grid user repeat the following to validate inventory for ALL applicable homes on ALL nodes:
% $GI_HOME/OPatch/opatch lsinventory -detail -oh <home-path>

Note: If any errors or inconsistencies are returned corrective action MUST be taken prior to applying the patch.

5. On the local node, use the "srvctl stop home" command to stop the database resources:
% $GI_HOME/bin/srvctl stop home -o /db/11.2/db1 -s /tmp/statefile_db1.out -n <local node>
% $GI_HOME/bin/srvctl stop home -o /db/11.2/db2 -s /tmp/statefile_db2.out -n <local node>
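The two stop commands above can be wrapped in a loop when there are several database homes. This is a sketch only: GI_HOME, the home list, and the statefile naming convention are assumptions mirroring this example, not a documented interface.

```shell
#!/bin/sh
# Illustrative wrapper for "srvctl stop home" over multiple database homes.

# Derive a per-home statefile path from the home's basename, matching the
# /tmp/statefile_db1.out style used in this example.
statefile_for() {
    printf '/tmp/statefile_%s.out\n' "$(basename "$1")"
}

# Example usage (homes and GI_HOME are assumptions from this example):
# for home in /db/11.2/db1 /db/11.2/db2; do
#     "$GI_HOME"/bin/srvctl stop home -o "$home" -s "$(statefile_for "$home")" -n "$(hostname)"
# done
```

Keep the generated statefile paths; they are passed back to "srvctl start home" in step 13.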

6. As root unlock the Grid Infrastructure Home as follows:
# export ORACLE_HOME=/ocw/grid
# $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/crs/install/rootcrs.pl -unlock ## execute this in Grid Infrastructure cluster, for Oracle Restart see the note below.

Note: If you are in an Oracle Restart environment, use the roothas.pl script instead of the rootcrs.pl script as follows:
# $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/crs/install/roothas.pl -unlock

7. Execute the prepatch script for the Database Homes as the oracle user:
% /u01/stage/gipsu3/12419353/custom/server/12419353/custom/scripts/prepatch.sh -dbhome /db/11.2/db1
% /u01/stage/gipsu3/12419353/custom/server/12419353/custom/scripts/prepatch.sh -dbhome /db/11.2/db2

8. As the grid user apply the patch to the local GI Home using opatch napply:
% export ORACLE_HOME=/ocw/grid
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/gipsu3/12419353
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/gipsu3/12419331

9. As the oracle user apply the patch to the local Database Homes using opatch napply:
% export ORACLE_HOME=/db/11.2/db1
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/gipsu3/12419353/custom/server/12419353
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/gipsu3/12419331

% export ORACLE_HOME=/db/11.2/db2
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/gipsu3/12419353/custom/server/12419353
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/gipsu3/12419331

10. Execute the postpatch script for the Database Homes as the oracle user:
% /u01/stage/gipsu3/12419353/custom/server/12419353/custom/scripts/postpatch.sh -dbhome /db/11.2/db1
% /u01/stage/gipsu3/12419353/custom/server/12419353/custom/scripts/postpatch.sh -dbhome /db/11.2/db2

11. As the root user execute the rootadd_rdbms script from the GI Home:
# export ORACLE_HOME=/ocw/grid
# $ORACLE_HOME/rdbms/install/rootadd_rdbms.sh

12. As root relock the Grid Infrastructure Home as follows (this will also start the GI stack):
# export ORACLE_HOME=/ocw/grid
# $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/crs/install/rootcrs.pl -patch ## execute this in Grid Infrastructure cluster, for Oracle Restart see the note below.

Note: If you are in an Oracle Restart environment, use the roothas.pl script instead of the rootcrs.pl script as follows:
# $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/crs/install/roothas.pl -patch

13. As the oracle user restart the database resources on the local node using the "srvctl start home" command:

% $GI_HOME/bin/srvctl start home -o /db/11.2/db1 -s /tmp/statefile_db1.out -n <local node>
% $GI_HOME/bin/srvctl start home -o /db/11.2/db2 -s /tmp/statefile_db2.out -n <local node>

14. As grid user repeat the following to validate inventory for ALL patched homes:
% $GI_HOME/OPatch/opatch lsinventory -detail -oh <home-path>

15. Repeat steps 1-14 on each node in the cluster, one node at a time.

16. If you applied the Database PSU to the Database Homes (as shown in this example), you must now load the Modified SQL Files into the Database(s) as follows:
Note: The patch README should be consulted for additional instructions!

% cd $ORACLE_HOME/rdbms/admin
% sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> QUIT

Common OPatch Failures and Troubleshooting

What files to review if opatch auto fails?

The key to OPatch Auto troubleshooting is understanding where and how its logs are generated. When "opatch auto" is invoked, it generates an opatchauto<timestamp>.log in the $ORACLE_HOME/cfgtoollogs directory of the ORACLE_HOME from which it was invoked. This log contains information about the execution of OPatch auto itself and is the starting point for troubleshooting. OPatch auto also invokes additional OPatch sessions, using the OPatch executable from the home being patched, to perform the prerequisite checks and to actually apply the patch. Each of these subsequent OPatch invocations generates a new opatch<timestamp>.log under $ORACLE_HOME/cfgtoollogs/opatch within the ORACLE_HOME that is being patched. These logs are often where the root cause of an OPatch auto failure will be found.
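The log locations described above can be triaged with a small helper that surfaces the most recent log in each directory. The helper is an illustrative sketch; the directory paths in the usage comments follow the text, with $ORACLE_HOME standing for whichever home was being patched.

```shell
#!/bin/sh
# Sketch: find the newest OPatch log for triage.

# Print the most recently modified file in a directory matching a pattern.
newest_log() {
    dir=$1; pattern=$2
    # ls -t (newest first) is adequate for log triage; pattern is left
    # unquoted so the shell expands the glob.
    ls -1t "$dir"/$pattern 2>/dev/null | head -1
}

# Example usage (paths as described in the text above):
# newest_log "$ORACLE_HOME/cfgtoollogs" 'opatchauto*.log'    # opatch auto driver log
# newest_log "$ORACLE_HOME/cfgtoollogs/opatch" 'opatch*.log' # per-home OPatch session log
```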

Should OPatch auto fail to restart the stack (noted in the error message), consult the rootcrs or roothas log (11.2 only) in $GRID_HOME/cfgtoollogs/crsconfig, as OPatch auto actually calls "rootcrs.pl" or "roothas.pl". Should an SR need to be opened in the event of an OPatch auto failure, it is recommended to collect the OPatch logs (discussed above) as well as the output of the diagcollection script as described in Document 330358.1, and provide this diagnostic data at the time of SR creation.

"opatch apply" or "opatch napply" failure if clusterware home is not unlocked

If "opatch napply" or "opatch apply" is executed without unlocking clusterware home first, the following will be reported:
ApplySession failed: ApplySession failed to prepare the system. ApplySession was not able to create the patch_storage area: /ocw/grid/.patch_storage/<patch#_date>
..
OPatch failed with error code 73

OR

chmod: changing permissions of `/ocw/grid/bin':
Operation not permitted
..
mv: cannot move `/ocw/grid/bin/oracle' to `/ocw/grid/bin/oracleO': Permission denied
..


The solution is to carefully follow the instructions in the README packaged with the patch, which explain how to use "opatch auto" (if applicable) or the manual opatch process.

OPatch reports: "Prerequisite check CheckSystemSpace failed"

On AIX, due to unpublished bug 9780505, opatch needs about 22GB of free disk space in $GRID_HOME to apply a GI PSU/Bundle, compared to about 4.4GB on other platforms. "opatch util cleanup -oh <home-path>" can be used to reclaim space in .patch_storage; refer to Document:550522.1 and Document:1088455.1 for a full explanation of this issue as well as other tips to work around it.
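A minimal pre-flight check for this space prerequisite can be sketched as follows, assuming POSIX df. The 22 GB figure is the AIX case from the text, and the GRID_HOME path in the usage comment is illustrative; take the actual requirement from your platform's README.

```shell
#!/bin/sh
# Sketch: check free space before applying a GI PSU/Bundle.

# Succeeds when the filesystem holding path $1 has at least $2 kilobytes
# free, using the POSIX-format df "Available" column.
has_free_kb() {
    avail=$(df -Pk "$1" | awk 'NR==2 {print $4}')
    [ "$avail" -ge "$2" ]
}

# Example (path and size are assumptions; AIX needs ~22GB per the text):
# has_free_kb /ocw/grid $((22 * 1024 * 1024)) || echo "not enough space in GRID_HOME"
```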

Common causes of "The opatch version check failed"

ZOP-49: Not able to execute the prereq. OPatch cannot inform if the patch satisfies minimum version requirement.
  • Incorrect patch location specified
  • OPatch version does not meet the requirements for the patch (specified in the README), you must upgrade OPatch.
  • The grid or database user cannot read the unzipped patch directory/files; check directory permissions

OPatch reports: "Patch nnn requires component(s) that are not installed in OracleHome"

These not-installed components are oracle.crs:<version>, oracle.usm:<version>

The patch is not applicable to the target home, e.g. applying 11.2.0.3 GI PSU1 to an 11.2.0.2 GI home will report this error; refer to Document 763680.1 for troubleshooting instructions.

Common causes of "The opatch Component check failed"

  • Incorrect patch location specified
  • The patch was unzipped into a non-empty directory
  • The proper version of the OPatch executable is not installed in the ORACLE_HOME being patched, or OPatch was not launched from that ORACLE_HOME
  • The current working directory is $ORACLE_HOME/OPatch of the ORACLE_HOME that is being patched
  • The $ORACLE_HOME/.patch_storage directory cannot be created; the solution is to create the directory manually as the corresponding grid or database user with permissions of 750
  • The listener is running from the database home; the solution is to shut down listeners running from the database home
  • The grid and database users are different; the workaround is to apply the patch manually
  • The patch is intended for another version, e.g. applying 11.2.0.2 GI PSU5 to an 11.2.0.3 GI home
Refer to Document 1169036.1 for troubleshooting instructions.

Common causes of "The opatch Conflict check failed"

  • A true patch conflict; the solution is to open an SR and request a merge patch
  • A Clusterware patch may report a conflict even if it is a superset of the currently installed patch. Normally OPatch should roll back the installed patch in favor of the superset, but in some rare cases OPatch errors out; the solution is to manually roll back the installed patch and apply the new one.

Common causes of "The opatch Applicable check failed"

ZOP-46: The patch(es) are not applicable on the Oracle Home because some patch actions are not applicable. All required components, however, are installed.

Prereq "checkApplicable" for patch 13653086 failed.
..
Copy Action: Desctination File "/u02/grid/11.2.0/bin/crsd.bin" is not writeable.
'oracle.crs, 11.2.0.2.0': Cannot copy file from 'crsd.bin' to '/u02/grid/11.2.0/bin/crsd.bin'

  • Some files in the target home are still locked - some processes are still running, or "rootcrs.pl -unlock" / "roothas.pl -unlock" has not been executed
  • Unzipped patch files in the stage area are unreadable by the grid or database user: "Copy Action: Source File "/patches/B202GIPSU4/12827731/files/bin/appvipcfg.pl" does not exists or is not readable". The solution is to unzip the patch as an OS user whose primary group is oinstall, for example the grid user.
  • Database Control (Enterprise Manager)/EM agent is not stopped: "Copy Action: Desctination File "/ocw/grid/11.2/bin/emcrsp.bin" is not writeable.". To stop it, as the ORACLE_HOME owner execute: $ <ORACLE_HOME>/bin/emctl stop dbconsole
  • A directory is provided when prompted "OPatch is bundled with OCM, Enter the absolute OCM response file path"; the solution is to enter the absolute OCM response file directory and filename
For other potential causes, review opatch logfile.  The OPatch log file can be found in $ORACLE_HOME/cfgtoollogs/opatch.
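Building on the advice above to review the OPatch log, a quick triage pass can grep the log for the common failure signatures discussed in this section (locked files, permission problems, unreadable stage area). The helper and its signature list are illustrative, not exhaustive.

```shell
#!/bin/sh
# Sketch: scan an OPatch log for common checkApplicable failure signatures.

scan_opatch_log() {
    # Print (with line numbers) the lines that usually point at a locked
    # home, a running process holding a file, or an unreadable patch stage.
    grep -n -E 'not writeable|Permission denied|not readable|does not exist' "$1"
}

# Example usage (log path is an assumption; pick the newest log in
# $ORACLE_HOME/cfgtoollogs/opatch):
# scan_opatch_log "$ORACLE_HOME"/cfgtoollogs/opatch/opatch*.log
```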

Common causes of "patch <patch-loc>  apply  failed  for home <ORACLE_HOME>"


OUI-67124:Copy failed from '/ocw/grid/.patch_storage/13540563_Jan_16_2012_03_31_27/files/bin/crsctl.bin' to '/ocw/grid/bin/crsctl.bin'...
OUI-67124:ApplySession failed in system modification phase... 'Copy failed from '/ocw/grid/.patch_storage/13540563_Jan_16_2012_03_31_27/files/bin/crsctl.bin' to '/ocw/grid/bin/crsctl.bin'...OUI-67124:NApply restored the home. Please check your ORACLE_HOME to make sure:

  • dbconsole is still running - it needs to be stopped prior to patching; as the database home owner, execute "<ORACLE_HOME>/bin/emctl stop dbconsole" to stop it
  • opatch Bug:12732495, refer to Document:1472242.1 for more details.
  • opatch Bug:14308442, which is closed as a duplicate of Bug:14358407, affects AIX only; refer to Document:1518022.1 for more details.
  • other opatch bugs - use the latest opatch to avoid known issues.
For other potential causes, review opatch logfile.  The OPatch log file can be found in $ORACLE_HOME/cfgtoollogs/opatch.

When applying online patch in RAC: Syntax Error... Unrecognized Option for Apply .. OPatch failed with error code 14

The readme template shipped with earlier online patches is wrong for RAC; refer to Document:1356093.1 for details.

opatch Fails to Rollback Online(Hot) Patch in RAC With oracle/ops/mgmt/cluster/NoSuchNodeException and error code 30

Refer to Document:1416823.1 for details.

opatch Fails to Rollback Online(Hot) Patch With Prerequisite check "CheckRollbackSid" and error code 30

Prerequisite check "CheckRollbackSid" failed.

Patch ID: 13413533

The details are:

There are no SIDs defined to check with the input SIDs.


Refer to Document:1422190.1 for details.

Common causes of "defined(@array) is deprecated at crs/crsconfig_lib.pm"

"opatch auto" reports the following warning message:
defined(@array) is deprecated at crs/crsconfig_lib.pm line 2149

The issue happens when applying 11.2.0.3.1 (Patch:13348650) with the latest opatch. It is being tracked via internal Bug:13556626 and Bug:13555176; the message can be safely ignored.

opatch auto Reports: The path "<GRID_HOME>/bin/acfsdriverstate" does not exist

"opatch auto" reports the following message:
The path "/ocw/grid/bin/acfsdriverstate" does not exist
The message can be ignored; refer to Document:1469969.1 for details.

Applying a [PSU] patch takes a very long time (hours) after "verifying the update"

opatch or "opatch auto" can take hours to finish after the following message is printed:
[Feb 7, 2013 9:27:18 PM] verifying 1009 copy files.
<< no more output after this; it can stall here for hours before eventually completing
This is due to Bug:13963741 which, at the time of this writing, is still being worked on by Development. The workaround is to unset the environment variable JAVA_COMPILER (set to NONE in affected environments); refer to Document:1541186.1 for more details.

opatch auto Reports: Not able to retreive database home information

"opatch auto" reports the following message:
Not able to retreive database home information
This happens when GI is down; the solution is to start GI and re-run the same command.
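Before re-running the command, the GI stack state can be confirmed from the "crsctl check crs" output. The sketch below counts the "is online" lines; the expectation of exactly four such lines is an assumption based on 11.2 crsctl output, so adapt it to what your release actually prints.

```shell
#!/bin/sh
# Sketch: confirm the GI stack is up before retrying "opatch auto".

gi_stack_is_up() {
    # Reads "crsctl check crs" output from stdin; 11.2 prints four
    # CRS-xxxx "... is online" lines when the full stack is healthy.
    [ "$(grep -c 'is online')" -eq 4 ]
}

# Example (GRID_HOME path is an assumption):
# "$GRID_HOME"/bin/crsctl check crs | gi_stack_is_up && echo "GI stack is up"
```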


New OPatch Features

To find out new features in opatch 11.2.0.3.2, refer to Document:1497639.1 

For searchability:
database home; RAC home; RDBMS home; GI home; DB home; ASM home
RDBMS_HOME; ASM_HOME; DB_HOME
GI user; ASM user; DB user; database user

REFERENCES

NOTE:337737.1 - Oracle Clusterware (CRS/GI) - ASM - Database Version Compatibility
NOTE:1089476.1 - Patch 11gR2 Grid Infrastructure Standalone (Oracle Restart)
NOTE:1272288.1 - 11.2.0.2.X Grid Infrastructure Bundle/PSU Known Issues
NOTE:1169036.1 - Applying GI PSU using "opatch auto" fails with "The opatch Component check failed"
NOTE:405820.1 - 10.2.0.X CRS Bundle Patch Information

NOTE:763680.1 - Opatch Error "UtilSession failed: Patch nnn requires component(s) that are not installed" When Applying CRS or Grid Infrastructure Merge, Bundle or PSU Patches
NOTE:1416823.1 - opatch Fails to Rollback Online(Hot) Patch in RAC With oracle/ops/mgmt/cluster/NoSuchNodeException and error code 30
NOTE:1356093.1 - Syntax error while applying and rolling back an online patch In RAC enivronment

NOTE:1064804.1 - Apply Grid Infrastructure/CRS Patch in Mixed Version RAC Database Environment
NOTE:1082394.1 - 11.2.0.1.X Grid Infrastructure PSU Known Issues
NOTE:1410202.1 - How to Apply a Grid Infrastructure Patch Before root script (root.sh or rootupgrade.sh) is Executed?
NOTE:1417268.1 - opatch report "ERROR: Prereq checkApplicable failed." when Applying Grid Infrastructure patch
NOTE:1422190.1 - opatch Fails to Rollback Online(Hot) Patch With Prerequisite check "CheckRollbackSid" failed and error code 30
NOTE:293369.1 - Master Note For OPatch
NOTE:810663.1 - 11.1.0.X CRS Bundle Patch Information
NOTE:1668630.1 - AIX: Apply PSU or Interim Patch Fails With Copy Failed as TFA Was Running
NOTE:854428.1 - Patch Set Updates for Oracle Products
NOTE:330358.1 - Oracle Clusterware 10gR2/ 11gR1/ 11gR2/ 12.1.0.1 Diagnostic Collection Guide


NOTE:1212703.1 - Grid Infrastructure Startup During Patching, Install or Upgrade May Fail Due to Multicasting Requirement
NOTE:1088455.1 - opatch CheckSystemSpace Fails With Error Code 73 While Applying GI PSU
