Thursday, July 8, 2021
Oracle E-Business Suite Recommended Performance Patches (Doc ID 244040.1)
The latest version of this document can be found in My Oracle Support Document "Oracle E-Business Suite Recommended Performance Patches (Document 244040.1)"
This document contains a list of performance-related patches that Oracle E-Business Suite Performance Group recommends should be applied.
There is a change log at the end of this document.
In This Document
This document is divided into the following sections:
Overview
Section 1: Database and Technology Patches
Section 2: Applications Technology Patches
Change Log
Note: To receive automatic notifications of updates, subscribe to the Hot Topics Notification feature in My Oracle Support, as described in Document 793436.2, How to use Hot Topics Email notification to subscribe to Support Product News, SRs, Bugs, etc. and events so that you Never Miss an Important Update.
Overview
You should go through these lists and identify the patches that are applicable to your environment and current patch level.
In addition to these patches, you should ensure that you have applied all the patches recommended in the relevant Oracle Database certification knowledge documents available on My Oracle Support.
Be aware of the following important points:
Patches may not be available on all platforms. If you are encountering a documented issue and a patch is not available on your platform, please contact Oracle Support.
See bugs and patch readmes for further information on the problem and solution.
Patches related to Upgrade are listed in "Outstanding Performance Bugs, Issues and Solutions during Upgrade (Document 2357810.1)".
Section 1: Database and Technology Patches
Most database and technology bugfixes and recommended default patches are listed in "Oracle E-Business Suite Release 12.2: Consolidated List of Patches and Technology Bug Fixes (Document 1594274.1)".
However, this section contains bugfixes that impact the performance of Oracle E-Business Suite Release 12.2 and that are not yet included in Document 1594274.1.
Unless otherwise noted, the patch number is the same as the bug number.
Product | Affected Versions | Bug No. | Description | My Oracle Support Document(s)
Oracle Database | 11.2.0.4 to 12.1.0.2 | Bug 17305006 | SEVERE PERFORMANCE SLOWDOWN WHEN RESULTS CACHE IS THRASHING | Document 17305006.8, Document 1588763.1
Oracle Database | 11.2.0.4 to 12.1.0.2 | Bug 22146267 | TRACKING BUG TO ADD DIAG PATCH PROVIDED IN 21953328 | Document 2097725.1
Oracle Database | 12.1.0.2 | Bug 26436717 | RESULT CACHE: "BYPASS OBJECTS" INCORRECTLY USED BY WAITING PROCESSES | Document 26436717.8, Document 2256587.1
Section 2: Applications Technology Patches
Module | Patch | Patch Description | Orig Bug (if different) | Further Description of Issue Resolved | My Oracle Support Document | Included Since
AD Patch 16400364:R12.AD.C INT1224.3:QREP:AIX:WIN:HPIPF:SOL:REPLICATE SEED DATA RUNNING FOR LONG TIME See Document 2003758.1 R12.AD.C.delta.6
AD Patch 17862026:R12.AD.C QREP1223.5:ADPATCH HANGS WHILE GENERATING CUSTOMALL.JAR IN NON-INTERACTIVE MODE
See Document 1987831.1
ADPATCH hangs while applying a patch containing jcopy actions in non-interactive mode on the Windows platform.
R12.AD.C.delta.7
AD Patch 19477748:R12.AD.C DURING 12.1.3 TO 12.2.3 UPGRADE, HIT PERFORMANCE ISSUE ON DBA_EDITIONING_VIEW_CO When generating the EV for the same table, the SQL query (SQL ID: 55rgk1744skh4) in AD_ZD_TABLE.GENERATE_EV takes time even when statistics on dictionary tables are present. R12.AD.C.delta.6, DBA CUP6
AD Patch 19543460:R12.AD.C R1222: STANDALONE READINESS REPORT AND GLOBAL STANDARDS COMPLIANCE CHECKER (GSCC) Bug 19566604 ADZDPMAN.sql long running in SECTION-8 (MANUAL FIX: Base Column Names may only use '#' as last character). R12.AD.C.delta.6
AD Patch 20208298:R12.AD.C ADOP CUTOVER VERY SLOW AFTER APPLICATION OF LATEST NOVEMBER TXK AND AD PATCHES CUTOVER_INDEXES API taking time in table manager. Full table scan on dictionary tables. R12.AD.C.delta.6
AD Patch 27098139:R12.AD.C Performance fix to adgrants.sql Bug 26612557 adgrants.sql long running. SQL "SELECT count(1) from dba_tab_privs where ...." executes a large number of times in a loop and is inefficient.
R12.AD.C.delta.11
Patch 29273993: AD CONSOLIDATED PATCH II ON TOP OF DELTA 10
AD Patch 28986462:R12.AD.C ACTUALIZE_ALL CAUSING CONCURRENCY LOCKS ACTUALIZE_ALL long running. Contention (concurrency events) seen in run edition during actualize_all on patch edition.
R12.AD.C.delta.11
Patch 29273993: AD CONSOLIDATED PATCH II ON TOP OF DELTA 10
BNE Patch 24411756:R12.BNE.C 1OFF:19954813:EXCEL INITIALIZATION RUNS FOR TOO LONG, CT HOPE IT COMPLETES ABOUT Bug 19954813 Export of items in PIM to Excel takes a long time. 12.2.7 RUP
FND Patch 16579895:R12.FND.C GETTING SESSION INFORMATION IS CONSUMING HIGH CPU Bug 16495843 A simple 'is trace enabled' function initializes several package variables on each call. Also, init code is called every time the package is initialized, even if the init code values are unnecessary (trace is not enabled). 12.2.4 RUP
FND Patch 17955567:R12.FND.C 1OFF:12.2.3:FND_FLEX_HIER_COMPILE_ALL.SQL RUNNING SLOW Changes for ER 10016213 moved LOV processing to the back end to pre-process all valid values and store them in a new table. This new table is populated when the hierarchy compiler is run. Because the compiler has to process all the data at one time, rather than based on responsibility, performance can be poor. 12.2.5 RUP
FND Patch 19001790:R12.FND.C SWITCHING RESPONSIBILITIES (TOP HAT) TAKES LONG TIME FOR SOME USERS
See Document 2000591.1
Long running SQL "SELECT .... FROM FND_RESPONSIBILITY_VL R, FND_USER_RESP_GROUPS U, FND_SECURITY_GROUPS_VL S WHERE ..."
12.2.5 RUP
FND Patch 19375417:R12.FND.C 1OFF:12.2.3:FND_FLEX_HIER_COMPILE_ALL.SQL RUNNING SLOW Process long running during patch, so moved to run as a concurrent request (in background). 12.2.5 RUP, 12.2.0 CUP6
FND Patch 19891697:R12.FND.C 1OFF:12.2:PERFORMANCE PROBLEMS RESULTS SET CACHE The FND_PROFILE package is making redundant use of the RESULT CACHE directive, filling up result cache space and causing high response times.
12.2.5 RUP
In:
Patch 31563978:R12.FND.C FND Recommended Patch Collection (8/2020)
Patch 26560435:R12.FND.C FND Recommended Patch Collection (12/2017)
Patch 25796137:R12.FND.C FND Recommended Patch Collection (4/2017)
FND Patch 20213516:R12.FND.C QRE1225.2:GSI: LONG RUNNING : FND :AFUPDFMT.SQL (2 HRS: 29 MINS) Bug 17001252
Long running if large amount of data in FND_LOBS table.
File stubbed. For R12, the FND_LOBS records should be correctly set and should not require the file_format to be updated/corrected or the index to be dropped and rebuilt.
12.2.5 RUP
In:
Patch 31563978:R12.FND.C FND Recommended Patch Collection (8/2020)
Patch 26560435:R12.FND.C FND Recommended Patch Collection (12/2017)
Patch 25796137:R12.FND.C FND Recommended Patch Collection (4/2017)
FND Patch 20751767:R12.FND.C LOCKING ON FND_LOGINS WHEN LOGGING IN AND OUT WITH MULTIPLE USERS
See:
Document 2078703.1
Document 1938090.1
Document 2079471.1
12.2.6 RUP
Supersedes:
Patch 20439180:R12.FND.C
Patch 18903648:R12.FND.C
FND Patch 21612876:R12.FND.C 1OFF:12.2.4:CONSOLIDATED PATCH/CROSS VALIDATION PERFORMANCE ISSUES Long running when there are large numbers (thousands) of CVR rules. Number of code combinations has an impact too. 12.2.6 RUP
In:
Patch 31563978:R12.FND.C FND Recommended Patch Collection (8/2020)
Patch 26560435:R12.FND.C FND Recommended Patch Collection (12/2017)
Patch 25796137:R12.FND.C FND Recommended Patch Collection (4/2017)
FND Patch 23186361:R12.FND.C PERFORMANCE ISSUES AFTER APPLYING PATCH 20085935
See:
Document 2301460.1
Document 2162364.1
12.2.6 RUP
FND Patch 25571587:R12.FND.C USER-LEVEL LOCKS ARE GETTING CREATED WHILE RUNNING LDT FILES These were often from the triggers used to mark changed records (e.g. on FND_MENU_ENTRIES). 12.2.9 RUP
FND Patch 25878814:R12.FND.C FND_FUNCTION COMPILE - PERFORMANCE ANALYSIS On Function compile_menu_all ( FND_FUNCTION ) 12.2.9 RUP
FND Patch 25911647:R12.FND.C 1OFF:12.2.4:PERFORMANCE ISSUE WITH CROSS VALIDATION ENTERING A DISTRIBUTION CHARGE
Bug 25911647
Bug 27697277
When entering a distribution charge code against a purchase order line. 12.2.9 RUP
FND Patch 28141361:R12.FND.C RESOURCEAUDIT ISSUES
Connection leak when enabling auditing in 12.2.7. Calls are made for each resource accessed, causing a large number of connections and rapidly exceeding limits.
See Document 2465067.1
12.2.9 RUP
FND Patch 28978298:R12.FND.C 1OFF:12.2.4:AFTER 27697277 CVR Validation hangs from client side
Forms hang when cross validation is enabled on the client side.
See Document 2492829.1
12.2.9 RUP
FND Patch 29389395:R12.FND.C AOLJ CODE NOT HONOURING CONN LABELS(IN WLS POOL MECHANISM) LEADING TO IMPROPER TRIGGERING OF SERVER SIDE INITIALIZATIONS See Document 2538442.1
12.2.9 RUP
In:
Patch 31563978:R12.FND.C FND Recommended Patch Collection (8/2020)
FND Patch 29810066:R12.FND.C 1OFF:29340625:CONNECTION LEAKS @ ORACLE.APPS.FND.SECURITY.RESOURCEAUDIT$RECORD.R Bug 29340625 @ ORACLE.APPS.FND.SECURITY.RESOURCEAUDIT$RECORD.RUN(RESOURCEAUDIT.JAVA:358) 12.2.9 RUP
FND Patch 30134567:R12.FND.C DO NOT STORE LAST SESSION LANGUAGE FOR GUEST USER Bug 29338316 See Document 2578726.1 12.2.10 RUP
FND Patch 30486363:R12.FND.C TRACE NOT GETTING GENERATED FROM APPLICATION IF TRACE_FILE_IDENTIFIER IS TOO LONG.
See:
Document 2700633.1
Document 2707712.1
12.2.10 RUP
FWK Patch 20784744:R12.FWK.C VO QUERY GETTING FIRED MULTIPLE TIMES FOR THE SAME BINDS Seen in PPR and on the apply action of one of the product pages. 12.2.6 RUP
FWK Patch 20810195:R12.FWK.C FWDPORT:20028688:PERFORMANCE ISSUE WITH PATCH 18388305
See Document 2044087.1
12.2.6 RUP
FWK Patch 21501307:R12.FWK.C RICH TABLE INTERACTIONS AFFECT THE PERFORMACE BY EXECUTING BLIND QUERY Views created by the personalization handler are causing the views panel of query beans to execute blind queries even when the query bean has a selective search criteria defined. 12.2.6 RUP
OWF Patch 23729834:R12.OWF.C 1OFF:12.2.3:[WORKFLOW BACKGROUND PROCESS] IT TAKES SOME HOURS TO RUN FNDWFBG
FNDWFBG is long running when parameter "Process Stuck = Y", due to the SQL query "SELECT ... FROM WF_ITEM_ACTIVITY_STATUSES WIASP, WF_ITEMS WI, WF_PROCESS_ACTIVITIES WPAP, WF_ACTIVITIES WAP WHERE ... "
12.2.9 RUP
TXK Patch 20300514:R12.TXK.C ADOP CUTOVER WONT HAPPEN IN MORE THAN 10 NODES MULTI-NODE
See Document 1986770.1
R12.TXK.C.DELTA.6
adop issues troubleshooting scripts | ADZDPATCH
[STATEMENT] Checking status of adzdpatch concurrent program
[STATEMENT] Status of adzdpatch:
[STATEMENT] The concurrent manager to process this request is currently deactivated.
Contact your system administrator to reactivate the concurrent manager to allow processing of this request.
==>> Can you share the current adop.log so the error message can be reviewed for any change?
==>> See which managers are actually running:
SQL> select CONCURRENT_QUEUE_NAME, MAX_PROCESSES, RUNNING_PROCESSES from FND_CONCURRENT_QUEUES where MAX_PROCESSES > 0 order by 1;
==>> Share output of
SQL> select count(*) from fnd_concurrent_programs where concurrent_program_name='ADZDPATCH';
SQL> select request_id,
status_code,
phase_code,
concurrent_program_name
from fnd_concurrent_requests f,
fnd_concurrent_programs fc
where status_code='D' and phase_code='C'
and f.concurrent_program_id=fc.concurrent_program_id
and concurrent_program_name like'%ADZDPATCH%';
SQL> select owner, count(*) from dba_objects where status = 'INVALID' and owner in ('APPS','APPLSYS') group by owner;
SQL> select object_name,object_type, owner, status from dba_objects where status = 'INVALID' and object_name like 'FND%';
SQL> select REQUEST_ID, REQUEST_DATE, PHASE_CODE, STATUS_CODE from Fnd_Concurrent_Requests where REQUEST_ID=<request_id>; ==>> (replace <request_id> with the request ID for the ADZDPATCH program)
SQL> select AD_ZD_ADOP.WAIT_FOR_CP_TO_RUN(55497979) from dual; ==>> This request ID is taken from the previous adop.log; replace it if it changes in the new adop.log.
Using the Online Patching Readiness Report in Oracle E-Business Suite Release 12.2 (Doc ID 1531121.1)
This document introduces the Global Standards Compliance Checker (GSCC) and Readiness Report, and outlines how it is used to help prepare for using Oracle E-Business Suite Release 12.2 with online patching, as described in the applicable version of Oracle E-Business Suite Upgrade Guide, available in Oracle E-Business Suite Documentation Web Library Release 12.2.
This document contains the following sections:
Section 1: Prerequisites for Oracle E-Business Suite Online Patching Readiness and GSCC Report Patch
Section 2: Using the Global Standards Compliance Checker (GSCC) and Readiness Report
Section 3: Check for Data Dictionary Corruption / Time Stamp Mismatch
Section 4: Run Script to Reduce Upgrade Time
Section 5: Patch Readme
References
There is a change log at the end of this document.
Section 1: Prerequisites for Oracle E-Business Suite Online Patching Readiness and GSCC Report Patch
You need to start by identifying and applying the correct Online Patching Readiness and GSCC Report patch for the codelevel from which you are upgrading.
Apply whichever of the following patches is relevant to your Oracle E-Business Suite release.
Oracle E-Business Suite Release | Online Patching Readiness and GSCC Report Patch
12.2.3 or higher | Patch 31026891:R12.AD.C
12.1 | Patch 31026891:R12.AD.B
12.0 | Patch 31026891:R12.AD.A
11i | Patch 31026891
The patch delivers a new set of Online Patching Development Standards for Oracle E-Business Suite Release 12.2, plus a set of reports that will help you analyze the dependencies between database objects, as specified by the new rules used by the Edition-Based Redefinition (EBR) feature introduced in Oracle Database Release 11gR2.
You can then proceed to use the Global Standards Compliance Checker (GSCC) and Readiness Report, as described in Section 2 below.
Section 2: Using the Global Standards Compliance Checker (GSCC) and Readiness Report
This patch delivers scripts to automatically check that custom code complies with Oracle E-Business Suite online-patching coding standards. The implementation of online patching in Oracle E-Business Suite Release 12.2 relies on the Oracle Database 11gR2 EBR feature, and adds a new logical view over the database objects in Oracle E-Business Suite. Access to these database objects must be via the logical layer, and new coding standards help to ensure that code does this correctly. The implementation of the logical layer is such that the majority of application code already follows the new standards: however, this patch delivers scripts to scan for and identify many compliance issues that may exist.
The Global Standards Compliance Checker (GSCC) delivered in this patch consists of the main engine script, $FND_TOP/bin/gscc.pl, plus a variety of enforcement code (EFC) modules in $FND_TOP/perl/GSCC/OpenEFC that check for common standards issues.
Section 3: Check for Data Dictionary Corruption / Time Stamp Mismatch
If not already applied, apply the correct Online Patching Readiness and GSCC Report Patch for the codelevel from which you are upgrading. (Refer to Section 1 and Section 2 of this document.)
3.1 Check for Data Dictionary Corruption
Run the $AD_TOP/patch/115/sql/adzddtsfix.sql script, following the usage instructions provided.
Run the $AD_TOP/sql/ADZDDBCC.sql script to identify whether logical data dictionary corruption is present.
If no corruption is found, proceed with the upgrade.
If corruption is present, follow Step 3.2 below.
3.2 Fix Data Dictionary Corruption (Conditional)
Note: Follow these steps only as a fix when logical data dictionary corruption (missing parent) is present.
Copy the script $AD_TOP/patch/115/sql/adzddmpfix.sql to the database tier.
On the database tier, connect to the database as SYSDBA.
Note: If you are on Oracle E-Business Suite Release 12.1.3 or Release 12.2 with a multitenant architecture, connect to the database as SYSDBA by running the following commands, setting ORACLE_PDB_SID to <PDB_NAME>, the pluggable database name for your Oracle E-Business Suite database:
$ source <CDB_NAME>_<hostname>.env
$ export ORACLE_PDB_SID=<PDB_NAME>
$ sqlplus "/ as sysdba"
Run the $AD_TOP/patch/115/sql/adzddmpfix.sql script.
On the application tier, run the $AD_TOP/sql/ADZDDBCC.sql script again to identify whether data dictionary corruption is still present.
If no corruption is found, proceed with the upgrade or patching cycle.
If corruption is still present, proceed to Step 3.3 below.
3.3 Additional Actions for Data Dictionary Corruption (Conditional)
Note: Follow these steps only when data dictionary corruption is present after following the steps in 3.2 above.
On the database tier, go to the $ORACLE_HOME/rdbms/admin directory.
Run the utlirp.sql script, following the usage instructions provided.
Run the utlrp.sql script, following the usage instructions provided.
On the application tier, run the $AD_TOP/sql/ADZDDBCC.sql script again to identify whether data dictionary corruption is still present.
If no corruption is found, proceed with the upgrade or patching cycle.
If corruption is still present, contact Oracle Support and request a bug to be logged.
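The check/fix/re-check escalation across Sections 3.1 through 3.3 can be sketched as a simple loop. This is an illustration only: the callables are hypothetical stand-ins for running ADZDDBCC.sql (the check) and the fix scripts, not Oracle tooling.

```python
# Illustrative sketch only: the escalation order of Sections 3.1-3.3.
# corruption_present() stands in for running ADZDDBCC.sql; the actions
# stand in for adzddmpfix.sql and then utlirp.sql/utlrp.sql.

def resolve_dictionary_corruption(corruption_present, escalation_steps):
    """corruption_present() -> True while ADZDDBCC.sql still reports corruption.
    escalation_steps: ordered (name, action) pairs to try in sequence."""
    if not corruption_present():
        return "proceed with upgrade"            # 3.1: no corruption found
    for name, action in escalation_steps:
        action()                                 # run the next fix script
        if not corruption_present():             # re-run ADZDDBCC.sql
            return f"fixed by {name}; proceed"
    return "contact Oracle Support"              # 3.3: still corrupt
```

The point of the sketch is simply that ADZDDBCC.sql is re-run after every fix step, and Oracle Support is contacted only when all steps are exhausted.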
Section 4: Run Script to Reduce Upgrade Time
To reduce the overall upgrade time, you can optionally run the ADZDMVREFRESHNEEDED.SQL script before starting the 12.2.0 upgrade driver.
Section 5: Patch Readme
==============================================================================
R1222: STANDALONE READINESS REPORT AND GLOBAL STANDARDS COMPLIANCE CHECKER (GSCC)
==============================================================================
Update - 31026891
Product - Applications DBA
Release - R12
Platform - Generic Platform
Built - AUG-03-2020 06:27:15
Instructions For Applying This Patch
==============================================================================
Preparation Tasks
==============================================================================
The tasks in this section can be completed without taking any Applications
services or users offline.
There are no tasks to be performed in this section.
Pre-install Tasks
==============================================================================
For 12.0.X / 12.1.X / pre-upgrade patches (using adpatch), you must
shut down all Application tier services before performing the tasks in this
section.
For 12.2.X patches (using adop), you can perform the tasks in this
section without shutting down the Application tier services.
There are no tasks to be performed in this section.
Apply The Patch
==============================================================================
1. Apply patch
This patch contains the following unified driver file to be applied with
ADOP (AD Online patch):
u22200487.drv (This is the unified driver)
Post-install Tasks
==============================================================================
For 12.0.X / 12.1.X patches, you must complete the tasks in this
section before starting up Application tier services.
For 12.2.X patches, you may complete the tasks in this section at
any time after the update, without taking any services or users offline.
There are no tasks to be performed in this section.
Finishing Tasks
==============================================================================
You may complete the tasks in this section at any time after the update,
without taking any services or users offline.
There are no tasks to be performed in this section.
Additional Information
==============================================================================
There are no tasks to be performed in this section.
==============================================================================
Description
==============================================================================
Oracle E-Business Suite Online Patching Readiness and GSCC Report Patch
=======================================================================
Oracle E-Business Suite Release 12.2 includes a capability to apply
patches online. This feature relies on the Edition Based Redefinition
feature of the Oracle Database introduced in Release 11gR2. This database
feature added some new rules for implementing dependencies between database
objects. The purpose of this patch is to deliver a set of reports that will
help you analyze these dependencies as well as a new set of Online Patching
Development standards.
ADZDPSUM.sql
Lists schemas containing objects that reference EBS objects that are not
editioned. If the schemas are registered with the application, the Online
Patching Enablement patch will enable editioning on those schemas, avoiding
many EBR rule violations (non-editioned objects referencing editioned
objects). This report also provides instructions on how to register your
custom schemas; you are advised to re-run the script after registering your
schemas in order to reduce the number of violations in the next report.
ADZDPMAN.sql
Lists objects that violate EBR standards and that require manual intervention
in order to address the violations. It groups the objects per violation type
and provides instructions on how to address those violations. Make sure you
address all the violations in this report before proceeding with the next
reports.
ADZDPAUT.sql
Lists objects that violate Online Patching Enablement standards. The objects
in this report will be adjusted automatically by the Online Patching
Enablement patch. This report is provided for information purposes only, no
action is required on the output of this report.
ADZDDBCC.sql
Lists objects that violate Online Patching Development standards and that
require manual intervention in order to address the violations. It groups
the objects per violation type and provides instructions on how to address
those violations.
Apply patch
===========
1. Apply E-Business Suite:Online Patching Readiness and GSCC Report patch
31026891 applicable to your EBS instance's release version
2. Once it is applied, AutoPatch will copy the following SQL files from the
   patch to the $AD_TOP/sql/ directory:
   $AD_TOP/sql/ADZDPSUM.sql
   $AD_TOP/sql/ADZDPAUT.sql
   $AD_TOP/sql/ADZDPMAN.sql
   $AD_TOP/sql/ADZDDBCC.sql
   $AD_TOP/sql/ADZDPCUST.sql
STEP#1: Run $AD_TOP/sql/ADZDPSUM.sql
====================================
This script should be run by SYSTEM schema of ORACLE database.
For example:
sqlplus system/<system_password>@DB @$AD_TOP/sql/ADZDPSUM.sql
OR
sqlplus system@DB @$AD_TOP/sql/ADZDPSUM.sql
In the latter case, the SQL script will prompt for the SYSTEM schema password.
The execution time of this script varies from machine to machine and may take
approximately 2-3 hours to complete.
Note:
The SQL script only queries the data dictionary; it does not query any
customer tables or perform any data dictionary updates.
Make sure you review the instructions in the report, perform the
recommended actions and re-run the report until you have no more pending
actions.
STEP#2 and STEP#3:
==================
In the same way as STEP#1 was performed for @$AD_TOP/sql/ADZDPSUM.sql,
repeat it for
@$AD_TOP/sql/ADZDPMAN.sql and
@$AD_TOP/sql/ADZDPAUT.sql
STEP#4: Run ADZDDBCC.sql
========================
This script should be run by the APPS schema of E-Business Suite.
For example:
sqlplus APPS/<apps_password>@DB @$AD_TOP/sql/ADZDDBCC.sql
NOTE: Currently these scripts share common object names, and hence should be
run sequentially.
Review generated report
=======================
- The ADZDPSUM.sql script generates the report file adzdpsum.txt. Review
  each section of this report carefully and take appropriate action.
- Review adzdpman.txt in detail and follow the instructions given. You may
  also review adzdpaut.txt (optional), but those violations will be fixed
  automatically when your instance is enabled for Online Patching.
- Review the database compliance checker report adzddbcc.lst
Global Standards Compliance Checker (GSCC)
==========================================
This patch delivers scripts to automatically check that custom code complies
with E-Business Suite (EBS) online-patching coding standards.
The implementation of Online Patching in Oracle E-Business Suite Release 12.2
relies on the Oracle Database 11gR2 Editioning feature and adds a new logical
view over the database objects in Oracle E-Business Suite. Attempts to access
these database objects must go through the logical layer and new coding
standards help to ensure that code does this correctly. The implementation of
the logical layer has been done such that the majority of application code
already follows the new standards; however, this patch delivers scripts to scan
and identify many compliance issues if they exist.
The Global Standards Compliance Checker (GSCC) delivered in this patch consists
of the main engine script $FND_TOP/bin/gscc.pl and a variety of enforcement
code (EFC) modules in $FND_TOP/perl/GSCC/OpenEFC that check for common
standards issues.
The following table lists the messages that can appear in the output of the
GSCC script along with a reference to the applicable Development Standards
sections which are included at the end of this file.
GSCC Error Associated Coding Standard Standards Section
---------- -------------------------- -----------------
File.Sql.80 VPD Policy must be on the Editioning View or VPD
Table Synonym, not the table.
File.Gen.34 Table Name must not use '#' character. Table (Ordinary)
File.Gen.35 Table must be owned by an EBS product schema Table (Ordinary)
and not APPS.
File.Gen.36 Base Column Name may only use '#' as the last Table (Ordinary)
character.
File.Xdf.4 Base Column name should be 28 bytes or less. Table (Ordinary)
File.Gen.37 Column Type must be a built-in type or a Table (Ordinary)
user-defined type owned by a non-editioned user.
File.Xdf.4 Column Type must not be LONG or LONG RAW. Table (Ordinary)
File.Xdf.4 Column Type should not be ROWID Table (Ordinary)
File.Gen.41 Query/DML statements must access Tables via the Table (Ordinary)
Table Synonym or Editioning View.
File.Gen.38 Do not modify application-managed tables in an Table (Ordinary)
Online Patch.
File.Sql.81 Patch the table definition using ODF or XDF. Table (Ordinary)
File.Gen.39 Index Name must contain an underscore ('_'). Index
File.Sql.81 Deliver constraint definition using ODF or XDF. Integrity Constr.
File.Sql.82 Deliver Materialized View using XDF. Materialized View
Using GSCC
==========
The Global Standards Compliance Checker (GSCC) delivered in this patch consists
of the main engine script $FND_TOP/bin/gscc.pl and a variety of standards
enforcement code (EFC) modules in $FND_TOP/perl/GSCC/OpenEFC/ that check for
common standards issues.
After applying this patch to install the GSCC code, source the applications
environment file and then run the GSCC on a set of files under a directory tree
like this:
cd /home/yourdir
$FND_TOP/bin/gscc.pl -f '/home/yourdir/your-code/*'
In this example, gscc.pl will check all of the files located under the
your-code/ directory using the EFC modules located in
$FND_TOP/perl/GSCC/OpenEFC/ and generate a report named gscc-out.log in the
current working directory (/home/yourdir/ in this example). Invoking
gscc.pl without arguments will print a usage message with additional
information.
Reading GSCC Output
-------------------
GSCC output looks like this example (line numbers added here for reference):
01: ---------------------------------------------------------------------------
02: GSCC Compliance Test Results:
03: ---------------------------------------------------------------------------
04: File: /home/yourdir/your-code/yourfile.xdf
05:
06: Errors (3):
07: File.Gen.34 (Code Reviewers: CODE_REVIEWER)
08: 341 - Table Name must not use '#' character.
09: File.Gen.35 (Code Reviewers: CODE_REVIEWER)
10: 340 - Table must be owned by an EBS product schema, not APPS.
11: File.Gen.36 (Code Reviewers: CODE_REVIEWER)
12: 387, 443, 518 - Base Column Name may only use '#' as last character
13:
14: ---------------------------------------------------------------------------
15: GSCC Totals:
16: Passes: 19
17: Fails: 3
18: Warnings: 0
19: Under Review: 0
20: GSCC Errors: 0
21:
22: --* Failed GSCC Compliance Test with Code Review Failures *--
Lines 07:, 09:, and 11: in the example report name the coding standards that
GSCC identified as potential problems in the file being checked (line 04:,
yourfile.xdf in this example).
Lines 08:, 10:, and 12: in the report then give the specific lines within
yourfile.xdf (lines 341, 340, 387, 443, and 518) where potential standards
compliance problems were found.
To correct the errors shown in this example, the table being created on line
341 of yourfile.xdf would need to be renamed to remove the '#' character from
the table name, the table would need to be created in an EBS schema other than
APPS, and the table columns created on lines 387, 443, and 518 of yourfile.xdf
would need to be renamed to remove the '#' character from their names.
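Because gscc-out.log is plain text in a fixed layout, its totals can be tallied programmatically across many report files. The following Python sketch is an illustration only (it is not part of the patch); it parses the "GSCC Totals" section in the format shown above.

```python
import re

# Illustration only (not part of the patch): extract the counters from
# the "GSCC Totals" section of a gscc-out.log style report.
def gscc_totals(report_text):
    totals = {}
    for label in ("Passes", "Fails", "Warnings", "Under Review", "GSCC Errors"):
        # Each counter appears on its own line, e.g. "  Fails: 3".
        m = re.search(rf"^\s*{re.escape(label)}:\s*(\d+)", report_text, re.MULTILINE)
        if m:
            totals[label] = int(m.group(1))
    return totals
```

Run against the example report above, this would return Passes: 19, Fails: 3, and zeros for the remaining counters.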
Sections of the Database Object Development Standards
-----------------------------------------------------
The following sections of the Database Object Development Standards are
checked by the automated GSCC standards enforcement.
Virtual Private Database (VPD) Policy
-------------------------------------
VPD Definition Standards:
VPD Policy must be on the Editioning View or Table Synonym, not the table.
VPD Dynamic DDL Standards: No special considerations
VPD Online Patching Compliance Standards: No special considerations.
VPD Tip: You can add or drop VPD policies by calling the
FND_ACCESS_CONTROL_UTIL package from a SQL script (&phase=plb).
For more information on Oracle Virtual Private Database, see: Oracle Database
Security Guide 11g Release 2 (11.2).
Table (Ordinary)
----------------
An Ordinary Table is created/altered/dropped during application patching. In
contrast, a Dynamic Table is created/altered/dropped by application runtime.
The standards in this section apply only to Ordinary Tables.
In order to implement effectively-editioned support for ordinary tables, the
online patching technology installs and maintains a new Editioning View layer
over each table. The editioning view maps logical column names used by the
application to the actual storage columns used to hold those attributes in each
edition. Developers must follow new procedures in order to create and alter
ordinary tables in a development database, and patch those changes to other
systems.
Table Definition Standards:
An Ordinary Table is a table that is used by Oracle E-Business
Suite runtime, and patched via Online Patching.
Table Name must not use '#' character. (GSCC File.Gen.34)
Table Name must be unique within the first 29 bytes.
Table must be owned by an Oracle E-Business Suite product
schema or custom product schema, not APPS. (GSCC File.Gen.35)
Base Column Name may only use '#' as last character.
(GSCC File.Gen.36)
Base Column Name should be 28 bytes or fewer.
(GSCC File.Xdf.4)
Note: Online Patching currently does not support patching columns that
violate this standard. If a violating column must be patched, then it
must be replaced with a compliant column (shorter name) as part of the
patch. Application code will need to be updated to reference the new
shorter column name.
Column Type must be a built-in type or a user-defined type owned by a
non-editioned user. (GSCC File.Gen.37)
Column Type must not be LONG or LONG RAW. For more information on this
standard, see: Oracle Database SecureFiles and Large Objects
Developer's Guide 11g Release 2 (11.2). Also see: LONG to CLOB
Conversion Procedures. (GSCC File.Xdf.4)
Column Type should not be ROWID. (GSCC File.Xdf.4)
ROWID references can become invalid if the target table is patched,
loaded, or rebuilt. It is not safe to store ROWID references across an
online patching boundary.
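The naming rules in the Table Definition Standards above can be restated in a few lines of code. This Python fragment is an illustration of the rules only (File.Gen.34, File.Gen.36, and the File.Xdf.4 length limit); GSCC itself is Perl-based, and this is not the GSCC implementation.

```python
# Illustration of the naming standards listed above; not the GSCC code.

def table_name_ok(name):
    # Table Name must not use the '#' character (File.Gen.34).
    return "#" not in name

def base_column_name_ok(name):
    # '#' may appear only as the last character (File.Gen.36), and the
    # name should be 28 bytes or fewer (File.Xdf.4).
    return "#" not in name[:-1] and len(name.encode("utf-8")) <= 28
```

For example, FND_PROFILE_OPTIONS passes the table check, BAD#TABLE fails it, and a column name may end in '#' but not contain it elsewhere.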
Table Usage Standards:
Query/DML statements must access Tables via the Table Synonym or Editioning
View. (GSCC File.Gen.41)
If you query, display, or store table column names in your application
runtime code, you should use Logical Column Names rather than Physical
Column Names in most cases.
Follow the Logical versus Physical Column Guidelines for runtime
application code.
Warning: Some dictionary views (such as ALL_TAB_COLUMNS) contain
information for both Logical and Physical table columns, depending on
whether you query the editioning view or the base table data. Consult
Logical versus Physical Column Guidelines for details.
Review the section Oracle E-Business Suite Objects that Reference
ALL_TAB_COLUMNS or DBA_TAB_COLUMNS and ensure that your query is
getting the correct logical or physical column information for your
purposes.
DDL statements such as TRUNCATE will not work on an APPS table synonym that
points to an editioning view. To truncate a table, you must supply the
actual base table (owner.table_name) in the truncate command.
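For example, using a hypothetical table name (substitute your actual owner and table name), the TRUNCATE rule above looks like this:

```sql
-- Wrong: the APPS synonym resolves to the editioning view,
-- and TRUNCATE is not supported on views.
truncate table FND_EXAMPLE_TABLE;

-- Correct: name the actual base table as owner.table_name.
truncate table APPLSYS.FND_EXAMPLE_TABLE;
```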
Table Dynamic DDL Standards:
Application-managed tables are tables that are created and maintained by
application logic during normal application runtime:
Application-managed (dynamic) tables must not have an Editioning View.
Do not modify application-managed tables in an Online Patch.
(GSCC File.Gen.38)
Ordinary tables are created and maintained by Online Patching (and will
have an editioning view).
If the application logic modifies an ordinary table at runtime, it must
use the AD_DDL interface to execute the dynamic DDL.
Do not modify ordinary tables in the Run Edition while a Patch Edition
exists.
Table Online Patching Compliance Standards:
Patch the table definition using ODF or XDF.
(GSCC File.Sql.81)
(Automatic) The table will be covered by an editioning view of the
name: substrb(table_name, 1, 29)||'#'
(Automatic) The editioning view will map each logical column name to
the latest version of that column.
Example: Table "FND_PROFILE_OPTIONS" has an editioning view called
"FND_PROFILE_OPTIONS#"
Do not update existing columns in a way that is incompatible with the
running application. Instead, create Revised Columns and store updated data
in the Revised Columns:
A Revised Column Name has the form:
&lt;base_column_name&gt;#&lt;version_tag&gt;
Version_tag is a string of the form: [0-9A-Z]+
Version tags are compared using the SQL '>' operator. A "greater"
tag is a later version.
Example: Column "AMOUNT" is replaced by revised
column "AMOUNT#1"
Example: Column "AMOUNT#1" is replaced by revised
column "AMOUNT#2"
Populate a Revised Column using a Forward Cross-Edition Trigger (FCET).
The FCET must be owned by APPS.
The FCET name should be of the form
&lt;table_name&gt;_F&lt;version_tag&gt;.
Example: "FND_PROFILE_OPTIONS_F3"
The FCET must be created with the "disable" option.
The FCET for the second and subsequent changes to a given table must
specify that it "follows &lt;name of previous FCET&gt;" in the trigger
definition.
For information on the FOLLOWS clause, see: Oracle Database PL/SQL
Language Reference.
The FCET must be created using a DBDRV command that executes the
create script in phase=ccet.
The FCET must be applied using a DBDRV command that calls
AD_ZD_TABLE_APPLY in phase=ccet.
Tip: Use the Forward Cross-Edition Trigger template.
Do not drop an existing table until the Cleanup phase of patch execution.
For an example of a deferred drop, see: Code Example of a Deferred Drop.
Do not drop an existing column. Columns that are replaced by new revised
columns will be dropped automatically at cleanup. Dropping a logical column
is not supported.
Do not rename an existing table.
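Put together, the revised-column rules above can be sketched as follows. This is a hypothetical example (the table, column, and trigger names are invented, and it is not the official Forward Cross-Edition Trigger template):

```sql
-- Assume logical column AMOUNT on APPLSYS.FND_EXAMPLE_TABLE must be revised.
-- A revised column AMOUNT#1 has been added, and a disabled forward
-- cross-edition trigger populates it from the old column.
create or replace trigger FND_EXAMPLE_TABLE_F1
  before insert or update on APPLSYS.FND_EXAMPLE_TABLE
  for each row
  forward crossedition
  disable
begin
  -- transform the old value into the revised column
  :new.AMOUNT#1 := :new.AMOUNT;
end;
/
```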
For more information on guidelines for managing tables, see: Oracle Database
Administrator's Guide 11g Release 2 (11.2). For more information on using XDF
(XML Definition File) features, see: XML Definition File Utility.
Index
-----
Index Definition Standards:
Index Name must contain an underscore ('_'). (GSCC File.Gen.39)
The Unique Index on a Seed Data Table must include ZD_EDITION_NAME.
Note: This will be implemented automatically when you call
"ad_zd_seed.upgrade" on your seed data table, but if you add a new unique
index to an existing seed data table you will need to include the
ZD_EDITION_NAME column in your index definition.
The Unique Index on a Seed Data Table should have at least one not-null
column besides ZD_EDITION_NAME.
Note: If the unique index has all nullable columns, then we expect each row
in the table to have at least one non-null column value for the indexed
columns. You must ensure that this is true as part of the Oracle E-Business
Suite 12.2 upgrade (select rows where all indexed columns are null and
either delete or update as needed to meet this standard).
Index Key Length should be less than 3125 bytes.
Index Key Length is the sum of the column lengths for each column in the
index, plus one byte for each column.
If the Index Key Length is greater than 3125 bytes, then the index cannot
be revised using "online" index definition, and a full table lock will be
held during index revision.
A function-based index must not reference editioned Oracle E-Business Suite
objects (built-in database functions such as "UPPER()" are acceptable).
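Two of the rules above can be checked with dictionary queries. A sketch, using hypothetical table and column names; adjust the owner, table, and columns for your case:

```sql
-- Indexes whose key length (sum of column lengths plus one byte per column)
-- reaches 3125 bytes, and so cannot be revised online.
select index_owner, index_name, sum(column_length + 1) key_length
from dba_ind_columns
where table_name = 'FND_EXAMPLE_TABLE'
group by index_owner, index_name
having sum(column_length + 1) >= 3125;

-- Seed data rows where all columns of a unique index (other than
-- ZD_EDITION_NAME) are null, violating the nullability rule.
select rowid
from APPLSYS.FND_EXAMPLE_SEED
where EXAMPLE_CODE is null
and EXAMPLE_TYPE is null;
```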
Index Dynamic DDL Standards:
"CREATE INDEX ... ON ..." statement must specify the fully qualified table
name, not the APPS table synonym.
Good: create index SOME_TABLE_N1 on SCHEMA.SOME_TABLE ...
Bad: create index SOME_TABLE_N1 on SOME_TABLE ...
Index Online Patching Compliance Standards:
Deliver the index definition using ODF or XDF.
(Automatic) New or revised indexes will be initially created disabled and
with an alternate name. These indexes will be enabled and renamed during
cutover processing.
Do not drop an existing index until the Cleanup phase of patch execution.
For an example of a deferred drop, see: Code Example of a
Deferred Drop.
For more information on indexes, see: Oracle Database Concepts 11g
Release 2 (11.2).
For more information on using XDF (XML Definition File) features, see:
XML Definition File Utility.
Integrity Constraint
--------------------
Integrity Constraint Definition Standards:
Constraint Name must contain an underscore ('_').
Foreign Key Constraint should not be created, and must not reference
a Seed Data Table.
Integrity Constraint Dynamic DDL Standards:
If you dynamically create a primary or unique constraint against a Seed
Data Table, the constraint must include the ZD_EDITION_NAME column.
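As a sketch of the rule above (hypothetical table and constraint names), a dynamically created unique constraint on a seed data table would include ZD_EDITION_NAME:

```sql
-- ZD_EDITION_NAME must be part of the unique key so that each edition's
-- copy of a seed data row remains unique.
alter table APPLSYS.FND_EXAMPLE_SEED
  add constraint FND_EXAMPLE_SEED_U1
  unique (EXAMPLE_CODE, ZD_EDITION_NAME);
```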
Integrity Constraint Online Patching Compliance Standards:
Deliver constraint definition using ODF or XDF. (GSCC File.Sql.81)
New or revised constraints will be initially created as disabled, and
will be enabled during Cutover.
For more information on integrity constraints, see: Oracle Database
Concepts 11g Release 2 (11.2).
For more information on using XDF (XML Definition File) features, see:
XML Definition File Utility.
Materialized View (MV)
----------------------
The Materialized View is a non-editioned object type, and therefore a
materialized view cannot directly reference editioned objects. To avoid this
limitation, Oracle E-Business Suite Online Patching technology implements a new
Effectively Editioned Materialized View compound object. Application developers
create and maintain the Materialized View Definition (query) in an ordinary
view. The Online Patching technology then automatically maintains a
corresponding Materialized View Implementation that is legal for editioned
databases.
MV Definition Standards:
A Materialized View Name must be unique within the first 29 bytes.
A Materialized View Definition must be stored in an ordinary view
called MV_NAME||'#'.
Create or replace the Materialized View Definition as an ordinary view
called mv_name||'#'.
Test the MV Definition for accuracy before generating the MV
Implementation.
For example:
create or replace view FND_EXAMPLE_MV# as select ... ;
select * from fnd_example_mv#;
The Materialized View Implementation is automatically generated from the MV
Definition using the AD_ZD_MVIEW.UPGRADE procedure.
The syntax is: exec ad_zd_mview.upgrade('&lt;mv_owner&gt;', '&lt;mv_name&gt;')
Do not attempt to directly create or replace the Materialized View
Implementation Query. To recreate an MV Implementation, call the
AD_ZD_MVIEW.UPGRADE procedure.
A Materialized View Definition must specify a column alias for each item in
the select list.
Failure to specify a column alias may cause the error
ORA-00998 "must name this expression with a column alias"
Example: change select sum(EMP.SALARY), ...
to select sum(EMP.SALARY) SUM_EMP_SALARY, ...
A Materialized View Query must not reference editioned PL/SQL functions.
If the MV definition references an editioned PL/SQL function, the MV
Implementation will fail to generate and the MV will be unusable.
For examples of replacing PL/SQL function calls with equivalent SQL in
Materialized Views, see: Examples of SQL Replacements for PL/SQL
Functions.
A Materialized View should use 'REFRESH FORCE' instead of 'REFRESH
FAST'. The 'FORCE' option allows the MV to fall back to using a complete
refresh in situations where the fast refresh is not possible.
See: Oracle Database SQL Language Reference 11g Release 2 (11.2) for more
information on the "REFRESH FORCE" option.
If the MV Implementation content must be automatically refreshed after
patching, include the '/*AUTOREFRESH*/' comment tag in the MV Definition
query.
Do not specify the /*AUTOREFRESH*/ tag for large MVs that will take a
long time to refresh. For these cases use a concurrent program to
refresh the MV after patching cutover.
Example: create or replace view FND_EXAMPLE_MV# as select
/*AUTOREFRESH*/ ... ;
MV Usage Standards:
Do not assume that Fast Refresh is always possible. After an online patch,
Complete Refresh may be required. When refreshing a Materialized View, use
the 'FORCE' clause instead of 'FAST'.
See: Oracle Database SQL Language Reference 11g Release 2 (11.2) for more
information on the 'FORCE' option.
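For example, using the standard DBMS_MVIEW API with the hypothetical MV name used earlier (the '?' method requests a force refresh, which falls back to a complete refresh when fast refresh is not possible):

```sql
-- Force refresh: fast when possible, otherwise complete.
exec dbms_mview.refresh('APPS.FND_EXAMPLE_MV', method => '?')
```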
MV Dynamic DDL Standards:
Use AD_MV to execute Dynamic DDL for materialized views. Here is an
example of creating a materialized view using the AD_MV package:
--
-- Code Example: Create a materialized view using AD_MV interface.
--
-- Note:
-- when executed in the Run Edition, the MV is created immediately.
-- when executed in the Patch Edition, the MV is generated at CUTOVER.
--
begin
  -- Create MV
  ad_mv.create_mv('FND_EXAMPLE_MV',
    'create materialized view FND_EXAMPLE_MV '||
    ' tablespace '||ad_mv.g_mv_data_tablespace||' '||
    ' build deferred refresh on demand as '||
    'select /*AUTOREFRESH*/ '||
    ' upper(oracle_username) USERNAME '||
    ' , decode(read_only_flag,''C'',''pub'',''E'',''applsys'',''U'',''apps'') USERTYPE '||
    'from fnd_oracle_userid '||
    'where read_only_flag in (''C'',''E'',''U'') ');
end;
/
-- End of Code Example.
MV Online Patching Compliance Standards:
Deliver Materialized View using XDF. (GSCC File.Sql.82)
Do not drop an existing materialized view until the Cleanup phase of patch
execution.
Do not attempt to upgrade, refresh or access a Materialized View
Implementation in the Patch Edition. Although the MV implementation is
visible to the Patch Edition, it continues to implement the Run Edition of
the definition until the cutover phase. MV implementations are
automatically regenerated as needed at the cutover phase.
If an online patch must manually refresh the MV Implementation
contents, submit a concurrent request to do the refresh. The concurrent
request will be run after cutover and therefore after the MV
implementation has been regenerated.
If the MV Definition specifies the /*AUTOREFRESH*/ comment tag, then
the MV Contents will be automatically refreshed whenever the MV
implementation is regenerated.
For more information on materialized views, see: Oracle Database Concepts 11g
Release 2 (11.2).
For more information on using XDF (XML Definition File) features, see:
XML Definition File Utility.
Examples of SQL Replacements for PL/SQL Functions
-------------------------------------------------
To "Editions-enable" the APPS schema, non-Editionable objects must not depend
on Editionable objects (NE !-> E). To meet this requirement, the database
object development standards specify that Materialized Views (Materialized
Views, or MVs, are non-Editionable) must not call PL/SQL functions (which are
Editionable).
The examples below demonstrate how to replace frequently used Oracle
Applications Technology PL/SQL function calls with equivalent SQL in
Materialized Views. You may continue to call built-in PL/SQL functions such as
"upper()".
fnd_profile.value() replaced with a SQL sub-select
Before:
fnd_profile.value('MSC_HUB_REGION_INSTANCE')
After:
(select profile_option_value
from fnd_profile_option_values
where level_id = 10001
and (profile_option_id, application_id) =
(select profile_option_id, application_id
from fnd_profile_options
where profile_option_name = 'MSC_HUB_REGION_INSTANCE'))
Notes:
This replacement is valid ONLY in a Materialized View. For other uses of
fnd_profile.value(), continue using the normal PL/SQL call.
The general case for fetching profile option values is very complex, which
is why there is a dedicated PL/SQL package for doing it. But materialized
view results have to be valid in any context, so profile options
referenced in materialized views should only have site-level values, and
the replacement SQL only needs to support fetching the site-level value.
This replacement SQL will only use the profile option value set at the site
level.
fnd_message.get_string() replaced with a SQL sub-select
Before:
fnd_message.get_string('MSC','MSC_HUB_UNASSIGNED')
After:
(select substrb(REPLACE(message_text, '&&', '&'),1,2000)
from fnd_new_messages m, fnd_application a
where m.message_name = 'MSC_HUB_UNASSIGNED'
and m.language_code = 'US'
and a.application_short_name = 'MSC'
and m.application_id = a.application_id)
Notes:
This replacement is valid ONLY in a Materialized View. For other uses of
fnd_message.get_string(), continue using the normal PL/SQL call.
This replacement SQL will only retrieve the US language message text and is
not sensitive to any session language settings.
MV queries cannot contain a sub-SELECT within the main SELECT clause;
therefore, the replacement SQL is a bit trickier if the function call was
used in the MV SELECT clause.
Before:
select fnd_message.get_string('FND', 'CANCEL')
from dual
where 1=1
/
After:
select fmgs.result
from dual
, (select substrb(REPLACE(message_text, '&&', '&'),1,2000) result
from fnd_new_messages m, fnd_application a
where m.message_name = 'CANCEL'
and m.language_code = 'US'
and a.application_short_name = 'FND'
and m.application_id = a.application_id) fmgs
where 1=1
/
fnd_global.lookup_security_group() replaced with a SQL sub-select
Before:
fnd_global.lookup_security_group('INTEREST_STATUS', 279)
After:
(select nvl(max(lt.security_group_id), 0)
from fnd_lookup_types lt
where lt.view_application_id = 279
and lt.lookup_type = 'INTEREST_STATUS'
and lt.security_group_id in (
0,
to_number(decode(substrb(userenv('CLIENT_INFO'),55,1),
' ', '0',
null, '0',
substrb(userenv('CLIENT_INFO'),55,10)))))
Note: This replacement is valid ONLY in a Materialized View. For other uses of
fnd_global.lookup_security_group(), continue using the normal PL/SQL call.
Example of a Deferred Drop
--------------------------
Here is a code example of a deferred drop. This example is for a table:
REM dbdrv: sql ~PROD ~PATH ~FILE \
REM dbdrv: none none none sqlplus &phase=last \
REM dbdrv: checkfile:~PROD:~PATH:~FILE &un_fnd
-- Example logic to drop a table under online patching
--
-- Note: This script drops example table "APPLSYS.FND_EXAMPLE_TABLE".
-- To use this logic for another table, you must substitute "&un_fnd" with the
-- actual table owner token ("&un_&lt;product&gt;"), and
-- "FND_EXAMPLE_TABLE" with the actual table name.
--
-- Usage:
-- @TABLE_DROP_SCRIPT &lt;table_owner&gt;
drop synonym FND_EXAMPLE_TABLE;
drop view &1..FND_EXAMPLE_TABLE#;
exec ad_zd.load_ddl('CLEANUP', 'drop table &1..FND_EXAMPLE_TABLE')
References
For more information on the operation of Oracle E-Business Suite Online Patching, refer to Chapter 3, Patching Procedures, in Oracle E-Business Suite Maintenance Guide.
Fixed Object and Dictionary Statistics
There can sometimes be issues with Fixed Object and Dictionary Statistics in Online Patching Enablement, R12.2.n RUPs (online patching) or when producing AWR reports or SQLT.
A fixed object (X$ table) resides in memory only, and typically records information about the instance or memory structures. The V$ dynamic performance views are defined on top of X$ tables, e.g. V$SQL and V$SQL_PLAN.
Data dictionary tables (e.g. SYS.USER$, SYS.TS$, SYS.SEG$, SYS.OBJ$, SYS.TAB$, SYS.FILE$) are stored in data files like normal application tables.
Some fixed objects and dictionary objects will have grown significantly during the upgrade.
There are also changes to fixed objects due to Online Patching Enablement (and the underlying Edition-Based Redefinition). As a result, internal SQL in Online Patching Enablement, R12.2.n RUPs and other online patches can sometimes be long running. Gathering fixed object statistics can help in these circumstances, particularly on editioning objects.
There may be additional specific circumstances during the upgrade where fixed object or dictionary statistics need to be gathered (such as before importing schema statistics or running SQLT or AWR reports when AWR has grown significantly).
If there is internal SQL (on V$ views or on SYS/SYSTEM objects) appearing high in AWR and TKPROF reports, it is likely that dictionary and fixed object statistics need to be gathered.
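As a quick check, the top internal statements recorded in AWR can be listed as follows. This is a sketch using the standard AWR views; treating SQL parsed by SYS or SYSTEM as "internal" is an assumption:

```sql
-- Top SQL parsed by SYS or SYSTEM, by total elapsed time recorded in AWR.
select dhss.sql_id,
       round(sum(dhss.elapsed_time_delta)/1000000) elapsed_secs
from dba_hist_sqlstat dhss
where dhss.parsing_schema_name in ('SYS','SYSTEM')
group by dhss.sql_id
order by 2 desc;
```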
Note that the FND_STATS API does not gather statistics for dictionary or fixed objects. The DBMS_STATS APIs need to be used.
The commands are:
execute DBMS_STATS.GATHER_FIXED_OBJECTS_STATS(no_invalidate=>FALSE);
execute DBMS_STATS.GATHER_DICTIONARY_STATS(estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, options => 'GATHER AUTO', no_invalidate => FALSE);
Usually the "no_invalidate=>FALSE" argument will not be needed. However, the procedures DBMS_STATS.SET_DATABASE_PREFS, SET_GLOBAL_PREFS, SET_SCHEMA_PREFS or SET_TABLE_PREFS could have been used to set the default value for NO_INVALIDATE to TRUE.
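The current NO_INVALIDATE default can be checked before gathering with DBMS_STATS.GET_PREFS (a sketch; SYS.OBJ$ is just an example target):

```sql
-- Global default for NO_INVALIDATE.
select dbms_stats.get_prefs('NO_INVALIDATE') from dual;

-- Effective value for a specific dictionary table.
select dbms_stats.get_prefs('NO_INVALIDATE', 'SYS', 'OBJ$') from dual;
```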
If there are only a handful of internal SQLs with inefficient execution plans and only a few objects then specific objects could be targeted rather than gathering all dictionary or fixed object statistics.
e.g.
exec dbms_stats.gather_table_stats(ownname=>'SYS', tabname=>'OBJ$', no_invalidate=>false);
exec dbms_stats.gather_table_stats(ownname=>'SYS', tabname=>'X$KQFP', no_invalidate=>false);
See the following My Oracle Support documents for more information:
Best Practices for Gathering Statistics with Oracle E-Business Suite (Document 1586374.1)
Fixed Objects Statistics (GATHER_FIXED_OBJECTS_STATS) Considerations (Document 798257.1)
Express Diagnosis of Oracle E-Business Suite Release 12.2.n Upgrade Performance Issues
The latest version of this document can be found in My Oracle Support Document "Express Diagnosis of Oracle E-Business Suite Release 12.2.n Upgrade Performance Issues (Document 1583752.1)"
In This Document
Executive Overview
1. Introduction
2. R12.2, Online Patching and Diagnostics
3. Before Running the Oracle E-Business Suite Upgrade
3.1 Set Statistics_Level = ALL (or _rowsource_execution_statistics = TRUE)
3.2 AWR Snapshot Interval and Retention Period
4. During the Oracle E-Business Suite Upgrade
4.1 Obtain Top SQL in Cursor Cache or AWR
4.2 Obtain Display Cursor Report for Long-Running SQL
4.3 Obtain SQL Monitor Report for SQL Using Parallel Query/DML
4.4 Identify when SQL ran
4.5 Match Long-Running SQL to Jobs
4.6 Report on CBO Statistics for All Oracle E-Business Suite Tables
5. After the Oracle E-Business Suite Upgrade
5.1 Obtain AD Job Timing Report
5.2 Identify Long-Running Upgrade Jobs
5.3 Obtain File versions for Long-Running Jobs
5.4 Obtain AWR Reports
5.5 SQLHC or SQLT
5.6 SQLHC
5.7 SQLT with XTRACT method
6. Online Patching Diagnostics
6.1 Online Patching Enablement - Specific Diagnostics
6.2 Online Patching (ADOP) Logs and Diagnostics
6.3 Non ADOP Logs
6.4 Online Patching Log Analyzer Utility
6.5 adzdshowlog.out
6.6 Check the current status of the adop cycle
6.7 SQL to determine status of ADOP phases
7. Fixed Object and Dictionary Statistics
Executive Overview
This document describes the diagnostic strategies and methods that can be used during an Oracle E-Business Suite Release 12.2 upgrade to minimize downtime and expedite resolution of any issues. The content applies to upgrades from 11.5.10, R12.0.n or R12.1.n to R12.2.n. The same principles can also be applied to other Oracle E-Business Suite upgrades.
For more detailed information, see the My Oracle Support document "Best Practices for Minimizing Oracle E-Business Suite Release 12.2.n Upgrade Downtime (Document 1581549.1)".
Clarification: In all cases it is only necessary to produce diagnostics during test runs (and not during the final production run).
1. Introduction
When analyzing Release 12.2.n Upgrade performance issues, the goal is to:
Prevent wasted test iterations. Aim to provide solutions that solve the issue the first time.
Maximize the number of performance issues investigated on each iteration.
Maximize the number of performance issues resolved.
Upgrade jobs cannot be tested in isolation. They can only be tested on the next iteration. If a fix does not work, it is a potential wasted test iteration.
To do this, the following are needed:
Actual statistics: so it is possible to see exactly which execution plan steps are inefficient, rather than those that might be inefficient. The likely impact of a performance fix can also be estimated. There will then be a higher probability of providing a fix that solves the performance issue the first time. Also, it will be possible to identify marginal fixes (i.e. fixes that reduce elapsed times by 10-50%, for example by having more specific index access). These fixes often reduce contention between workers.
Diagnostics that are quick and easy to run, so that diagnostics can be obtained faster and on more jobs/SQL.
Diagnostics that have very little impact on the performance of the Release 12.2 upgrade: if they can be run during the upgrade, then the results are obtained sooner and the issue resolved more quickly.
2. R12.2, Online Patching and Diagnostics
The new diagnostics available for ADOP are logs. These give timestamps for particular steps in the upgrade, or list errors that have occurred.
However, they do not identify the SQLs or underlying events that caused the performance issue, so it is essential to use AWR reports, AD Job Timing reports, Display Cursor reports, SQLT extracts etc.
Online Patching (ADOP) and the underlying Edition-Based Redefinition are only used after "Online Patching Enablement" for the "R12.AD.C.Delta.n", "R12.TXK.C.Delta.n", "12.2.n RUP" and subsequent patches.
Note that although the Online Patching method is used for the latter stages in the upgrade to R12.2.n, these are run during downtime (the application is not running). Since R12.AD.C.Delta.5, the 12.2.n RUP patches can be applied using "downtime" mode. So the performance advice and challenges for an upgrade to R12.2.0 are largely the same as for an upgrade to R12.1.n, and the same method is used to apply many of the patches (AutoPatch).
However, the Online Patching Enablement patch, Online Patching and the Release Update Packs for 12.2.n are quite different from the main R12.2.0 upgrade, so they may introduce new performance challenges, particularly with internal SQL and ADOP SQL (i.e. SQL run from or on AD_ZD objects).
It is important to stress that although Online Patching (ADOP) provides new logs to track performance, the key diagnostics are still the same ones used for previous Oracle E-Business Suite upgrades (e.g. Display Cursor reports, SQL Monitor reports, AWR reports, AD Job Timing reports, SQLHC reports, SQLT extracts etc.)
3. Before Running the Oracle E-Business Suite Upgrade
3.1 Set Statistics_Level = ALL (or _rowsource_execution_statistics = TRUE)
This can be set using the command:
SQL> alter system set statistics_level='ALL';
This is the simplest way to see actual row source statistics (including elapsed time, physical reads, buffer gets etc.) for each execution plan line (in SQLT and Display Cursor reports). The alternative of SQL Trace and TKPROF requires editing standard code.
Note that the internal views v$sql_plan_statistics and v$sql_plan_statistics_all will not contain any actual row source statistics for execution plan lines if statistics_level = TYPICAL, even if timed_statistics = TRUE.
When the STATISTICS_LEVEL parameter is set to ALL, additional statistics are added to the set of statistics collected with the TYPICAL setting. The additional statistics are timed OS statistics and plan execution statistics, which include row source statistics.
Using this strategy will typically speed up the resolution of issues significantly and may also allow the correct solution to be identified the first time.
Alternatively, the same actual execution statistics can be collected by setting the initialization parameter _rowsource_execution_statistics = TRUE (with statistics_level = 'TYPICAL'). This gives a lower overhead than statistics_level = ALL.
Some technical architects and DBAs at customers (or implementing partners) can be resistant to setting statistics_level = ALL (or _rowsource_execution_statistics = TRUE), believing that this can slow down performance significantly. Two points are relevant here:
Although setting statistics_level = ALL / _rowsource_execution_statistics = TRUE will have some performance impact, it is likely to be small and not significant. The Release 12.2 upgrade is made up of batch processes, and so the statistics workload is a much lower proportion of the total.
Even if the performance impact is significant, the goal is to reduce the elapsed times for the latter dry runs and go-live (when it will be feasible to revert statistics_level / _rowsource_execution_statistics to their previous values). So suffering an increase in elapsed time during an early stage of testing is not an issue.
So there may be a small impact on elapsed time and the work that needs to be done initially, but it will help to subsequently reduce the elapsed time and amount of re-work that needs to be done.
Note that setting statistics_level to ALL while AWR is enabled could significantly increase the number of rows inserted into the WRH$_LATCH_CHILDREN table. So monitor the SYSAUX tablespace to ensure that it does not run out of space.
3.2 AWR Snapshot Interval and Retention Period
Automatic Workload Repository (AWR) should be enabled with a snapshot interval of 30 minutes (the default is 60 minutes). For short upgrades, a shorter snapshot interval may be more suitable.
The AWR retention period should be long enough to cover the duration of the upgrade run and a significant period afterwards (to gather diagnostics and analyze). The suggestion is N+7 days, where N is the estimated upgrade time, but a longer period will provide more time to gather subsequent diagnostics and statistics.
4. During the Oracle E-Business Suite Upgrade
4.1 Obtain Top SQL in Cursor Cache or AWR
This could be internal or application SQL. These scripts should be run regularly during the upgrade, particularly when there are long-running jobs. If SQL is still in memory (cursor cache), the following can be used to identify long-running SQLs that may not have been written to the AWR yet (at the last snapshot):

SELECT * FROM
(SELECT ss.sql_id,
        ROUND(SUM(ss.elapsed_time/1000000),0) elapsed_time_secs,
        ROUND(SUM(ss.cpu_time/1000000),0) cpu_time_secs,
        SUM(ss.disk_reads) disk_reads,
        SUM(ss.direct_writes) direct_writes,
        SUM(ss.buffer_gets) buffer_gets,
        SUM(ss.px_servers_executions) px_server_execs,
        SUM(ss.rows_processed) rows_processed,
        SUM(ss.executions) executions,
        SUM(ss.application_wait_time) apwait_secs,
        SUM(ss.sharable_mem) sharable_mem,
        SUM(ss.total_sharable_mem) total_sharable_mem
 FROM v$sqlstats ss
 GROUP BY ss.sql_id
 ORDER BY 2 DESC)
WHERE ROWNUM <= 100;

The following SQL script will report the longest-running SQLs between two AWR snapshots.
SELECT * FROM
(SELECT dhs.sql_id,
        ROUND(SUM(dhs.elapsed_time_delta/1000000),0) elapsed_time_secs,
        ROUND(SUM(dhs.cpu_time_delta/1000000),0) cpu_time_secs,
        SUM(dhs.disk_reads_delta) disk_reads,
        SUM(dhs.buffer_gets_delta) buffer_gets,
        SUM(dhs.px_servers_execs_delta) px_server_execs,
        SUM(dhs.rows_processed_delta) rows_processed,
        SUM(dhs.executions_delta) executions,
        ROUND(SUM(dhs.iowait_delta/1000000),0) iowait_secs,
        ROUND(SUM(dhs.clwait_delta/1000000),0) clwait_secs,
        ROUND(SUM(dhs.ccwait_delta/1000000),0) ccwait_secs,
        ROUND(SUM(dhs.apwait_delta/1000000),0) apwait_secs
 FROM dba_hist_sqlstat dhs, v$database d
 WHERE dhs.dbid = d.dbid
 AND snap_id > <begin_snap>
 AND snap_id <= <end_snap>
 GROUP BY dhs.sql_id
 ORDER BY 2 DESC)
WHERE ROWNUM <= 100;

Where <begin_snap> and <end_snap> are the start and end snapshot IDs.
The output of this statement will look similar to the following:

SQL_ID        ELAPSED_TIME_SECS CPU_TIME_SECS DISK_READS BUFFER_GETS ....
------------- ----------------- ------------- ---------- ----------- ....
xxxxxxxxxxxxx            367440         42999   34838244  3795838289 ....
xxxxxxxxxxxxx            264369        170788     441127   562033013 ....
xxxxxxxxxxxxx             70370          6448    3599284   469639133 ....
xxxxxxxxxxxxx             68298         38896    7125573  1327384554 ....
xxxxxxxxxxxxx             63600         27402   20043712   587615960 ....

The elapsed time is the maximum elapsed time for all workers of a job.
Enterprise Manager can also be used to identify expensive SQL as it occurs.
4.2 Obtain Display Cursor Report for Long-Running SQL
For long-running SQL reported by the above script, run a Display Cursor report (with the ALL +ALLSTATS option). This displays the actual execution plan of any cursor loaded in the cursor cache. At the basic level it shows the runtime execution plan. However, the format ALL also includes extra information such as pruning, parallel execution, predicate, projection, alias and remote SQL information.
This should be run (as soon as possible) whilst the jobs are running or very shortly afterwards.
If running of the Display Cursor report is delayed, the cursor may have been flushed from memory or invalidated; no actual statistics will be available and the report will display no data.
The +ALLSTATS option (which includes IOSTATS and MEMSTATS) will include actual statistics for each execution plan step. These include:
Elapsed time
Physical reads
Buffer gets
Memory used (in PGA) for memory-intensive operations (such as hash joins, sorts, bitmap operators etc.)
However, this additional information is only provided if statistics_level = ALL / _rowsource_execution_statistics = TRUE.
Note that SQLT with XTRACT will also report actual row source statistics in the same circumstances. However, Display Cursor provides a simpler view of the information. It can also be run during the upgrade, while the long-running SQL is in progress, without much of an overhead.
The report can be produced by running the following SQL script:

SET pages 0
SET lines 300
SET LONG 10000
SET LONGCHUNKSIZE 10000
SPOOL <report_name>.txt
SELECT * FROM TABLE(dbms_xplan.display_cursor('<sql_id>', NULL, 'ALL +ALLSTATS'));
SPOOL OFF;

For more information, see the "Display Cursor" section in My Oracle Support document "Oracle E-Business Suite Performance Guide (Document 1672174.1)".
If the SQL is no longer in memory, but is in the AWR, use the Display AWR report (DBMS_XPLAN.DISPLAY_AWR) instead:

SET pages 0
SET lines 300
SET LONG 10000
SET LONGCHUNKSIZE 10000
SPOOL <report_name>.txt
SELECT * FROM TABLE(dbms_xplan.display_awr('<sql_id>', NULL, NULL, 'ALL'));
SPOOL OFF;

However, this does not report on actuals: it does not have a +ALLSTATS option, and there are no actual statistics for execution plan steps stored in AWR. Note that SQLT with the XTRACT method will not report on actual statistics in this case either.
Note that the Display Cursor and Display AWR reports only show the sql_text (first 1000 characters) and not the full_text. So, if necessary, run the following SQL script to obtain the full SQL text.
SET pages 0
SET lines 300
SET LONG 10000
SET LONGCHUNKSIZE 10000
SPOOL <report_name>.txt
SELECT sql_id, sql_text, sql_fulltext FROM v$sql WHERE sql_id = '<sql_id>';
SPOOL OFF;

4.2.1 Automate execution of Display Cursor reports for top SQLs
The following script can be used (on Linux) to automatically produce Display Cursor reports for the top SQL (by elapsed time) during each upgrade patch. It also produces Display AWR reports, which only provide the execution plan and no actual statistics, but which may be useful as a fallback if statistics_level or _rowsource_execution_statistics have not been set, or a particular SQL is no longer in the cursor cache.
Although this automated process will produce many reports that are never looked at, it will mean that if a long-running SQL (as part of a long-running job) is identified, the Display Cursor report will already be available for analysis. It will not be necessary to wait until the next test run to obtain one.
There are 4 parameters that should be passed to the script:
Apps password.
Number of iterations. This should be large enough so that the shell script keeps on producing Display Cursor reports throughout the upgrade patch. It is recommended that this is much larger than needed; the shell script can always be killed when the patch has completed.
Iteration gap in minutes. This should be short enough so that all long-running SQLs of interest are reported, but not cause much overhead. 10 minutes is recommended.
Number of top SQLs to monitor (by elapsed time). This should be large enough so that all long-running SQLs of interest are reported, but not cause much overhead. 20 is recommended.
So, if the apps password is represented by <apps_password>, the number of iterations is 72, the gap is 10 minutes and the number of top SQLs is 20, the script will be called as follows:
xxauto_dc.sh <apps_password> 72 10 20
The shell script loops for the specified number of iterations.
In each iteration of the loop it : Calls the SQL script xxautodc.sql, passing the number of top SQLs parameter. This builds a temporary SQL script dispcurbatch.sql to spool and run the Display Cursor reports for the top SQL and then executes that script. Removes the temporary SQL script dispcurbatch.sql (after it has been run). Sleeps for the specified number of minutes (before starting the next iteration of the loop). There are 3 sets of reports : Disp_Curs__.txt. This is the Display Cursor report for a top SQL in the AWR Disp_AWR__.txt. This is the Display AWR report for a top SQL in the AWR CC_Disp_Curs__.txt. This is the Display Cursor report for a top SQL in the Cursor Cache Where is the SQL ID for the particular SQL statement. indicates the approximate time that the report was produced (i.e. from which iteration) in format YYMMDDHH24MI Ensure that there is plenty of space on the file system (volume) available to accommodate the output files. # Script xxauto_dc.sh # Get arguments # 1. apps password # 2. number of iterations # 3. iteration gap - minutes # 4. No of Top SQLs export appspass="$1" export numit="$2" export gap="$3" export topsql="$4" iteration=1 until [ ${iteration} -gt ${numit} ] do echo "Iteration ${iteration}" sqlplus apps/${appspass} @xxautodc.sql $topsql # remove batch file (if it exists) rm -f dispcurbatch.sql iteration=`expr $iteration + 1` # Now sleep for requested number of minutes if [ ${iteration} -gt ${numit} ] then echo "Do not sleep" else sleep ${gap}m fi done The SQL script xxautodc.sql: Reads the "Number of Top SQLs" parameter. Runs SQL on the AWR and cursor cache to spool output to temporary SQL script file dispcurbatch.sql. The SQL script dispcurbatch.sql will contain commands to spool and run Display Cursor reports for the top SQLs (in AWR and cursor cache). Executes the SQL script dispcurbatch.sql. 
-- Script xxautodc.sql -- example script for building script to run Display Cursor for top sql ids -- It is absolutely essential that either statistics_level = ALL or _rowsource_execution_statistics = TRUE -- otherwise the Display Cursor report will not contain the actual rowsource statistics (for each plan line) -- that are essential in quickly identifying if and why a SQL has an inefficient execution plan. WHENEVER SQLERROR CONTINUE WHENEVER OSERROR CONTINUE VARIABLE l_topsql NUMBER -- pick up l_topsql and set bind var DEFINE topsql = '&&1' exec :l_topsql := '&topsql' set echo off heading off feedback off verify off set pages 0 termout off set linesize 100 spool dispcurbatch.sql SELECT 'WHENEVER SQLERROR CONTINUE' FROM DUAL; SELECT 'WHENEVER OSERROR CONTINUE' FROM DUAL; SELECT 'SET pages 0' FROM DUAL; SELECT 'SET lines 300' FROM DUAL; SELECT 'SET LONG 10000' FROM DUAL; SELECT 'SET LONGCHUNKSIZE 10000' FROM DUAL; SELECT ' ', 'SPOOL ' ||'Disp_Curs_' ||sql_id ||'_' ||TO_CHAR(SYSDATE,'YYMMDDHH24MI') ||'.txt ', 'SELECT * FROM TABLE(dbms_xplan.display_cursor(''' ||sql_id ||''', NULL, ''ALL +ALLSTATS'')); ', 'SPOOL OFF', ' ', 'SPOOL ' ||'Disp_AWR_' ||sql_id ||'_' ||TO_CHAR(SYSDATE,'YYMMDDHH24MI') ||'.txt ', 'SELECT * FROM TABLE(dbms_xplan.display_awr(''' ||sql_id ||''', format => ''ALL'')); ', 'SPOOL OFF' FROM (SELECT /*+ leading(d,i,dhss,dhst) */ dhss.sql_id, ROUND(SUM(dhss.elapsed_time_delta/1000000),0) elapsed_time_secs -- needs sum because of different plan (hash)s for same SQL FROM v$database d, v$instance i, dba_hist_sqlstat dhss, dba_hist_sqltext dhst WHERE dhss.dbid = d.dbid AND dhss.instance_number = i.instance_number AND dhst.dbid = dhss.dbid AND dhst.sql_id = dhss.sql_id AND dhst.command_type != 47 -- exclude anonymous blocks AND dhss.snap_id = (SELECT /*+ push_subq */ MAX(dhsn.snap_id) FROM dba_hist_snapshot dhsn, v$database d, v$instance i WHERE dhsn.dbid = d.dbid AND dhsn.instance_number = i.instance_number) -- Just report on latest completed snapshot 
GROUP BY dhss.sql_id ORDER BY 2 DESC) WHERE rownum <= :l_topsql -- limit to the top SQLs (ordered by elapsed time). ; -- Also get top SQLs from cursor cache - to fill any gaps SELECT ' ', 'SPOOL ' ||'CC_Disp_Curs_' ||sql_id ||'_' ||TO_CHAR(SYSDATE,'YYMMDDHH24MI') ||'.txt ', 'SELECT * FROM TABLE(dbms_xplan.display_cursor(''' ||sql_id ||''', NULL, ''ALL +ALLSTATS'')); ', 'SPOOL OFF' FROM (SELECT ss.sql_id, ROUND(SUM(ss.delta_elapsed_time/1000000),0) elapsed_time_secs FROM v$sqlstats ss, v$sql s WHERE s.sql_id = ss.sql_id AND s.child_address = ss.last_active_child_address AND s.command_type != 47 -- exclude anonymous blocks GROUP BY ss.sql_id ORDER BY 2 DESC) WHERE rownum <= :l_topsql -- limit to the top SQLs (ordered by elapsed time). ; spool off set termout on echo on verify on heading on feedback on set termout off -- Now run the script that has been built above @@dispcurbatch set termout on exit 4.3 Obtain SQL Monitor Report for SQL Using Parallel Query/DML Obtain SQL Monitor Reports for SQL that uses Parallel Query or DML The main advantage of this is that it gives a good view of how parallel SQL/DML performs across stages of the plan and parallel child (secondary) processes. It can also give a good idea of the actual executions and row counts for each execution plan line even if "statistics_level" initialization parameter is not set to ALL ( or "_rowsource_execution_statistics" is not set to TRUE) at the time the SQL is executed. It can be run during the upgrade, while the long running SQL is in progress (or shortly afterwards), without much of an overhead. 
It can be produced by running the following SQL script:
set trimspool on
set trim on
set pages 0
set long 10000000
set longchunksize 10000000
set linesize 200
set termout off
spool sql_monitor_for_<sql_id>.htm
variable my_rept CLOB;
BEGIN
  :my_rept := dbms_sqltune.report_sql_monitor(sql_id => '<sql_id>', report_level => 'ALL', type => 'HTML');
END;
/
print :my_rept
spool off;
set termout on
For more information see the "SQL Monitor Report" section in My Oracle Support document "Oracle E-Business Suite Performance Guide (Document 1672174.1)"
4.4 Identify when SQL ran
The following SQL will show when a particular piece of SQL ran (i.e. between which snapshots). This is useful in matching SQLs to jobs.
SELECT
  dhs.sql_id,
  dsn.snap_id,
  dsn.begin_interval_time,
  dsn.end_interval_time,
  ROUND(SUM(dhs.elapsed_time_delta/1000000),0) elapsed_time_secs
FROM dba_hist_sqlstat dhs,
  v$database d,
  dba_hist_snapshot dsn
WHERE dhs.dbid = d.dbid
AND dsn.snap_id = dhs.snap_id
AND dsn.dbid = dhs.dbid
AND dsn.instance_number = dhs.instance_number
AND dhs.sql_id = '<sql_id>'
AND dsn.snap_id > <begin_snap>
AND dsn.snap_id <= <end_snap>
GROUP BY dhs.sql_id, dsn.snap_id, dsn.begin_interval_time, dsn.end_interval_time
ORDER BY dsn.snap_id;
Where <begin_snap> and <end_snap> are the start and end snapshot IDs.
The output of this statement will look similar to the following:
SQL_ID        SNAP_ID BEGIN_INTERVAL_TIME     END_INTERVAL_TIME       ELAPSED_TIME_SECS
------------- ------- ----------------------- ----------------------- -----------------
xxxxxxxxxxxxx    xxxx 04-JAN-13 23.00.25.5560 05-JAN-13 00.00.21.1620             23123
xxxxxxxxxxxxx    xxxx 05-JAN-13 00.00.21.1620 05-JAN-13 01.00.38.2680             37145
4.5 Match Long-Running SQL to Jobs
The following SQL will give jobs running at any point between two time intervals, with the longest running jobs first. This is useful in matching SQLs to jobs.
Where:
<start_time> = start of period to report in format YYYYMMDDHH24MISS
<end_time> = end of period to report in format YYYYMMDDHH24MISS
Note that the job must have completed for it to be reported by this SQL script.
SELECT phase, phase_name, product, job_name, max_elapsed_time, min_start_time, max_end_time, workers
FROM
  (SELECT
    phase,
    phase_name,
    product,
    job_name,
    MAX(elapsed_time) elapsed_time_unconv,
    LPAD(FLOOR(MAX(elapsed_time)*24), 4)||':'||
    LPAD(FLOOR((MAX(elapsed_time)*24-FLOOR(MAX(elapsed_time)*24))*60), 2, '0')||':'||
    LPAD(MOD(ROUND(MAX(elapsed_time)*86400), 60), 2, '0') max_elapsed_time,
    INITCAP(TO_CHAR(MIN(start_time),'MON DD HH24:MI', 'NLS_DATE_LANGUAGE = american')) min_start_time,
    INITCAP(TO_CHAR(MAX(end_time),'MON DD HH24:MI', 'NLS_DATE_LANGUAGE = american')) max_end_time,
    COUNT(worker_id) workers
  FROM ad_task_timing
  WHERE session_id = <session_id>
  AND
    (start_time BETWEEN TO_DATE('<start_time>','YYYYMMDDHH24MISS') AND TO_DATE('<end_time>','YYYYMMDDHH24MISS')
     OR
     NVL(end_time, start_time+elapsed_time) BETWEEN TO_DATE('<start_time>','YYYYMMDDHH24MISS') AND TO_DATE('<end_time>','YYYYMMDDHH24MISS')
    )
  GROUP BY phase, phase_name, product, job_name)
ORDER BY elapsed_time_unconv DESC;
The output of this statement will look similar to the following:
phase phase_name product job_name          max_elapsed_time min_start_time max_end_time workers
----- ---------- ------- ----------------- ---------------- -------------- ------------ -------
  255 upg+80     zx      zxaptrxmigupd.sql          2:43:47 Mar 13 04:22   Mar 13 07:06      64
  255 upg+80     ap      apxlaupg.sql               1:38:57 Mar 13 04:03   Mar 13 05:42       1
To find upgrade AD jobs that are in progress, use adctrl option 1 (Show worker status). When they started can be determined by looking at the patch log file. e.g.
$ cat u_merged.log | grep -A2 cstpostimportaad.sql
Assigned: file cstpostimportaad.sql on worker 48 for product bom username BOM. Time is: Fri Mar 22 2013 22:48:54
4.6 Report on CBO Statistics for All Oracle E-Business Suite Tables
Report on the CBO statistics for Oracle E-Business Suite tables during the upgrade, before adsstats.sql is run (in 12.2.0 CUP). The script adsstats.sql will populate the statistics correctly before the end of the upgrade.
The fact that tables may have incorrect statistics during the upgrade will not be visible. So it may not be possible to see that a table had null, zero or inaccurate CBO statistics, and that this is the reason for an expensive execution plan.
The following script will report on the CBO statistics for all Oracle E-Business Suite tables:
SELECT owner, table_name, num_rows,
  TO_CHAR(last_analyzed,'DD-MON-YYYY HH24:MI:SS') last_analyzed
FROM all_tables
WHERE owner IN
  (SELECT UPPER(oracle_username) sname
   FROM fnd_oracle_userid
   WHERE oracle_id BETWEEN 900 AND 999
   AND read_only_flag = 'U'
   UNION ALL
   SELECT DISTINCT UPPER(oracle_username) sname
   FROM fnd_oracle_userid a,
     fnd_product_installations b
   WHERE a.oracle_id = b.oracle_id)
ORDER BY owner, table_name;
The output of this statement will look similar to the following:
OWNER  TABLE_NAME                   NUM_ROWS LAST_ANALYZED
------ ---------------------------- -------- --------------------
ABM    ABM_ACC_MAP_SUM_REP                 0 06-JAN-2013 08:46:33
ABM    ABM_ACT_ACC_RU_DAT                  0 06-JAN-2013 08:46:35
ABM    ABM_ACT_STA_RU_DAT                  0 06-JAN-2013 08:46:36
ABM    ABM_ACT_TAGS                        0 06-JAN-2013 08:46:37
ABM    ABM_API_TEMPLATES                  38 06-JAN-2013 08:44:53
ABM    ABM_API_TEMPLATES_TL              722 06-JAN-2013 08:41:16
ABM    ABM_API_TEMPLATE_ATTRIBUTES       248 06-JAN-2013 08:44:34
5. After the Oracle E-Business Suite Upgrade
These are diagnostics to be obtained directly after each of the main patches/RUPs have completed. This will be directly after each of 12.2.0 CUP, Online Patching Enablement, R12.AD.C.Delta.n, R12.TXK.C.Delta.n and 12.2.n RUP.
5.1 AD Job Timing Report
This reports:
Number of successful, failed, deferred, re-started or skipped jobs.
The top 100 time consuming jobs.
The failed, deferred, re-started and skipped jobs.
The timing of each upgrade phase, with the total number of jobs, and the number deferred, re-started and skipped.
However, it only reports the Top 100 Time Consuming Jobs, and for AD Parallel jobs it considers each worker to be a different job.
This means it may only report a handful of jobs. An alternative is to query the AD_TASK_TIMING table for long running jobs directly.
When ADOP, AutoPatch or AD Administration is run, it automatically generates an AD Job Timing report (adt.lst). The contents of this report can be accessed from Oracle Application Manager, or reports can be obtained for completed upgrade sessions from the $APPL_TOP/admin/<SID>/out directory. The report is called adt.lst.
The AD Job Timing Report can also be run for AD Administration jobs from the command line:
$ cd $APPL_TOP/admin/<SID>/out
$ sqlplus <APPS username>/<APPS password> @$AD_TOP/admin/sql/adtimrpt.sql \
The following SQL statement will show the status for each adop phase along with its corresponding session id.
SELECT adop_session_id, prepare_status, apply_status, finalize_status,
cutover_status, cleanup_status, abort_status, status, abandon_flag, node_name
FROM ad_adop_sessions
ORDER BY adop_session_id;
This is effectively a history of online patching in an environment.
The following statuses apply to all phases:
Y : the phase is done
N : the phase has not been completed
X : the phase is not applicable
R : the phase is running (in progress)
F : the phase has failed
P : at least one patch has already been applied for this session id (applicable only to the apply phase)
C : the ADOP session has completed
Note: Numerical statuses are only relevant for the cutover phase. These status values are updated when a step has completed, and are as follows:
0 : cutover/force_shutdown has started
1 : "force_shutdown" step has successfully executed
3 : "db_cutover" step has successfully executed
4 : "fs_cutover" step has successfully executed
6 : "force_startup" step has successfully executed
Cutover statuses
cutover_status='Y' 'COMPLETED'
cutover_status not in ('N','Y','X') and status='F' 'FAILED'
cutover_status='0' 'CUTOVER STARTED'
cutover_status='1' 'SERVICES SHUTDOWN COMPLETED'
cutover_status='3' 'DB CUTOVER COMPLETED'
cutover_status='D' 'FLIP SNAPSHOTS COMPLETED'
cutover_status='4' 'FS CUTOVER COMPLETED'
cutover_status='5' 'ADMIN STARTUP COMPLETED'
cutover_status='6' 'SERVICES STARTUP COMPLETED'
cutover_status='N' 'NOT STARTED'
cutover_status='X' 'NOT APPLICABLE'
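The mapping above can be folded directly into the session-status query. The following is an illustrative query (the columns are those of AD_ADOP_SESSIONS shown earlier; the display strings are taken from the mapping above):

```sql
-- Illustrative: render a readable cutover status for each adop session,
-- using the cutover_status mapping listed above.
SELECT adop_session_id,
       CASE
         WHEN cutover_status = 'Y' THEN 'COMPLETED'
         WHEN cutover_status NOT IN ('N','Y','X')
              AND status = 'F'     THEN 'FAILED'
         WHEN cutover_status = '0' THEN 'CUTOVER STARTED'
         WHEN cutover_status = '1' THEN 'SERVICES SHUTDOWN COMPLETED'
         WHEN cutover_status = '3' THEN 'DB CUTOVER COMPLETED'
         WHEN cutover_status = 'D' THEN 'FLIP SNAPSHOTS COMPLETED'
         WHEN cutover_status = '4' THEN 'FS CUTOVER COMPLETED'
         WHEN cutover_status = '5' THEN 'ADMIN STARTUP COMPLETED'
         WHEN cutover_status = '6' THEN 'SERVICES STARTUP COMPLETED'
         WHEN cutover_status = 'N' THEN 'NOT STARTED'
         WHEN cutover_status = 'X' THEN 'NOT APPLICABLE'
       END cutover_display
FROM   ad_adop_sessions
ORDER  BY adop_session_id;
```

Note that the FAILED branch is checked after the 'Y' branch, so a completed cutover is never reported as failed even when a later phase has status 'F'.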
Express Diagnosis of Oracle E-Business Suite Release 12.2.n Upgrade Performance Issues (Doc ID 1583752.1)
Express Diagnosis of Oracle E-Business Suite Release 12.2.n Upgrade Performance Issues
The latest version of this document can be found in My Oracle Support Document "Express Diagnosis of Oracle E-Business Suite Release 12.2.n Upgrade Performance Issues (Document 1583752.1)"
In This Document
Executive Overview
1. Introduction
2. R12.2, Online Patching and Diagnostics
3. Before Running the Oracle E-Business Suite Upgrade
3.1 Set Statistics_Level = ALL (or _rowsource_execution_statistics = TRUE)
3.2 AWR Snapshot Interval and Retention Period
4. During the Oracle E-Business Suite Upgrade
4.1 Obtain Top SQL in Cursor Cache or AWR
4.2 Obtain Display Cursor Report for Long-Running SQL
4.3 Obtain SQL Monitor Report for SQL Using Parallel Query/DML
4.4 Identify when SQL ran
4.5 Match Long-Running SQL to Jobs
4.6 Report on CBO Statistics for All Oracle E-Business Suite Tables
5. After the Oracle E-Business Suite Upgrade
5.1 Obtain AD Job Timing Report
5.2 Identify Long-Running Upgrade Jobs
5.3 Obtain File versions for Long-Running Jobs
5.4 Obtain AWR Reports
5.5 SQLHC or SQLT
5.6 SQLHC
5.7 SQLT with XTRACT method
6. Online Patching Diagnostics
6.1 Online Patching Enablement - Specific Diagnostics
6.2 Online Patching (ADOP) Logs and Diagnostics
6.3 Non ADOP Logs
6.4 Online Patching Log Analyzer Utility
6.5 adzdshowlog.out
6.6 Check the current status of the adop cycle
6.7 SQL to determine status of ADOP phases
7. Fixed Object and Dictionary Statistics
Executive Overview
This document describes the diagnostic strategies and methods that can be used during an Oracle E-Business Suite Release 12.2 upgrade to minimize downtime and expedite resolution of any issues.
The content applies to upgrades from 11.5.10, R12.0.n or R12.1.n to R12.2.n.
The same principles can also be applied to other Oracle E-Business Suite upgrades.
For more detailed information see the My Oracle Support document "Best Practices for Minimizing Oracle E-Business Suite Release 12.2.n Upgrade Downtime (Document 1581549.1)".
Clarification: in all cases it is only necessary to produce diagnostics during test runs (and not during the final production run).
1. Introduction
When analyzing Release 12.2.n Upgrade performance issues, the goal is to:
Prevent wasted test iterations. Aim to provide solutions that solve the issue first time.
Maximize the number of performance issues investigated on each iteration.
Maximize the number of performance issues resolved.
Fixes for upgrade jobs cannot be tested in isolation; they can only be tested on the next test iteration. So if a fix does not work, a test iteration is potentially wasted.
To do this the following are needed:
Actual statistics: So it is possible to see exactly which execution plan steps are inefficient, rather than those that might be inefficient. The likely impact of a performance fix can also be estimated. There will then be a higher probability of providing a fix that can solve the performance issue first time. Also, it will be possible to identify marginal fixes (i.e. fixes that reduce elapsed times by 10-50%, for example by having more specific index access). These fixes often reduce contention between workers.
Diagnostics that are quick and easy to run, so that diagnostics can be obtained faster and on more jobs/SQL.
Diagnostics that have very little impact on the performance of the Release 12.2 upgrade: if they can be run during the upgrade then the results are obtained sooner and the issue resolved more quickly.
2. R12.2, Online Patching and Diagnostics
The new diagnostics available for ADOP are logs. These give timestamps for particular steps in the upgrade, or list errors that have occurred. However, they do not identify the SQLs or underlying events that caused the performance issue, so it is essential to use AWR reports, AD Job Timing Reports, Display Cursor Reports and SQLT Extracts etc.
Online Patching (ADOP) and the underlying Edition-Based Redefinition are only used after "Online Patching Enablement" for the "R12.AD.C.Delta.n", "R12.TXK.C.Delta.n", "12.2.n RUP" and subsequent patches.
Note that although the Online Patching method is used for the latter stages in the upgrade to R12.2.n, these are run during downtime (the application is not running).
Since R12.AD.C.Delta.5 the 12.2.n RUP patches can be applied using "downtime" mode.
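For example, with R12.AD.C.Delta.5 or later, a RUP patch can be applied in downtime mode as follows (the patch number below is a placeholder; all application services must be shut down first):

```shell
# Apply a patch in "downtime" mode: no online patching cycle
# (prepare/cutover etc.) is used. Patch number is illustrative.
adop phase=apply patches=12345678 apply_mode=downtime
```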
So the performance advice and challenges for an upgrade to R12.2.0 are largely the same as for an upgrade to R12.1.n. The same method is used to apply many of the patches (AutoPatch).
However, the Online Patching Enablement patch, Online Patching and the Release Update Packs for 12.2.n are quite different to the main R12.2.0 upgrade, so they may introduce new performance challenges, particularly with internal SQL and ADOP SQL (i.e. SQL run from or on AD_ZD objects).
It is important to stress that although Online Patching (ADOP) provides new logs to track performance, the key diagnostics are still the same ones used for previous Oracle E-Business upgrades (e.g. Display Cursor reports, SQL Monitor reports, AWR reports, AD Job Timing Reports, SQLHC reports, SQLT Extracts etc.)
3. Before Running the Oracle E-Business Suite Upgrade
3.1 Set Statistics_Level = ALL (or _rowsource_execution_statistics = TRUE)
This can be set using the command:
SQL> alter system set statistics_level='ALL';
This is the simplest way to see actual row source statistics (including elapsed time, physical reads, buffer gets etc) for each execution plan line (on SQLT and Display Cursor report). The alternative of SQL Trace and TKPROF requires editing standard code.
Note that the internal views v$sql_plan_statistics and v$sql_plan_statistics_all will not contain any actual row source statistics for execution plan lines if statistics_level = TYPICAL, even if timed_statistics = TRUE.
When the STATISTICS_LEVEL parameter is set to ALL, additional statistics are added to the set of statistics collected with the TYPICAL setting. The additional statistics are timed OS statistics and plan execution statistics, which include row source statistics.
Using this strategy will typically speed up the resolution of issues significantly and may also allow the correct solution to be identified first time.
Alternatively, the same actual execution statistics can be collected by setting the initialization parameter _rowsource_execution_statistics=TRUE (with statistics_level = 'TYPICAL'). This gives a lower overhead than statistics_level=ALL.
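Before starting a test run, the current settings can be confirmed with a query such as the following (illustrative; note that hidden "_" parameters typically only appear in V$PARAMETER once they have been set explicitly):

```sql
-- Check the parameters that enable actual row source statistics.
SELECT name, value
FROM   v$parameter
WHERE  name IN ('statistics_level', '_rowsource_execution_statistics');
```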
Some technical architects and DBAs at customers (or implementing partners) can be resistant to setting statistics_level = ALL (or _rowsource_execution_statistics = TRUE), believing that this can slow down performance significantly.
Two points are relevant here:
Although setting statistics_level = ALL / _rowsource_execution_statistics = TRUE will have some performance impact, it is likely to be small and not significant. The Release 12.2 upgrade is made up of batch processes, and so the statistics workload is a much lower proportion of the total.
Even if the performance impact is significant, the goal is to reduce the elapsed times for the latter dry runs and go live (when it will be feasible to revert statistics_level / _rowsource_execution_statistics to their previous values). So suffering an increase in elapsed time during an early stage of testing is not an issue.
So there may be a small impact on elapsed time and the work that needs to be done initially, but it will help to subsequently reduce the elapsed time and amount of re-work that needs to be done.
Note that setting statistics_level to ALL while AWR is enabled could significantly increase the number of rows inserted to the WRH$_LATCH_CHILDREN table. So monitor the SYSAUX tablespace to ensure that it does not run out of space.
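Free space in SYSAUX can be monitored with a simple query such as the following (illustrative):

```sql
-- Monitor SYSAUX free space while statistics_level = ALL,
-- since WRH$_LATCH_CHILDREN can grow quickly.
SELECT tablespace_name,
       ROUND(SUM(bytes)/1024/1024) free_mb
FROM   dba_free_space
WHERE  tablespace_name = 'SYSAUX'
GROUP  BY tablespace_name;
```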
3.2 AWR Snapshot Interval and Retention Period
Automatic Workload Repository (AWR) should be enabled with a snapshot interval of 30 minutes (the default is 60 minutes). For short upgrades, a shorter snapshot interval may be more suitable.
The AWR retention period should be long enough to cover the duration of the upgrade run and a significant period afterwards (to gather diagnostics and analyze). The suggestion is N+7 days, where N is the estimated upgrade time, but a longer period will provide more time to gather subsequent diagnostics and statistics.
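Both settings can be changed with the DBMS_WORKLOAD_REPOSITORY package; the interval and retention arguments are specified in minutes. For example, for a 30 minute interval and a 14 day retention:

```sql
-- Set AWR snapshot interval to 30 minutes and retention to 14 days
-- (both arguments are in minutes).
BEGIN
  dbms_workload_repository.modify_snapshot_settings(
    interval  => 30,
    retention => 14 * 24 * 60);
END;
/
```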
4. During the Oracle E-Business Suite Upgrade
4.1 Obtain Top SQL in Cursor Cache or AWR
This could be internal or application SQL. These scripts should be run regularly during the upgrade, particularly when there are long-running jobs.
If SQL is still in memory (cursor cache) the following can be used to identify long running SQLs that may not have been written to the AWR yet (at last snapshot):
SELECT * FROM
(SELECT
ss.sql_id,
ROUND(SUM(ss.elapsed_time/1000000),0) elapsed_time_secs,
ROUND(SUM(ss.cpu_time/1000000),0) cpu_time_secs,
SUM(ss.disk_reads) disk_reads,
SUM(ss.direct_writes) direct_writes,
SUM(ss.buffer_gets) buffer_gets,
SUM(ss.px_servers_executions) px_server_execs,
SUM(ss.rows_processed) rows_processed,
SUM(ss.executions) executions,
SUM(ss.application_wait_time) apwait_secs,
SUM(ss.sharable_mem) sharable_mem,
SUM(ss.total_sharable_mem) total_sharable_mem
FROM v$sqlstats ss
GROUP BY ss.sql_id
ORDER BY 2 DESC)
WHERE ROWNUM <= 100;
The following SQL script will report the longest running SQLs between two AWR snapshots.
SELECT * FROM
(SELECT
dhs.sql_id,
ROUND(SUM(dhs.elapsed_time_delta/1000000),0) elapsed_time_secs,
ROUND(SUM(dhs.cpu_time_delta/1000000),0) cpu_time_secs,
SUM(dhs.disk_reads_delta) disk_reads,
SUM(dhs.buffer_gets_delta) buffer_gets,
SUM(dhs.px_servers_execs_delta) px_server_execs,
SUM(dhs.rows_processed_delta) rows_processed,
SUM(dhs.executions_delta) executions,
ROUND(SUM(dhs.iowait_delta/1000000),0) iowait_secs,
ROUND(SUM(dhs.clwait_delta/1000000),0) clwait_secs,
ROUND(SUM(dhs.ccwait_delta/1000000),0) ccwait_secs,
ROUND(SUM(dhs.apwait_delta/1000000),0) apwait_secs
FROM dba_hist_sqlstat dhs ,
v$database d
WHERE dhs.dbid = d.dbid
AND snap_id > <begin_snap>
AND snap_id <= <end_snap>
GROUP BY dhs.sql_id
ORDER BY 2 DESC)
WHERE ROWNUM <= 100;
Where <begin_snap> and <end_snap> are the start and end snapshot IDs.
The output of this statement will look similar to the following:
SQL_ID ELAPSED_TIME_SECS CPU_TIME_SECS DISK_READS BUFFER_GETS ….
------------- ----------------- --------------- ---------- ----------- ….
xxxxxxxxxxxxx 367440 42999 34838244 3795838289 ….
xxxxxxxxxxxxx 264369 170788 441127 562033013 ….
xxxxxxxxxxxxx 70370 6448 3599284 469639133 ….
xxxxxxxxxxxxx 68298 38896 7125573 1327384554 ….
xxxxxxxxxxxxx 63600 27402 20043712 587615960 ….
The elapsed time is the maximum elapsed time for all workers of a job.
Enterprise Manager can also be used to identify expensive SQL as it occurs.
4.2 Obtain Display Cursor Report for Long-Running SQL
For long-running SQL reported by the above script, run a display cursor report (with ALL +ALLSTATS option).
This displays the actual execution plan of any cursor loaded in the cursor cache. At the basic level it shows the runtime execution plan. However, the format ALL also includes extra information such as pruning, parallel execution, predicate, projection, alias and remote SQL information.
This should be run (as soon as possible) whilst the jobs are running or very shortly afterwards. If running of the Display Cursor Report is delayed, the cursor may have been flushed from memory or invalidated and no actual statistics will be available and the report will display no data.
The +ALLSTATS option (which includes IOSTATS and MEMSTATS) will include actual statistics for each execution plan step. These include:
Elapsed time
Physical reads
Buffer gets
Memory used (in PGA) for memory intensive operations (such as hash-joins, sorts, bitmap operators etc).
However, this additional information is only provided if statistics_level=ALL / _rowsource_execution_statistics = TRUE
Note that SQLT with XTRACT will also report actual row source statistics in the same circumstances. However, Display Cursor provides a simpler view of the information. It can also be run during the upgrade, while the long running SQL is in progress, without much of an overhead.
The report can be produced by running the following SQL script:
SET pages 0
SET lines 300
SET LONG 10000
SET LONGCHUNKSIZE 10000
SPOOL <sql_id>.txt
SELECT * FROM TABLE(dbms_xplan.display_cursor('<sql_id>', NULL, 'ALL +ALLSTATS'));
SPOOL OFF;
For more information see the "Display Cursor" section in My Oracle Support document "Oracle E-Business Suite Performance Guide (Document 1672174.1)"
If the SQL is no longer in memory, but is in the AWR, use the Display AWR report (DBMS_XPLAN.DISPLAY_AWR) instead.
SET pages 0
SET lines 300
SET LONG 10000
SET LONGCHUNKSIZE 10000
SPOOL <sql_id>.txt
SELECT * FROM TABLE(dbms_xplan.display_awr('<sql_id>', NULL, NULL, 'ALL'));
SPOOL OFF;
However, this does not report on actuals: it does not have a +ALLSTATS option, and there are no actual statistics for execution plan steps stored in AWR.
Note that SQLT with XTRACT method will not report on actual statistics in this case either.
Note that the display cursor and AWR reports only show the sql_text (first 1000 characters) and not the full_text.
So, if necessary, run the following SQL script to obtain the full SQL text.
SET pages 0
SET lines 300
SET LONG 10000
SET LONGCHUNKSIZE 10000
SPOOL <sql_id>.txt
SELECT sql_id, sql_text, sql_fulltext FROM v$sql
WHERE sql_id = '<sql_id>';
SPOOL OFF;
4.2.1 Automate execution of Display Cursor reports for top SQLs
The following script can be used (on linux) to automatically produce Display Cursor reports for the top SQL (by elapsed time) during each upgrade patch. It also produces Display AWR reports, which only provide the execution plan and no actual statistics, but which may be useful as a fall back if statistics_level or _rowsource_execution_statistics have not been set, or a particular SQL was no longer in cursor cache.
Although this automated process will produce many reports that are never looked at, it means that if a long running SQL (as part of a long running job) is identified, the Display Cursor report will already be available for analysis. It will not be necessary to wait until the next test run to obtain one.
There are 4 parameters that should be passed to the script:
Apps password
Number of iterations. This should be large enough that the shell script keeps producing Display Cursor reports throughout the upgrade patch. It is recommended to make this much larger than needed; the shell script can always be killed when the patch has completed.
Iteration gap in minutes. This should be short enough that all long-running SQLs of interest are reported, but not so short that it causes much overhead. 10 minutes is recommended.
Number of top SQLs to monitor (by elapsed time). This should be large enough that all long-running SQLs of interest are reported, but not so large that it causes much overhead. 20 is recommended.
So, if the apps password is represented by <apps_password>, the number of iterations is 72, the gap is 10 minutes and the number of top SQLs is 20, the script will be called as follows:
xxauto_dc.sh <apps_password> 72 10 20
The shell script loops for the specified number of iterations.
In each iteration of the loop it:
Calls the SQL script xxautodc.sql, passing the number of top SQLs parameter. This builds a temporary SQL script dispcurbatch.sql to spool and run the Display Cursor reports for the top SQL and then executes that script.
Removes the temporary SQL script dispcurbatch.sql (after it has been run).
Sleeps for the specified number of minutes (before starting the next iteration of the loop).
There are 3 sets of reports:
Disp_Curs_<sql_id>_<timestamp>.txt. This is the Display Cursor report for a top SQL in the AWR
Disp_AWR_<sql_id>_<timestamp>.txt. This is the Display AWR report for a top SQL in the AWR
CC_Disp_Curs_<sql_id>_<timestamp>.txt. This is the Display Cursor report for a top SQL in the Cursor Cache
Where
<sql_id> is the SQL ID for the particular SQL statement.
<timestamp> indicates the approximate time that the report was produced (i.e. from which iteration) in format YYMMDDHH24MI
Ensure that there is plenty of space on the file system (volume) available to accommodate the output files.
# Script xxauto_dc.sh
# Get arguments
# 1. apps password
# 2. number of iterations
# 3. iteration gap - minutes
# 4. No of Top SQLs
export appspass="$1"
export numit="$2"
export gap="$3"
export topsql="$4"
iteration=1
until [ ${iteration} -gt ${numit} ]
do
echo "Iteration ${iteration}"
sqlplus apps/${appspass} @xxautodc.sql $topsql
# remove batch file (if it exists)
rm -f dispcurbatch.sql
iteration=`expr $iteration + 1`
# Now sleep for requested number of minutes
if [ ${iteration} -gt ${numit} ]
then
echo "Do not sleep"
else
sleep ${gap}m
fi
done
The SQL script xxautodc.sql:
Reads the "Number of Top SQLs" parameter.
Runs SQL on the AWR and cursor cache to spool output to temporary SQL script file dispcurbatch.sql. The SQL script dispcurbatch.sql will contain commands to spool and run Display Cursor reports for the top SQLs (in AWR and cursor cache).
Executes the SQL script dispcurbatch.sql.
-- Script xxautodc.sql
-- example script for building script to run Display Cursor for top sql ids
-- It is absolutely essential that either statistics_level = ALL or _rowsource_execution_statistics = TRUE
-- otherwise the Display Cursor report will not contain the actual rowsource statistics (for each plan line)
-- that are essential in quickly identifying if and why a SQL has an inefficient execution plan.
WHENEVER SQLERROR CONTINUE
WHENEVER OSERROR CONTINUE
VARIABLE l_topsql NUMBER
-- pick up l_topsql and set bind var
DEFINE topsql = '&&1'
exec :l_topsql := '&topsql'
set echo off heading off feedback off verify off
set pages 0 termout off
set linesize 100
spool dispcurbatch.sql
SELECT 'WHENEVER SQLERROR CONTINUE' FROM DUAL;
SELECT 'WHENEVER OSERROR CONTINUE' FROM DUAL;
SELECT 'SET pages 0' FROM DUAL;
SELECT 'SET lines 300' FROM DUAL;
SELECT 'SET LONG 10000' FROM DUAL;
SELECT 'SET LONGCHUNKSIZE 10000' FROM DUAL;
SELECT
' ',
'SPOOL '
||'Disp_Curs_'
||sql_id
||'_'
||TO_CHAR(SYSDATE,'YYMMDDHH24MI')
||'.txt ',
'SELECT * FROM TABLE(dbms_xplan.display_cursor('''
||sql_id
||''', NULL, ''ALL +ALLSTATS'')); ',
'SPOOL OFF',
' ',
'SPOOL '
||'Disp_AWR_'
||sql_id
||'_'
||TO_CHAR(SYSDATE,'YYMMDDHH24MI') ||'.txt ',
'SELECT * FROM TABLE(dbms_xplan.display_awr('''
||sql_id
||''', format => ''ALL'')); ',
'SPOOL OFF'
FROM
(SELECT /*+ leading(d,i,dhss,dhst) */
dhss.sql_id,
ROUND(SUM(dhss.elapsed_time_delta/1000000),0) elapsed_time_secs -- needs sum because of different plan (hash)s for same SQL
FROM v$database d,
v$instance i,
dba_hist_sqlstat dhss,
dba_hist_sqltext dhst
WHERE dhss.dbid = d.dbid
AND dhss.instance_number = i.instance_number
AND dhst.dbid = dhss.dbid
AND dhst.sql_id = dhss.sql_id
AND dhst.command_type != 47 -- exclude anonymous blocks
AND dhss.snap_id = (SELECT /*+ push_subq */ MAX(dhsn.snap_id)
FROM dba_hist_snapshot dhsn,
v$database d,
v$instance i
WHERE dhsn.dbid = d.dbid
AND dhsn.instance_number = i.instance_number) -- Just report on latest completed snapshot
GROUP BY dhss.sql_id
ORDER BY 2 DESC)
WHERE rownum <= :l_topsql -- limit to the top SQLs (ordered by elapsed time).
;
-- Also get top SQLs from cursor cache - to fill any gaps
SELECT
' ',
'SPOOL '
||'CC_Disp_Curs_'
||sql_id
||'_'
||TO_CHAR(SYSDATE,'YYMMDDHH24MI')
||'.txt ',
'SELECT * FROM TABLE(dbms_xplan.display_cursor('''
||sql_id
||''', NULL, ''ALL +ALLSTATS'')); ',
'SPOOL OFF'
FROM
(SELECT
ss.sql_id,
ROUND(SUM(ss.delta_elapsed_time/1000000),0) elapsed_time_secs
FROM v$sqlstats ss,
v$sql s
WHERE s.sql_id = ss.sql_id
AND s.child_address = ss.last_active_child_address
AND s.command_type != 47 -- exclude anonymous blocks
GROUP BY ss.sql_id
ORDER BY 2 DESC)
WHERE rownum <= :l_topsql -- limit to the top SQLs (ordered by elapsed time).
;
spool off
set termout on echo on verify on heading on feedback on
set termout off
-- Now run the script that has been built above
@@dispcurbatch
set termout on
exit
4.3 Obtain SQL Monitor Report for SQL Using Parallel Query/DML
Obtain SQL Monitor reports for any long-running SQL that uses Parallel Query or DML.
The main advantage of this is that it gives a good view of how parallel SQL/DML performs across stages of the plan and parallel child (secondary) processes.
It can also give a good idea of the actual executions and row counts for each execution plan line, even if the "statistics_level" initialization parameter is not set to ALL (or "_rowsource_execution_statistics" is not set to TRUE) at the time the SQL is executed.
It can be run during the upgrade, while the long running SQL is in progress (or shortly afterwards), without much of an overhead.
It can be produced by running the following SQL script:
set trimspool on
set trim on
set pages 0
set long 10000000
set longchunksize 10000000
set linesize 200
set termout off
spool sql_monitor_for_<sql_id>.htm
variable my_rept CLOB;
BEGIN
:my_rept := dbms_sqltune.report_sql_monitor(sql_id => '<sql_id>', report_level => 'ALL', type => 'HTML');
END;
/
print :my_rept
spool off;
set termout on
For more information, see the "SQL Monitor Report" section in My Oracle Support document "Oracle E-Business Suite Performance Guide (Document 1672174.1)".
4.4 Identify when SQL ran
The following SQL will show when a particular piece of SQL ran (i.e. between which snapshots). This is useful in matching SQLs to jobs.
SELECT
dhs.sql_id,
dsn.snap_id,
dsn.begin_interval_time,
dsn.end_interval_time,
ROUND(SUM(dhs.elapsed_time_delta/1000000),0) elapsed_time_secs
FROM dba_hist_sqlstat dhs ,
v$database d ,
dba_hist_snapshot dsn
WHERE dhs.dbid = d.dbid
AND dsn.snap_id = dhs.snap_id
AND dsn.dbid = dhs.dbid
AND dsn.instance_number = dhs.instance_number
AND dhs.sql_id = '<sql_id>'
AND dsn.snap_id > <start_snap>
AND dsn.snap_id <= <end_snap>
GROUP BY dhs.sql_id, dsn.snap_id, dsn.begin_interval_time, dsn.end_interval_time
ORDER BY dsn.snap_id;
Where <start_snap> and <end_snap> are the start and end snapshot IDs.
The output of this statement will look similar to the following:
SQL_ID SNAP_ID BEGIN_INTERVAL_TIME END_INTERVAL_TIME ELAPSED_TIME_SECS
------------- ------- ----------------------- ----------------------- -----------------
xxxxxxxxxxxxx xxxx 04-JAN-13 23.00.25.5560 05-JAN-13 00.00.21.1620 23123
xxxxxxxxxxxxx xxxx 05-JAN-13 00.00.21.1620 05-JAN-13 01.00.38.2680 37145
4.5 Match Long-Running SQL to Jobs
The following SQL will give jobs running at any point between two time intervals, with the longest running jobs first. This is useful in matching SQLs to jobs.
Where:
<period_start> = start of period to report, in format YYYYMMDDHH24MISS
<period_end> = end of period to report, in format YYYYMMDDHH24MISS
Note that the job must have completed for it to be reported by this SQL script.
SELECT
phase,
phase_name,
product,
job_name,
max_elapsed_time,
min_start_time,
max_end_time,
workers
FROM
(SELECT
phase,
phase_name,
product,
job_name,
MAX(elapsed_time) elapsed_time_unconv,
LPAD(FLOOR(MAX(elapsed_time)*24), 4)||':'||
LPAD(FLOOR((MAX(elapsed_time)*24-floor(MAX(elapsed_time)*24))*60), 2, '0')||':'||
LPAD(MOD(ROUND(MAX(elapsed_time)*86400), 60), 2, '0') max_elapsed_time,
INITCAP(TO_CHAR(MIN(start_time),'MON DD HH24:MI', 'NLS_DATE_LANGUAGE = american')) min_start_time,
INITCAP(TO_CHAR(MAX(end_time),'MON DD HH24:MI', 'NLS_DATE_LANGUAGE = american')) max_end_time,
count(worker_id) workers
FROM ad_task_timing
WHERE session_id = <session_id>
AND ( start_time BETWEEN TO_DATE('<period_start>','YYYYMMDDHH24MISS') AND TO_DATE('<period_end>','YYYYMMDDHH24MISS')
OR
NVL(end_time, start_time+elapsed_time) BETWEEN TO_DATE('<period_start>','YYYYMMDDHH24MISS') AND TO_DATE('<period_end>','YYYYMMDDHH24MISS') )
GROUP BY phase, phase_name, product, job_name)
ORDER BY elapsed_time_unconv DESC;
The output of this statement will look similar to the following:
Phase max elapsed
phase name product job_name time min_start_time max_end_time workers
----- ------ ------- ----------------- ----------- -------------- ------------ -------
255 upg+80 zx zxaptrxmigupd.sql 2:43:47 Mar 13 04:22 Mar 13 07:06 64
255 upg+80 ap apxlaupg.sql 1:38:57 Mar 13 04:03 Mar 13 05:42 1
To find upgrade AD jobs that are in progress, use adctrl option 1 (Show worker status).
Their start times can be determined by looking at the patch log file, e.g.:
$ cat u_merged.log|grep -A2 cstpostimportaad.sql
Assigned: file cstpostimportaad.sql on worker 48 for product bom username BOM. Time is: Fri Mar 22 2013 22:48:54
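Assignment lines like the one above can also be pulled out of a merged patch log with a small shell helper. This is a sketch only: the function name is hypothetical, and it assumes the exact "Assigned: file ... Time is: ..." line format illustrated above.

```shell
# Hypothetical helper: list the file, worker and start time of every job
# assignment recorded in a merged patch log (line format as illustrated above).
job_start_times() {
  logfile="$1"
  # Emit "<file> worker=<n> <timestamp>" for each "Assigned:" line
  sed -n 's/^Assigned: file \([^ ]*\) on worker \([0-9]*\) .*Time is: \(.*\)$/\1 worker=\2 \3/p' "$logfile"
}

# Example (the merged log file name will vary per environment):
#   job_start_times u_merged.log
```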
4.6 Report on CBO Statistics for All Oracle E-Business Suite Tables
Report on the CBO statistics for Oracle E-Business Suite tables during the upgrade, before adsstats.sql is run (in 12.2.0 CUP).
The script adsstats.sql will populate the statistics correctly before the end of the upgrade, so the fact that tables had incorrect statistics during the upgrade will not be visible afterwards. It may therefore not be possible to see that a table had null, zero or inaccurate CBO statistics, and that this was the reason for an expensive execution plan.
The following script will report on the CBO statistics for all Oracle E-Business Suite tables:
SELECT
owner,
table_name,
num_rows,
TO_CHAR(last_analyzed,'DD-MON-YYYY HH24:MI:SS') last_analyzed
FROM all_tables
WHERE owner IN (SELECT upper(oracle_username) sname
FROM fnd_oracle_userid
WHERE oracle_id BETWEEN 900
AND 999
AND read_only_flag = 'U'
UNION ALL
SELECT DISTINCT upper(oracle_username) sname
FROM fnd_oracle_userid a,
fnd_product_installations b
WHERE a.oracle_id = b.oracle_id )
ORDER BY owner, table_name;
The output of this statement will look similar to the following:
OWNER TABLE_NAME NUM_ROWS LAST_ANALYZED
---------------------- ---------------------------- ---------- ------------------------
ABM ABM_ACC_MAP_SUM_REP 0 06-JAN-2013 08:46:33
ABM ABM_ACT_ACC_RU_DAT 0 06-JAN-2013 08:46:35
ABM ABM_ACT_STA_RU_DAT 0 06-JAN-2013 08:46:36
ABM ABM_ACT_TAGS 0 06-JAN-2013 08:46:37
ABM ABM_API_TEMPLATES 38 06-JAN-2013 08:44:53
ABM ABM_API_TEMPLATES_TL 722 06-JAN-2013 08:41:16
ABM ABM_API_TEMPLATE_ATTRIBUTES 248 06-JAN-2013 08:44:34
5. After the Oracle E-Business Suite Upgrade
These are diagnostics to be obtained directly after each of the main patches/RUPs have completed.
This will be directly after each of 12.2.0 CUP, Online Patching Enablement, R12.AD.C.Delta.n, R12.TXK.C.Delta.n and 12.2.n RUP.
5.1 AD Job Timing Report
This reports:
Number of successful, failed, deferred, re-started or skipped jobs.
The top 100 time consuming jobs.
The failed, deferred, re-started and skipped jobs.
The timing of each upgrade phase, with the total number of jobs, and the number deferred, re-started and skipped.
However, it only reports the Top 100 Time Consuming Jobs, and for AD Parallel jobs it considers each worker to be a different job. This means it may only report a handful of jobs. An alternative is to query the AD_TASK_TIMING table for long running jobs directly.
When ADOP, AutoPatch or AD Administration is run, it automatically generates an AD Job Timing report (adt.lst). The contents of this report can be accessed from Oracle Application Manager, or the report can be obtained for completed upgrade sessions from the $APPL_TOP/admin/<SID>/out directory.
The AD Job Timing Report can also be run for AD Administration jobs from the command line.
$ cd $APPL_TOP/admin/<SID>/out
$ sqlplus <apps_username>/<apps_password> @$AD_TOP/admin/sql/adtimrpt.sql <session_id> <output_file>
Where <session_id> is the upgrade session for which timing statistics are required, and <output_file> is the name of the file to which the statistics will be written.
$AD_TOP/admin/sql/adtimdet.sql can also be run in a similar way. This gives details on all jobs ordered by phase or elapsed time. This is useful for finding out how long any job took to run, and also where the "Top 100 Time Consuming Jobs" is dominated by multiple workers of a few jobs.
Note that the SQL scripts may be in $AD_TOP/sql (not admin/sql).
See "Oracle E-Business Suite Maintenance Guide Release 12.2" for more information.
5.2 Identify Long-Running Upgrade Jobs
Note that the "Top 100 Time Consuming Jobs" section of the standard adtimrpt.sql report lists all workers for AD Parallel jobs separately. So the top 100 can be dominated by a handful of jobs.
The following SQL can be used to list all jobs in order of maximum elapsed time (descending), but reporting all workers of an AD Parallel job in one line. It only reports completed jobs.
Note that <session_id> is the ID for the upgrade "session" and not a user session.
Be aware of jobs that are called multiple times in the same phase (e.g. akload.class, LoadMap.class, XDOLoader.class).
SELECT
phase,
phase_name,
product,
job_name,
max_elapsed_time,
min_start_time,
max_end_time,
workers
FROM
(SELECT
phase,
phase_name,
product,
job_name,
MAX(elapsed_time) elapsed_time_unconv,
LPAD(FLOOR(MAX(elapsed_time)*24), 4)||':'||
LPAD(FLOOR((MAX(elapsed_time)*24-floor(MAX(elapsed_time)*24))*60), 2, '0')||':'||
LPAD(MOD(ROUND(MAX(elapsed_time)*86400), 60), 2, '0') max_elapsed_time,
INITCAP(TO_CHAR(MIN(start_time),'MON DD HH24:MI', 'NLS_DATE_LANGUAGE = american')) min_start_time,
INITCAP(TO_CHAR(MAX(end_time),'MON DD HH24:MI', 'NLS_DATE_LANGUAGE = american')) max_end_time,
count(worker_id) workers
FROM ad_task_timing
WHERE session_id = <session_id>
GROUP BY phase, phase_name, product, job_name)
ORDER BY elapsed_time_unconv DESC;
The output of this statement will look similar to the following:
Phs MaxElpsd Min Max Num
Num Phase Prod Job Time Start Time End Time Wrkr
----- ------------ ----- ----------------- ---------- ------------- ------------- -----
255 upg+80 zx zxaptrxmigupd.sql 2:43:47 Mar 13 04:22 Mar 13 07:06 64
351 last+63 ad adsstats.sql 2:09:42 Mar 13 11:14 Mar 13 13:23 1
255 upg+80 ap apxlaupg.sql 1:38:57 Mar 13 04:03 Mar 13 05:42 1
251 upg+76 ap apilnupg.sql 0:55:45 Mar 13 01:24 Mar 13 02:19 64
241 upg+57 ont ontup251.sql 0:25:07 Mar 13 00:11 Mar 13 00:36 1
….
5.3 Obtain File versions for Long-Running Jobs
When an SR or Bug is raised, Oracle Support and Development will ask for the version of the job (file) that has the issue, or the version of code (file) used by the job.
It may be necessary to check what is done in a particular job, or the exact metadata/object definition in a file. When doing this, be sure to identify the version of the files used in the upgrade.
For example, for an R12.2.n upgrade, the version present in the file system or database after all the upgrade steps – including R12.2.n RUP and post upgrade steps – may be different from the one used during the R12.2.0 upgrade.
To find out the version of a file used during the R12.2.0 upgrade, check the unified driver used for that part of the upgrade: the same principle applies to the 12.2.n RUP. For example:
$ cat u_merged.drv | grep -A5 cstpostimportaad.sql
Note: Do not rely on the version of the file or object in the file system or database, as it could be a later version applied by a subsequent patch.
5.4 Obtain AWR Reports
Obtain AWR reports for:
The whole period that the upgrade is running.
For the duration of long-running jobs (i.e. between the snapshots taken just before the job starts and just after it finishes).
Each individual snapshot.
Initially, the following can be identified:
Long running SQL (SQL Statistics/SQL Ordered By)
Contention and bottlenecks (Top 5 Timed Foreground Events/Foreground Wait Events)
CPU utilization
However, many other useful statistics can be obtained from AWR reports depending on the situation, e.g.:
Advisory Statistics on SGA/PGA sizing etc.
Segment Statistics to identify the objects with the highest physical/logical reads or waits (Row Lock, ITL, Buffer Busy).
Wait Event Histogram to identify the distribution of wait times and how serious contention is.
If there are high levels of particular waits, first check whether they occur only for particular long-running SQLs and jobs before assuming there is a system configuration or resource issue.
awrrpt.sql will typically be used to generate the AWR reports; always choose the HTML report type. On an Oracle RAC system, reporting on the single instance where the upgrade is run (awrrpt.sql, or awrrpti.sql to specify the instance) will usually suffice, as the upgrade runs on one Oracle RAC node only.
AWR reports can be automated. This is useful if producing a large number of AWR reports, particularly for successive snapshots. See the "Automating AWR Reports" section in My Oracle Support document "Performance Diagnosis with Automatic Workload Repository (Document 1674086.1)".
Note that some fixed objects and dictionary objects (particularly WRH$_LATCH_CHILDREN) will have grown significantly during the upgrade. This is especially the case if statistics_level = ALL, or there is a high retention period or a short snapshot interval.
So fixed object and dictionary statistics may need to be gathered before running AWRs.
See My Oracle Support document "Performance Diagnosis with Automatic Workload Repository (Document 1674086.1)" for more information.
5.5 SQLHC or SQLT
Run SQLHC or SQLT for the long running SQLs.
There are occasions when SQLHC should be used rather than SQLT:
For example a customer may object to installing SQLT on their production environment or they may be concerned about the performance impact on a production system.
It may be necessary to obtain more information on a SQL during an upgrade patch/pack.
Or the customer may not have installed SQLT yet.
SQLT is more complicated and may require more explanation, so it may be quicker and simpler to ask for SQLHC.
There are situations where SQLT is preferred over SQLHC:
The SQL ID (and runtime execution plan) may not be in memory or AWR. So SQLHC will not provide any output (and SQLT with XPLAIN or XECUTE method is required).
More information is required.
5.6 SQLHC
When long-running SQLs with possibly sub-optimal execution plans have been identified, use SQLHC (sqlhc.sql).
This provides html reports, which contain most of the data required to identify the reasons why the CBO chose a sub-optimal execution plan. e.g. Health Check (Observations), CBO Statistics, Metadata for tables and indexes, Execution plans, Runtime Statistics, System/Instance (Initialization) Parameters, SQL Profiles/Baselines associated with the SQL and SQL Activity Report.
It saves a huge amount of time by providing all information in one action (and in one zip file sqlhc_YYYYMMDD_HHMISS_SQLID.zip).
It does not require any installation on the environment (other than the script file itself) and has a negligible performance impact. If necessary it can be run whilst the upgrade is in progress.
It is particularly useful if engineers/analysts do not have access to the instance.
Note that SQLHC will only provide actual row source statistics if the "statistics_level" initialization parameter is set to ALL (or "_rowsource_execution_statistics" is set to TRUE) at the time the SQL is executed.
See the following My Oracle Support documents for more information:
Oracle E-Business Suite Performance Guide (Document 1672174.1), section "SQLHC"
SQL Tuning Health-Check Script (SQLHC) (Document 1366133.1)
5.7 SQLT with XTRACT method
This uses a significant amount of system resources, so should not be run during the upgrade. Instead, Display Cursor report and SQLHC can be used to obtain much of the information that is included in the SQLT output.
The SQLT provides execution plans, CBO statistics, database parameters, CBO parameters, performance statistics, and metadata (schema object definitions and so on) associated with the SQL.
Depending on the SQLT parameters it can also contain supporting information such as AWR reports, ASH Reports, ADDM Reports, CBO (10053) trace, SQL Monitor report, EBS Initialization Parameters healthcheck report (bde_chk_cbo), Test Case Builder and SQL Tuning Advisor.
It is particularly useful for passing information to analysts/engineers, if access to the instance is not possible. Even if access is possible, it gives all the information in one place (one zip file).
SQLT should be run on the same environment where the performance issue was observed, and as soon after the relevant program/process as possible.
Be aware of any actions that may alter the data that SQLT is capturing (that is, actions that take place after the observed performance issue, but before SQLT is run). For example, statistics being gathered, removed, locked, imported, or data being deleted from temporary or interface tables.
To run SQLT with XTRACT, the sql_id or hash_value will be needed.
In Oracle Database 11g, TKPROF output gives both the sql_id and hash value.
In 11g, the raw trace gives the sql_id (sqlid=); in 10g and before, it gives the hash value (hv=).
Other methods of obtaining the sql_id include using AWR, Oracle Enterprise Manager, and directly querying V$SQL.
The XTRACT method is needed because:
It obtains the runtime execution plan from memory or AWR
It is RAC aware.
If the SQL was executed when "statistics_level" was set to ALL (or "_rowsource_execution_statistics" was set to TRUE), then it will contain actual row source statistics (I/O, buffer gets, elapsed time) for each execution plan line, provided the cursor is still in the cursor cache.
Note that SQLT runs AWR and ASH reports. Some dictionary objects (particularly WRH$_LATCH_CHILDREN, especially if statistics_level is set to ALL) will have grown significantly during the upgrade. So, it may be necessary to gather fixed object and dictionary statistics before running SQLT.
SQLT can take quite a while to run.
See the following My Oracle Support documents for more information:
Oracle E-Business Suite Performance Guide (Document 1672174.1), section "SQLT"
All About the SQLT Diagnostic Tool (Document 215187.1)
5.7.1 Reducing workload / time taken producing SQLT
To reduce the workload, it is recommended that the following are run (from SQL*Plus) before running sqltxtract.sql:
To disable the Test Case Builder (TCB) and limit the time spent in SQL Tuning Advisor:
EXEC sqltxplain.sqlt$a.set_param('test_case_builder', 'N');
EXEC sqltxplain.sqlt$a.set_param('sta_time_limit_secs', '30');
To disable the automatic export of a test case repository:
EXEC sqltxplain.sqlt$a.set_param('export_repository', 'N');
If SQLT still takes a long time, and the schema objects used by the SQL contain a large number of sub-partitions, the granularity of the data collected can be reduced as follows:
EXEC sqltxplain.sqlt$a.set_param('c_gran_segm', 'PARTITION');
EXEC sqltxplain.sqlt$a.set_param('c_gran_cols', 'PARTITION');
EXEC sqltxplain.sqlt$a.set_param('c_gran_hgrm', 'PARTITION');
Note that these commands can all be run as APPS. They do not need to be run as user SQLTXPLAIN.
These values are stored in a table called SQLTXPLAIN.SQLI$_PARAMETER. Once they are set, they do not need to be re-set for each execution of SQLT. The current values can be checked by querying this table.
To reduce the time further, the counting of rows on tables can be disabled by running the following. However, information on the actual number of rows in each table will be lost.
EXEC sqltxplain.sqlt$a.set_param('count_star_threshold', '0');
6. Online Patching Diagnostics
6.1 Online Patching Enablement - Specific Diagnostics
The Online Patching Enablement patch is applied using AutoPatch (adpatch). In addition to the general diagnostics above the output from the following script will be useful during Online Patching Enablement:
$ sqlplus apps @$AD_TOP/sql/ADZDSHOWDDLS.sql
6.2 Online Patching (ADOP) Logs and Diagnostics
All the ADOP logs are located on the non-editioned file system (fs_ne) in the /fs_ne/EBSapps/log/adop directory e.g.
/u01/PROD/fs_ne/EBSapps/log/adop
Each cycle of ADOP creates a subdirectory corresponding to the patch session ID, e.g.
/u01/PROD/fs_ne/EBSapps/log/adop/n
Where n is the session ID.
It is easiest and quickest to produce a zip of the entire directory.
The main files of interest are the ADOP logs (e.g. adop_YYYYMMDD_HHMISS.log), but the adzdshowlog.out, adworker*.log, u*.log, u*.lgi and admrgpch*.log files are also useful and reside under the same path.
When running ADOP the on screen terminal output will mention which ADOP session ID is in use. e.g.
/u01/PROD/fs_ne/EBSapps/log/adop/9/apply_20121011_024437
The session ID directory will contain a trace file for each phase (e.g. adop_20130316_091340.log) and a corresponding log directory for each phase containing other logs (e.g. apply_20130316_091340).
The timestamp of the trace file and the corresponding log directory will match.
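Since it is easiest to supply the entire session directory, the collection step can be sketched as a small shell function. This is a hedged sketch: the function name is hypothetical, and the base path and session ID are the examples used in this section and will differ per environment. It uses tar/gzip; an equivalent zip -r of the same directory is equally valid.

```shell
# Sketch: archive an entire adop session log directory in one step.
# Base path and session id below are examples; adjust for your environment.
ADOP_LOG_BASE=/u01/PROD/fs_ne/EBSapps/log/adop   # non-editioned file system
SESSION_ID=9                                     # from "adop -status" output

archive_adop_logs() {
  base="$1"; sid="$2"
  # -C keeps paths in the archive relative to the adop log base directory
  tar -czf "adop_session_${sid}_logs.tar.gz" -C "$base" "$sid"
}

# Example:
#   archive_adop_logs "$ADOP_LOG_BASE" "$SESSION_ID"
```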
6.3 Non ADOP Logs
The same log directory for each phase (e.g. apply_20130316_091340) also contains some AD Utility and worker logs.
These include adrelink.log, adlibin.log, adlibout.log and adworknnn.log. The most useful are the adworknnn.log files, which show the jobs run by each AD Parallel worker along with timestamps.
6.4 Online Patching Log Analyzer Utility
This is delivered in R12.AD.C.Delta.n (since R12.AD.C.Delta.4).
This utility analyzes adop log directories for errors and warnings, and displays messages to help the user quickly identify any problems that may have occurred during an adop run. It thereby offers an alternative to reviewing log files manually.
The Log Analyzer utility can be run without options:
To scan all log directories of the latest adop session for errors:
$ adopscanlog
The utility can also be run with various options. Examples include:
To scan log directories relating to the latest run of adop in the latest session:
$ adopscanlog -latest=yes
To scan log directories relating to the latest run of the specified phase, in the latest session:
$ adopscanlog -latest=yes -phase=<phase_name>
To scan all log directories of a given session (represented by a session_id) for errors:
$ adopscanlog -session_id=<session_id>
To see a complete list of supported parameters:
$ adopscanlog -help
6.5 adzdshowlog.out
This reports the contents of the AD_ZD_LOGS table, which contains messages on the progress of online patching with timestamps. The contents of this table are truncated every time the cleanup/prepare phase is run.
This can also be obtained for previous phases by running the following script:
$ sqlplus apps @$AD_TOP/sql/ADZDSHOWLOG.sql
Or by running the SQL:
SELECT * FROM ad_zd_logs ORDER BY log_sequence desc;
6.6 Check the current status of the adop cycle
To check the status of the current ADOP cycle, source the run file system environment file and run the command:
adop -status
Usage:
adop -status                  generates a summary report
adop -status <session_id>     generates a summary report for that session ID
adop -status -detail          generates a detailed report
6.7 SQL to determine status of ADOP phases
The following SQL statement will show the status for each adop phase along with its corresponding session id.
SELECT adop_session_id, prepare_status, apply_status, finalize_status,
cutover_status, cleanup_status, abort_status, status, abandon_flag, node_name
FROM ad_adop_sessions
ORDER BY adop_session_id;
This is effectively a history of online patching in an environment.
The following statuses apply to all phases:
Y : the phase is done
N : the phase has not been completed
X : the phase is not applicable
R : the phase is running (in progress)
F : the phase has failed
P : at least one patch has already been applied for the session ID (applies only to the APPLY phase)
C : the ADOP session has completed
Note: Numerical statuses are only relevant for the cutover phase. These status values are updated when a step has completed, and are as follows:
0 : cutover/force_shutdown has started
1 : "force_shutdown" step has successfully executed
3 : "db_cutover" step has successfully executed
4 : "fs_cutover" step has successfully executed
6 : "force_startup" step has successfully executed
Cutover statuses
cutover_status='Y' 'COMPLETED'
cutover_status not in ('N','Y','X') and status='F' 'FAILED'
cutover_status='0' 'CUTOVER STARTED'
cutover_status='1' 'SERVICES SHUTDOWN COMPLETED'
cutover_status='3' 'DB CUTOVER COMPLETED'
cutover_status='D' 'FLIP SNAPSHOTS COMPLETED'
cutover_status='4' 'FS CUTOVER COMPLETED'
cutover_status='5' 'ADMIN STARTUP COMPLETED'
cutover_status='6' 'SERVICES STARTUP COMPLETED'
cutover_status='N' 'NOT STARTED'
cutover_status='X' 'NOT APPLICABLE'
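The cutover status codes above can be turned into a small lookup, e.g. for use in a monitoring wrapper script. This is a sketch with a hypothetical function name; note that the real FAILED case also depends on the session STATUS column, as the second mapping line above shows.

```shell
# Sketch: map AD_ADOP_SESSIONS.CUTOVER_STATUS codes to the descriptions above.
cutover_status_desc() {
  case "$1" in
    Y) echo "COMPLETED" ;;
    0) echo "CUTOVER STARTED" ;;
    1) echo "SERVICES SHUTDOWN COMPLETED" ;;
    3) echo "DB CUTOVER COMPLETED" ;;
    D) echo "FLIP SNAPSHOTS COMPLETED" ;;
    4) echo "FS CUTOVER COMPLETED" ;;
    5) echo "ADMIN STARTUP COMPLETED" ;;
    6) echo "SERVICES STARTUP COMPLETED" ;;
    N) echo "NOT STARTED" ;;
    X) echo "NOT APPLICABLE" ;;
    # Any other code indicates FAILED when the session STATUS column is 'F'
    *) echo "FAILED" ;;
  esac
}
```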
7. Fixed Object and Dictionary Statistics
There can sometimes be issues with Fixed Object and Dictionary Statistics in Online Patching Enablement, R12.2.n RUPs (online patching) or when producing AWR reports or SQLT.
Fixed objects (X$ tables) reside in memory only and typically record information about the instance or memory structures. The V$ dynamic performance views are defined on top of the X$ tables, e.g. V$SQL and V$SQL_PLAN.
Data dictionary tables (e.g. SYS.USER$, SYS.TS$, SYS.SEG$, SYS.OBJ$, SYS.TAB$, SYS.FILE$) are stored in data files like normal application tables.
Some fixed objects and dictionary objects will have grown significantly during the upgrade.
There are also changes to fixed objects due to Online Patching Enablement (and the underlying Edition-Based Redefinition). As a result, internal SQL in Online Patching Enablement, R12.2.n RUPs and other online patches can sometimes be long running. Gathering fixed object statistics can help in these circumstances, particularly on editioning objects.
There may be additional specific circumstances during the upgrade where fixed object or dictionary statistics need to be gathered (such as before importing schema statistics or running SQLT or AWR reports when AWR has grown significantly).
If there is internal SQL (on V$ views or on SYS/SYSTEM objects) appearing high in AWR and TKPROF reports, it is likely that dictionary and fixed object statistics need to be gathered.
Note that the FND_STATS API does not gather statistics for dictionary or fixed objects. The DBMS_STATS APIs need to be used.
The commands are:
execute DBMS_STATS.GATHER_FIXED_OBJECTS_STATS(no_invalidate=>FALSE);
execute DBMS_STATS.GATHER_DICTIONARY_STATS(estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, options => 'GATHER AUTO', no_invalidate => FALSE);
Usually the "no_invalidate=>FALSE" argument will not be needed. However, the procedures DBMS_STATS.SET_DATABASE_PREFS, SET_GLOBAL_PREFS, SET_SCHEMA_PREFS or SET_TABLE_PREFS could have been used to set the default value of NO_INVALIDATE to TRUE.
If there are only a handful of internal SQLs with inefficient execution plans and only a few objects then specific objects could be targeted rather than gathering all dictionary or fixed object statistics.
e.g.
exec dbms_stats.gather_table_stats(ownname=>'SYS', tabname=>'OBJ$', no_invalidate=>false);
exec dbms_stats.gather_table_stats(ownname=>'SYS', tabname=>'X$KQFP', no_invalidate=>false);
See the following My Oracle Support documents for more information:
Best Practices for Gathering Statistics with Oracle E-Business Suite (Document 1586374.1)
Fixed Objects Statistics (GATHER_FIXED_OBJECTS_STATS) Considerations (Document 798257.1)
Change Log
TRCANLZR (TRCA): SQL_TRACE/Event 10046 Trace File Analyzer - Tool for Interpreting Raw SQL Traces (NO LONGER SUPPORTED - Use SQLTXPLAIN sqltrcanlzr.sql) (Doc ID 224270.1)
APPLIES TO:
Oracle Application Object Library - Version 11.5.10.0 to 12.1.3 [Release 11.5 to 12.1]
Oracle Database - Enterprise Edition - Version 9.2.0.1 and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Information in this document applies to any platform.
PURPOSE
Trace Analyzer, also known as TRCANLZR or TRCA, is a deprecated tool provided by the Oracle Server Technologies Center of Expertise (ST CoE). TRCA takes as input one or more SQL traces generated by Event 10046 and outputs a diagnostics report in two formats (HTML and text). These reports are commonly used to diagnose processes performing poorly.
The functionality provided by TRCANLZR is now included in SQLTXPLAIN as a script sqltrcanlzr.sql. A standalone installation of TRCANLZR is not supported.
TRCA identifies expensive SQL in the provided trace(s), then connects to the database and collects the corresponding explain plans, Cost-Based Optimizer (CBO) statistics, metadata, configuration parameters, and similar elements that influence the performance of the process being analyzed.
For best results, TRCA must be installed and used on the same system and database where the trace is generated. Since installing TRCA on a production system is not always possible, TRCA is capable of analyzing a trace using a different system. Refer to the included instructions for details.
TROUBLESHOOTING STEPS
For 9i, 10g and 11g instances, download the tool trca.zip from the attachments, and read trca_instructions.html, which is included in the zip file.
If the zip file seems corrupted (Internet Explorer), or for other issues during the installation, please add a comment to this article.
For sample output reports, download trca_samples.zip.
REFERENCES
NOTE:215187.1 - All About the SQLT Diagnostic Tool
NOTE:39817.1 - Interpreting Raw SQL_TRACE output