Saturday, May 29, 2021

Oracle E-Business Suite Performance Guide (Doc ID 1672174.1)

 

In This Document

1 Purpose

The purpose of the E-Business Suite Performance Tuning Guide is to give an overview of:

  • The typical process flow for resolving performance issues on Oracle E-Business Suite.

  • How to determine which diagnostic tools and data to use.

  • The methods available for gathering diagnostics.

  • The common mistakes to avoid.

2 Introduction

This is the first in a series of documents.

This document gives an overview of how to approach an E-Business Suite performance issue, where to start and how to proceed.

It references other documents that give the detail on each of the main diagnostic tools, the key actions that can be taken to resolve issues and the key areas for performance tuning.

Where there is already comprehensive and up-to-date content elsewhere (e.g. My Oracle Support, Documentation Library or Blogs), this is referenced rather than repeated in these documents.

The aim is to give the user:

  • The minimum number of documents to read and reference.

  • Full action plan templates or instructions including any caveats, pre-requisites or important notes.

  • A flow through each of the diagnostic tools and key actions.

without repeating significant content.

3 Solution Flow Diagram

The following diagram is a very simplified view of the process.

There are many other paths and actions that can be taken depending on the information received and the analyst's level of experience. There are also other diagnostics that can be obtained. However, this diagram covers the most likely paths.

Solution Flow Diagram

4 Where to Start

The usual place to start is the SQL Trace and TKPROF. However, this is not always the best place.

If the performance issue is not (yet) linked to a specific module or is system wide then an AWR report would be the best starting point.

If the specific SQLs with the performance issues are known, the SQL is still in memory (the cursor is still loaded in the cursor cache) and the "statistics_level" initialization parameter is set to ALL (or the initialization parameter "_rowsource_execution_statistics" is set to TRUE) when the SQLs are executed, then the display cursor or SQL Monitor report can be a good place to start.

5 Reducing Iterations

5.1 Sufficient Information

It is important to get the balance right between getting all the necessary information on the first pass and asking for too much information.

Round trips between parties should be minimized: each one can cause significant delay, especially across different time zones, and can create a lot more work.

However, gathering too much data can:

  • Delay the start of investigation/analysis

  • Create unnecessary work for the other party (customer) and possibly impact working relationships

So, if the module or program is known, then typically ask for the AWR report and the SQL Trace/TKPROF.

However, many of the other diagnostics should not be requested until later.

  • Some, like the Forms Trace (Forms Runtime Diagnostics) or Reports Client Process Trace are rarely needed.

  • Others, like the SQLT, cannot be requested until further information is obtained (like the specific SQL/sql_id).

5.2 Correct and Complete Diagnostics

To avoid repeating requests for the same information:

  • Specify exactly what is needed in detail and any common mistakes to avoid. There are some useful "Action Plan" templates in this document.

  • Review the files as soon as they are received to check that they are correct / complete.

For example:

  • The SQL Trace should be complete, for the correct program / module and environment.

  • The SQLT should normally be produced with XTRACT method and run just after the program has completed. It must be for the correct SQL and from the correct environment.

  • The AWR report / Operating Statistics must be for the correct period of time (snapshots).

  • The AWR should contain the expected SQL/program and be from the correct environment.

6 Solution Process Overview

6.1 SQL Trace / TKPROF

Around 90% of performance issues on Oracle E-Business Suite are SQL related.

So, if the module/program is known, then start with the SQL Trace (Level 8 – with waits) and TKPROF.

The timed_statistics initialization parameter must be set to TRUE, so that the SQL Trace contains row source statistics (STAT lines) and CPU times.

In most circumstances this will show the actual runtime execution plan with row source statistics, so it should be possible to identify any inefficient execution plan steps from the TKPROF. However, it does not show which predicates are applied on each execution plan line.

Row source statistics are the actual (not estimated) statistics (elapsed time, physical reads/writes, consistent reads) for each execution plan line.

It will also identify any time consuming wait events.

For more information on obtaining and interpreting SQL Trace and TKPROF see My Oracle Support document "Oracle E-Business SQL Trace and TKPROF Guide (Document 1674024.1)".

For more information on identifying SQLs with performance issues from the SQL Trace / TKPROF see the "What is the challenging SQL? > 1 From Trace" section in My Oracle Support document "Oracle E-Business Suite SQL Tuning Tips for Customers (Document 2098522.1)".
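As a sketch of how such a trace might be captured for another session (the SID and SERIAL# values are illustrative; look them up in V$SESSION first), the trace can be enabled with DBMS_MONITOR and then formatted with TKPROF:

```sql
-- Enable a level-8 (with wait events) SQL Trace for another session.
-- session_id/serial_num here are illustrative values.
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(
    session_id => 1234, serial_num => 56789,
    waits => TRUE, binds => FALSE);
END;
/

-- ... let the slow program or request run ...

BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 1234, serial_num => 56789);
END;
/

-- Then format the raw trace file from the OS shell, e.g.:
--   tkprof <SID>_ora_<spid>.trc output.prf sys=no sort=prsela,exeela,fchela
```

The sort options shown put the SQLs with the highest parse, execute and fetch elapsed times at the top of the TKPROF output.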

6.2 AWR

The AWR report should normally be obtained on the first pass as well. This should be for the period (between snapshots) that the module / program was running.

This can help identify expensive SQLs, time consuming wait events, system resource issues or DB configuration issues.

It does not report execution plans (runtime or explain plan).

Even if the module/program or SQLs with the performance issue are known:

  • It could be caused or amplified by general performance issues, configuration or system capacity (I/O, memory, processing etc).

  • It could be due to interaction or contention with other sessions/processes.

  • The AWR may have supplementary information that can help diagnose the issue.

In some cases Active Session History (ASH) reports can be used to give more detailed information on the SQLs, wait events, objects and row sources in which the performance issue occurs.

For more information on obtaining and interpreting AWR reports see My Oracle Support document "Performance Diagnosis with Automatic Workload Repository (AWR) (Document 1674086.1)".

For more information on identifying SQLs with performance issues from the AWR see the "What is the challenging SQL? > 2 From Automatic Workload Repository (AWR)" section in My Oracle Support document "Oracle E-Business Suite SQL Tuning Tips for Customers (Document 2098522.1)".
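As a sketch, the AWR report for the relevant snapshot range is typically generated with the standard script (it prompts for the report type and the begin/end snapshot ids), and a snapshot can be taken manually around a test run:

```sql
-- Run from SQL*Plus as a suitably privileged user.
-- Prompts for report format (html/text), number of days and the
-- begin/end snapshot ids covering the period the program was running.
@?/rdbms/admin/awrrpt.sql

-- Optionally take a snapshot manually just before and just after a test run:
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
```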

6.3 Display Cursor / SQL Monitor Report

If specific SQLs with the potential performance issues are known, the SQL is still in memory (the cursor is still loaded in the cursor cache) and "statistics_level" is set to ALL (or "_rowsource_execution_statistics" is set to TRUE) when the SQL is executed, then the Display Cursor or SQL Monitor report can be a good place to start.

These reports can be produced whilst a SQL is still executing.

If it is not possible to obtain a complete SQL trace (or one that contains row source statistics (STAT) lines), then these reports can also be useful in providing the actual runtime execution plan and row source statistics. See the "Runtime Execution Plans / Row Source Operation Counts", "Incomplete Trace File" and "Extended SQL Trace" sections in the My Oracle Support document "Oracle E-Business SQL Trace and TKPROF Guide (Document 1674024.1)".

To produce these reports it will normally be necessary to know the sql_id of SQLs with potential performance issues. See here for more information.
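One common way to find the sql_id while the program is running is to query V$SESSION joined to V$SQL (a sketch; the MODULE filter is illustrative):

```sql
-- Find the SQL currently being executed by sessions of a given module.
SELECT s.sid, s.serial#, s.sql_id, s.module, q.elapsed_time, q.sql_text
FROM   v$session s, v$sql q
WHERE  s.sql_id           = q.sql_id
AND    s.sql_child_number = q.child_number
AND    s.module LIKE 'XXMODULE%';   -- illustrative module name
```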

The Display Cursor report provides the actual runtime execution plan and row source statistics along with the predicates applied on each execution plan line. Row source statistics are the actual (not estimated) statistics (starts, number of rows, elapsed time, physical reads, buffer gets, memory used (in PGA)) for each execution plan line.

The Display Cursor report will only provide actual row source statistics if the "statistics_level" initialization parameter is set to ALL (or "_rowsource_execution_statistics" is set to TRUE) at the time the SQL is executed.

The SQL Monitor report also provides the actual runtime execution plan. It shows row source statistics (executions, rows, memory usage, I/O requests, CPU activity, wait activity) for each runtime execution plan line, but these differ from the statistics provided in the Display Cursor report.

It will show executions, rows, memory usage and I/O requests even if the "statistics_level" initialization parameter is not set to ALL (or "_rowsource_execution_statistics" is not set to TRUE) at the time the SQL is executed.

However, it will only show Active Period, CPU Activity and Wait Activity for execution plan lines if the "statistics_level" initialization parameter is set to ALL (or "_rowsource_execution_statistics" is set to TRUE) at the time the SQL is executed. So these parameter values are preferred for SQL Monitor reports.

It also gives a good view of how parallel SQL/DML performs across stages of the plan and parallel child (secondary) processes.

Display Cursor report is preferred if the SQL/DML is non-parallel.

SQL Monitor report is preferred if the SQL/DML is parallel.

The advantage of these diagnostics is that they can be obtained quickly and after the event. There is often no need for another iteration to obtain the SQL Trace / TKPROF.

See the "Display Cursor Report" section below for more information on the Display Cursor report.

See the "SQL Monitor Report" section below for more information on the SQL Monitor report.

It is also possible to get information directly from the V$SQL_PLAN_STATISTICS or V$SQL_PLAN_MONITOR views that are used by the Display Cursor and SQL Monitor reports.

For more information see the section "How to get the necessary information? > 6. Steps to get SQL execution plan with V$SQL_PLAN_MONITOR or V$SQL_PLAN_STATISTICS" in the My Oracle Support document "Oracle E-Business Suite SQL Tuning Tips for Customers (Document 2098522.1)".
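For example, the row source statistics can be queried per plan line directly (a sketch; the sql_id is illustrative, and actual statistics again require "statistics_level" set to ALL or "_rowsource_execution_statistics" set to TRUE):

```sql
-- Actual row source statistics for each execution plan line of a cursor.
SELECT ps.operation_id, ps.executions, ps.last_output_rows,
       ps.last_elapsed_time, ps.last_disk_reads, ps.last_cr_buffer_gets
FROM   v$sql_plan_statistics ps
WHERE  ps.sql_id = '8d9ny4cf9qmvm'   -- illustrative sql_id
ORDER  BY ps.operation_id;
```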

6.4 Most of the time spent in the application tier

High levels of "SQL*Net message from client" can indicate a large amount of time spent in the client application or on the application tier. This will not just be a high total time, but a high average wait time as well.

Normally the "SQL*Net message from client" waits will average around 1ms or less. However, it really depends on the hardware/network configuration and bandwidth between tiers.

Note that:

  • Very large "SQL*Net message from client" waits are usually due to waiting for user entry on the client, particularly at the start and end of the trace.

  • A high number of short "SQL*Net message from client" waits usually indicates a high number of SQL executions (which should closely match the execution count of the SQLs) and does not indicate a performance issue on the application tier.

  • A consistently high level of "SQL*Net message from client" wait times might also indicate network bandwidth or latency issues.
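As a sketch, the total and average wait times for this event for a given session can be checked in V$SESSION_EVENT (the SID is illustrative):

```sql
-- Total and average "SQL*Net message from client" waits for one session.
SELECT event, total_waits, time_waited_micro,
       ROUND(time_waited_micro / total_waits) AS avg_wait_micro
FROM   v$session_event
WHERE  sid   = 1234                             -- illustrative SID
AND    event = 'SQL*Net message from client';
```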

Network Bandwidth or Latency Issues

The following My Oracle Support documents could assist with identifying and resolving Network issues:

  • Oracle E-Business Suite Network Utilities: Best Practices (Document 556738.1)

  • Troubleshooting gc block lost and Poor Network Performance in a RAC Environment (Document 563566.1)

  • Symmetrical Network Acceleration with Oracle E-Business Suite Release 12 (Document 967992.1)

6.5 Applications Tier Diagnostics

If, after analyzing the SQL Trace / TKPROF (or even the AWR), the majority of time is not spent in the database then diagnostics from the application tier may be needed.

Even if the application tier is on the same physical host as the database tier, diagnostics will still be needed.

These include the Forms Trace (Forms Runtime Diagnostics) and Reports Client Process Trace mentioned above.

6.6 Most Time Not Spent in SQL Execution

Typically, the majority of time will be spent parsing, executing or fetching in SQL.

However, if

  • The majority of the reported run time is accounted for in the SQL Trace/TKPROF

  • There is a high elapsed time on PL/SQL procedures / functions /blocks reported in the TKPROF, but that time is not reported on the SQLs contained in the PL/SQL

  • Client processing wait events (e.g. "SQL*Net message from client") do not add up to the unaccounted time

then the issue could be in the server side PL/SQL.

6.7 PL/SQL Profiler

The PL/SQL Profiler and PL/SQL Hierarchical profiler can be used to analyze which steps in the server side PL/SQL are consuming the time.


6.8 SQLHC or SQLT

There are occasions when SQLHC should be used rather than SQLT:

  • For example, a customer may object to installing SQLT on their production environment, or they may be concerned about the performance impact on a production system.

  • Or the customer may not have installed SQLT yet.

  • SQLT is more complicated and may require more explanation, so it may be quicker and simpler to ask for SQLHC.

There are situations where SQLT is preferred over SQLHC:

  • The SQL ID (and runtime execution plan) may not be in memory or AWR. So SQLHC will not provide any output (and SQLT with XPLAIN or XECUTE method is required).

  • More information is required.

6.9 SQLHC

When SQLs with possible sub-optimal execution plans have been identified, use SQLHC (sqlhc.sql).

This provides HTML reports, which contain most of the data required to identify the reasons why the CBO chose a sub-optimal execution plan: e.g. Health Check (Observations), CBO Statistics, Execution Plans, Runtime Statistics, System/Instance (Initialization) Parameters and SQL Profiles/Baselines associated with the SQL.

It saves a huge amount of time by providing all information in one action (and in one zip file sqlhc_YYYYMMDD_HHMISS_SQLID.zip).

It does not require any installation on the environment (other than the script file itself) and has a negligible performance impact.

It is particularly useful if engineers/analysts do not have access to the instance.

Note that SQLHC will only provide actual row source statistics if the "statistics_level" initialization parameter is set to ALL (or "_rowsource_execution_statistics" is set to TRUE) at the time the SQL is executed.

It will require the SQL ID.
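A sketch of a typical invocation (the sql_id is illustrative; the first parameter declares the licensed pack — T for Tuning, D for Diagnostics, N for none):

```sql
-- Run from SQL*Plus as a DBA-type user, with sqlhc.sql in the current directory.
-- Produces sqlhc_YYYYMMDD_HHMISS_SQLID.zip in the same directory.
START sqlhc.sql T 8d9ny4cf9qmvm
```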

See here for more information on SQLHC.

6.10 SQLT with XTRACT Method

When SQLs with possible sub-optimal execution plans have been identified, obtain the SQLT with XTRACT method for those SQLs.

This provides an HTML report (sqlt_snnnnn_main.html), which contains most of the data required to identify the reasons why the CBO chose the execution plan.

It contains execution plans, runtime statistics, CBO statistics, DB parameters, CBO parameters, performance statistics, SQL Profiles/Baselines and metadata (schema object definitions etc) associated with the SQL.

It saves a huge amount of time by providing all information in one action (and in one zip file sqlt_snnnnn_method_*.zip).

It is particularly useful if engineers/analysts do not have access to the instance.

The XTRACT method is recommended as it gives actual SQL execution runtime data (if it is available).

Note that the SQLT (with XTRACT and XECUTE methods) will only provide actual row source statistics if the "statistics_level" initialization parameter is set to ALL (or "_rowsource_execution_statistics" is set to TRUE) at the time the SQL is executed.

The XTRACT method will require the SQL ID or SQL Hash Value.

The XPLAIN and XECUTE methods will require a file containing the SQL (and application context / bind variables declared and populated for XECUTE).

Sometimes it is not possible to use SQLT on an environment. For example, a customer may object to installing SQLT on their production environment, or they may be concerned about the performance impact on a production system. So SQLHC could be used instead to provide some of the information.

See here for more information on SQLT.
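A sketch of running the XTRACT method (the sql_id is illustrative; SQLT must already be installed, and the script is run from the SQLT run directory):

```sql
-- Run as the application user from the sqlt/run directory.
-- Prompts for the SQLTXPLAIN password and produces the sqlt_snnnnn_*.zip output.
START sqltxtract.sql 8d9ny4cf9qmvm
```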

6.11 Operating System Statistics

While AWR and trace data give a good initial picture of whether there are issues with I/O, CPU or memory, on Linux/UNIX environments there are a number of individual operating system commands that can be used to monitor overall resource usage, or the resource usage for individual processes, e.g. "sar", "ps", "top", "ifconfig", "mpstat", "iostat", "netstat", "vmstat" etc.

The OS Watcher and LTOM tools provide automated ways of running some of these commands and then collating and presenting the results.

These diagnostics will not be needed if the performance issue is due to sub-optimal SQL execution plans, DB Configuration, Initialization Parameters, Incorrect/Missing CBO Statistics, high numbers of executions etc.


6.12 SQL Tuning

There are many ways to fix an inefficient execution plan.

These are covered in the "How to tune the issue SQL ?" section in the My Oracle Support document "Oracle E-Business Suite SQL Tuning Tips for Customers (Document 2098522.1)", which describes how a more efficient execution plan could be obtained by:

  • Changing the CBO initialization parameters to E-Business Suite recommended values. See My Oracle Support documents "Database Initialization Parameters for Oracle E-Business Suite Release 12 (Document 396009.1)" or "Database Initialization Parameters for Oracle Applications Release 11i (Document 216205.1)"

    The script bde_chk_cbo.sql can be used to check the values. See My Oracle Support document "EBS initialization parameters - Healthcheck (Document 174605.1)"

  • Correcting missing, incomplete or out of date CBO statistics. Also see My Oracle Support document "Best Practices for Gathering Statistics with Oracle E-Business Suite (Document 1586374.1)"

  • Creating a SQL Profile to replicate a previous efficient execution plan

  • Applying recommendations from the ADDM report

  • Using SQL Tuning Advisor (if execution plan is very complex)

  • Removing a throwaway issue, where the number of rows processed on one or more steps of the execution plan is unusually high. This is normally because filters or predicates are applied too late and unnecessary rows are included in result sets. It is often resolved by changing/specifying the join order or join method, adding more selective indexes, or expanding OR conditions.

  • Creating a histogram for columns that have non-uniform distributions

  • Creating custom indexes to give more selective access

  • Creating a SQL Profile to apply hints

  • Controlling the use of SQL Plan Directives (SPD) (12c only)

  • Using the optimizer_features_enable CBO initialization parameter to revert to CBO functionality from a previous database version (if execution plan has regressed after a database upgrade)

The My Oracle Support document "Interpreting Explain Plan (10g and Above) (Document 1616894.1)" explains how to interpret execution / explain plans.

The My Oracle Support document "White Papers and Blog Entries for Oracle Optimizer (Document 1337116.1)" lists all the whitepapers and blogs for the Cost Based Optimizer.

In some cases the execution plan could be the most efficient, but there are still high levels of buffer gets (logical reads) or disk reads (physical reads) for certain execution plan lines.

In this case the time taken to read rows for specific objects (tables / indexes) could be reduced by:

  • Rebuilding fragmented objects

  • Caching frequently used large objects in the shared pool

  • Partitioning tables or indexes. Also see My Oracle Support document "Using Database Partitioning with Oracle E-Business Suite (Document 554539.1)"

  • Populating tables, partitions, sub partitions and materialized views in an In-Memory column store (12c). Also see My Oracle Support document "Using Oracle Database In-Memory with Oracle E-Business Suite (Document 2025309.1)"

6.13 Performance Tuning

The issue could be a system or application module wide performance issue caused by:

  • System or application configuration

  • Unnecessary historic data in the application tables that requires purging. See My Oracle Support document "Reducing Your Oracle E-Business Suite Data Footprint using Archiving, Purging, and Information Lifecycle Management (Document 752322.1)".

  • Incorrect database initialization parameters (or CBO parameters). See My Oracle Support documents "Database Initialization Parameters for Oracle E-Business Suite Release 12 (Document 396009.1)" or "Database Initialization Parameters for Oracle Applications Release 11i (Document 216205.1)".

    The script bde_chk_cbo.sql can be used to check the values. See My Oracle Support document "EBS initialization parameters - Healthcheck (Document 174605.1)".

  • Incorrect values for SGA or PGA. For example increasing them may reduce I/O.

The "How to tune the issue SQL ?" section in the My Oracle Support document "Oracle E-Business Suite SQL Tuning Tips for Customers (Document 2098522.1)" outlines the following methods for resolving general performance issues:

  • Correct missing, incomplete or out of date CBO statistics. Also see My Oracle Support document "Best Practices for Gathering Statistics with Oracle E-Business Suite (Document 1586374.1)".

  • Controlling the use of SQL Plan Directives (SPD) (12c only)

  • Using the optimizer_features_enable CBO initialization parameter to revert to CBO functionality from a previous database version (if execution plan has regressed after a database upgrade)

  • Rebuilding fragmented objects

  • Caching frequently used large objects in the shared pool

  • Partitioning tables or indexes. Also see My Oracle Support document "Using Database Partitioning with Oracle E-Business Suite (Document 554539.1)". Note that modifying the partitioning for tables that have seeded partitioning in EBS is not supported

  • Populating tables, partitions, sub partitions and materialized views in an In-Memory column store (12c). Also see My Oracle Support document "Using Oracle Database In-Memory with Oracle E-Business Suite (Document 2025309.1)"

6.13.1 Exadata

The following My Oracle Support documents contain some useful advice for configuring Exadata on E-Business Suite:

  • Oracle E-Business Suite on Exadata Resources (Document 1512711.1), in particular the "Performance and Scalability" section of the "E-Business Suite on Exadata" whitepaper.

  • Exadata V2 Starter Kit (Document 1244344.1)

  • Oracle Sun Database Machine Setup/Configuration Best Practices (Document 1274318.1)

  • Oracle Sun Database Machine X2-2/X2-8, X3-2/X3-8 and X4-2 Performance Best Practices (Document 1274475.1)

6.13.2 RAC

If the following sections of the AWR report show high levels of RAC wait events ("gc" wait events), then the RAC configuration may need to be changed.

  • Top N Timed Foreground Events, Foreground Wait Events, Wait Event Histogram

  • SQL ordered by Cluster Wait Time

  • Segments by Global Cache Buffer Busy

See the following My Oracle Support documents for more information:

  • RAC: Frequently Asked Questions (Document 220970.1)

  • RAC and Oracle Clusterware Best Practices and Starter Kit (Platform Independent) (Document 810394.1)

  • Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12 (Document 823587.1)

  • Troubleshooting gc block lost and Poor Network Performance in a RAC Environment (Document 563566.1)

Also see Chapter 13 Monitoring Performance in the "Real Application Clusters Administration and Deployment Guide".

6.13.3 Concurrent Manager / Processing

See the following My Oracle Support documents for more information:

  • Concurrent Processing - Best Practices for Performance for Concurrent Managers in E-Business Suite (Document 1057802.1)

  • FAQ: EBS Concurrent Processing Performance and Best Practices (Document 1373727.1)

  • EBS - Webcast Recording Best Practices E-Business Suite Concurrent Manager (Document 1539512.1)

  • EBS Concurrent Processing (CP) Analyzer (Document 1411723.1)

  • Tuning Output Post Processor (OPP) to Improve Performance (Document 1399454.1)

  • Concurrent Processing - How To Run the Purge Concurrent Request FNDCPPUR, Which Tables Are Purged, And Known Issues Like Files Are Not Deleted From File System (Document 822368.1)

6.13.4 AQ / Streams


Also see the chapter "Oracle Database Advanced Queuing Performance and Scalability" in the "Database Advanced Queuing User's Guide".

7 Display Cursor Report

This displays the runtime execution plan of any cursor loaded in the cursor cache.

At a basic level, it shows the runtime execution plan.

However, the format ALL also includes extra information such as pruning, parallel execution, predicate, projection, alias and remote SQL information.

It can be produced by running the dbms_xplan.display_cursor function with the ALL +ALLSTATS option.

The +ALLSTATS option will include actual row source statistics for each execution plan step. ALLSTATS includes IOSTATS and MEMSTATS, e.g.:

  • Elapsed time

  • Physical reads

  • Buffer gets

  • Memory used (in PGA) for memory intensive operations (such as hash joins, sorts, and bitmap operators).

However, this additional information is only provided if STATISTICS_LEVEL=ALL when the SQL is executed.

Setting the initialization parameter "statistics_level" to ALL can add significant overhead, so it may not be advisable on a production system.

Alternatively the initialization parameter "_rowsource_execution_statistics" can be set to TRUE, which gives a lower overhead.

These parameters can be set for a specific session (e.g. using the profile option "Initialization SQL Statement - Custom").

If testing from SQL*Plus then the hint /*+ gather_plan_statistics */ can be used to collect row source statistics for a specific SQL (even if "statistics_level" is not ALL and "_rowsource_execution_statistics" is FALSE).

Note that SQLT with XTRACT method will also report actual row source statistics in the same circumstances.

The Display Cursor report should be run as soon as possible. If it is delayed, the cursor may have been flushed from memory or invalidated, and no data will be available.

The SQL ID will be required to be able to run the Display Cursor report. See here for information on how to obtain it.

See here for instructions on how to run the Display Cursor report.

By default, the plan statistics are shown for all executions of the cursor, however the keyword LAST can be added to the options to see the statistics for the last execution only.
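A sketch of producing such a report from SQL*Plus (the sql_id and child number are illustrative; NULLs can be passed instead to report on the last statement executed in the session):

```sql
SET SERVEROUTPUT OFF
SET LINESIZE 250 PAGESIZE 0 TRIMSPOOL ON

-- Row source statistics are only populated if statistics_level = ALL
-- (or _rowsource_execution_statistics = TRUE) when the SQL runs.
SELECT *
FROM   TABLE(dbms_xplan.display_cursor('8d9ny4cf9qmvm', 1, 'ALL +ALLSTATS'));
```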

Here is an example of the output from a display cursor report.

SQL_ID  8d9ny4cf9qmvm, child number 1 
     ------------------------------------- 
     INSERT INTO HZ_CUST_SITE_USES_ALL ( SITE_USE_ID , CUST_ACCT_SITE_ID, 
     SITE_USE_CODE , PRIMARY_FLAG , STATUS , LOCATION , ORG_ID , 
     OBJECT_VERSION_NUMBER, CREATED_BY_MODULE, LAST_UPDATE_DATE , 
     CREATION_DATE , LAST_UPDATED_BY , CREATED_BY , LAST_UPDATE_LOGIN) 
     SELECT HZ_CUST_SITE_USES_S.NEXTVAL, L.CUST_ACCT_SITE_ID, 'LATE_CHARGE', 
     'Y', 'A', TO_CHAR(HZ_CUST_SITE_USES_S.CURRVAL), L.ORG_ID, 1, 
     'AR_LATE_CHARGE_UPG', :B3 , :B3 , -1551, -1551, -1551 FROM (SELECT 
     CAS.CUST_ACCT_SITE_ID, CAS.ORG_ID FROM HZ_CUST_ACCT_SITES_ALL CAS, 
     HZ_CUST_SITE_USES_ALL CSU WHERE CAS.ROWID >= :B2 AND CAS.ROWID <= :B1 
     AND CAS.STATUS = 'A' AND CAS.CUST_ACCT_SITE_ID = CSU.CUST_ACCT_SITE_ID 
     AND CSU.STATUS = 'A' AND (( CSU.SITE_USE_CODE = 'STMTS' AND NOT EXISTS 
     (SELECT NULL FROM HZ_CUST_SITE_USES_ALL B WHERE B.CUST_ACCT_SITE_ID = 
     CAS.CUST_ACCT_SITE_ID AND B.SITE_USE_CODE = 'DUN' AND B.STATUS = 'A')) 
     OR CSU.SITE_USE_CODE = 'DUN')) L 
     
     Plan hash value: 1449143904 
     
     ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 
     | Id  | Operation                        | Name                   | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   | 
     ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 
     |   0 | INSERT STATEMENT                 |                        |    655 |        |       | 52008 (100)|          |      0 |15:49:30.79 |     147M|    101K|       |       |          | 
     |   1 |  LOAD TABLE CONVENTIONAL         |                        |    655 |        |       |            |          |      0 |15:49:30.79 |     147M|    101K|       |       |          | 
     |   2 |   SEQUENCE                       | HZ_CUST_SITE_USES_S    |    655 |        |       |            |          |    109K|15:20:52.78 |     145M|  98825 |       |       |          | 
     |*  3 |    FILTER                        |                        |    655 |        |       |            |          |    109K|15:19:30.01 |     145M|  98825 |       |       |          | 
     |*  4 |     FILTER                       |                        |    655 |        |       |            |          |   5920K|15:02:20.32 |     144M|  80273 |       |       |          | 
     |*  5 |      HASH JOIN                   |                        |    655 |  42541 |  1744K| 52008   (1)| 00:18:18 |   5920K|15:01:52.29 |     144M|  80273 |   963K|   963K|   655/0/0| 
     |*  6 |       TABLE ACCESS BY ROWID RANGE| HZ_CUST_ACCT_SITES_ALL |    655 |  30763 |   751K| 14546   (1)| 00:05:07 |   3507K|00:02:21.51 |   93535 |  80273 |       |       |          | 
     |*  7 |       TABLE ACCESS FULL          | HZ_CUST_SITE_USES_ALL  |    655 |   3356K|    54M| 37454   (1)| 00:13:11 |   4050M|06:58:51.26 |     144M|      0 |       |       |          | 
     |*  8 |     TABLE ACCESS BY INDEX ROWID  | HZ_CUST_SITE_USES_ALL  |    109K|      1 |    17 |     4   (0)| 00:00:01 |      0 |00:20:29.26 |     328K|  18552 |       |       |          | 
     |*  9 |      INDEX RANGE SCAN            | HZ_CUST_SITE_USES_N1   |    109K|      1 |       |     3   (0)| 00:00:01 |      0 |00:20:28.15 |     328K|  18552 |       |       |          | 
     ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 
     
     Query Block Name / Object Alias (identified by operation id): 
     ------------------------------------------------------------- 
     
     1 - SEL$F5BB74E1 
     6 - SEL$F5BB74E1 / CAS@SEL$2 
     7 - SEL$F5BB74E1 / CSU@SEL$2 
     8 - SEL$3        / B@SEL$3 
     9 - SEL$3        / B@SEL$3 
   
     Predicate Information (identified by operation id): 
     --------------------------------------------------- 
   
     3 - filter((("CSU"."SITE_USE_CODE"='STMTS' AND  IS NULL) OR "CSU"."SITE_USE_CODE"='DUN')) 
     4 - filter(CHARTOROWID(:B2)<=CHARTOROWID(:B1)) 
     5 - access("CAS"."CUST_ACCT_SITE_ID"="CSU"."CUST_ACCT_SITE_ID") 
     6 - access("CAS".ROWID>=CHARTOROWID(:B2) AND "CAS".ROWID<=CHARTOROWID(:B1)) 
     filter("CAS"."STATUS"='A') 
     7 - filter("CSU"."STATUS"='A') 
     8 - filter("B"."STATUS"='A') 
     9 - access("B"."CUST_ACCT_SITE_ID"=:B1 AND "B"."SITE_USE_CODE"='DUN') 
   
     Column Projection Information (identified by operation id): 
     ----------------------------------------------------------- 
   
     2 - "CAS"."CUST_ACCT_SITE_ID"[NUMBER,22], "CSU"."CUST_ACCT_SITE_ID"[NUMBER,22], "CAS".ROWID[ROWID,10], "CAS"."ORG_ID"[NUMBER,22], "CAS"."STATUS"[VARCHAR2,1], 
   "CSU"."STATUS"[VARCHAR2,1], "CSU"."SITE_USE_CODE"[VARCHAR2,30] 
     3 - "CAS"."CUST_ACCT_SITE_ID"[NUMBER,22], "CSU"."CUST_ACCT_SITE_ID"[NUMBER,22], "CAS".ROWID[ROWID,10], "CAS"."ORG_ID"[NUMBER,22], "CAS"."STATUS"[VARCHAR2,1], 
   "CSU"."STATUS"[VARCHAR2,1], "CSU"."SITE_USE_CODE"[VARCHAR2,30] 
     4 - "CAS"."CUST_ACCT_SITE_ID"[NUMBER,22], "CSU"."CUST_ACCT_SITE_ID"[NUMBER,22], "CAS".ROWID[ROWID,10], "CAS"."ORG_ID"[NUMBER,22], "CAS"."STATUS"[VARCHAR2,1], 
   "CSU"."STATUS"[VARCHAR2,1], "CSU"."SITE_USE_CODE"[VARCHAR2,30] 
     5 - (#keys=1) "CAS"."CUST_ACCT_SITE_ID"[NUMBER,22], "CSU"."CUST_ACCT_SITE_ID"[NUMBER,22], "CAS".ROWID[ROWID,10], "CAS"."ORG_ID"[NUMBER,22], "CAS"."STATUS"[VARCHAR2,1], 
   "CSU"."STATUS"[VARCHAR2,1], "CSU"."SITE_USE_CODE"[VARCHAR2,30] 
     6 - "CAS".ROWID[ROWID,10], "CAS"."CUST_ACCT_SITE_ID"[NUMBER,22], "CAS"."STATUS"[VARCHAR2,1], "CAS"."ORG_ID"[NUMBER,22] 
     7 - "CSU"."CUST_ACCT_SITE_ID"[NUMBER,22], "CSU"."SITE_USE_CODE"[VARCHAR2,30], "CSU"."STATUS"[VARCHAR2,1] 
     8 - "B".ROWID[ROWID,10], "B"."CUST_ACCT_SITE_ID"[NUMBER,22], "B"."SITE_USE_CODE"[VARCHAR2,30], "B"."STATUS"[VARCHAR2,1] 
     9 - "B".ROWID[ROWID,10], "B"."CUST_ACCT_SITE_ID"[NUMBER,22], "B"."SITE_USE_CODE"[VARCHAR2,30] 
   
     Note 
     ----- 
     - dynamic sampling used for this statement (level=7)
     
     150 rows selected. 
   

In the above example, it can clearly be seen that the performance issue is related to a full table scan of HZ_CUST_SITE_USES_ALL (alias CSU) and the subsequent hash join.

The columns Starts, A-Rows, A-Time, Buffers, Reads, OMem, 1Mem and O/1/M are actual statistics. The columns E-Rows, E-Bytes, Cost (%CPU) and E-Time are estimates (calculated by the Cost Based Optimizer).

Most of this information is also included in SQLT with the XTRACT method. However, display_cursor provides a simpler view of the information and can be obtained very quickly with little overhead.

The runtime execution plan and row source statistics are also available in views V$SQL_PLAN and V$SQL_PLAN_STATISTICS.

For more information see the section "How to get the necessary information? > 6. Steps to get SQL execution plan with V$SQL_PLAN_MONITOR or V$SQL_PLAN_STATISTICS" in the My Oracle Support document "Oracle E-Business Suite SQL Tuning Tips for Customers (Document 2098522.1)".

Note : serveroutput must be set to "off" for dbms_xplan to work.
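A report like the one above can be produced with a call along the following lines. This is a minimal sketch; the SQL ID shown is hypothetical and the format options can be trimmed as required:

```sql
-- Sketch: display the runtime plan and actual row source statistics
-- for a cursor still in the cursor cache (the SQL ID is hypothetical).
SET serveroutput OFF
SET linesize 250 pagesize 0 trimspool ON

SELECT *
FROM   TABLE(dbms_xplan.display_cursor(
         sql_id          => 'f9u2k7wm8xgnb',
         cursor_child_no => NULL,   -- NULL reports all child cursors
         format          => 'ALLSTATS LAST +ALIAS +PREDICATE +PROJECTION +NOTE'));
```

The ALLSTATS LAST format reports the actual statistics (Starts, A-Rows, A-Time, Buffers, Reads) for the last execution alongside the optimizer estimates.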

8 SQL Monitor Report

The main advantage of the SQL Monitor report is that it gives a good view of how parallel SQL/DML performs across the stages of the plan and the parallel child (secondary) processes.

It can also give a good idea of the actual executions and row counts for each execution plan line, even if the "statistics_level" initialization parameter is not set to ALL (or "_rowsource_execution_statistics" is not set to TRUE) at the time the SQL is executed.

For more information see Oracle Database Performance Tuning Guide and My Oracle Support document "Monitoring SQL statements with Real-Time SQL Monitoring (Document 1380492.1)".

The SQL Monitor report will only show the Active Period, CPU Activity and Wait Activity for execution plan lines if the "statistics_level" initialization parameter is set to ALL (or "_rowsource_execution_statistics" is set to TRUE) at the time the SQL is executed.

This report uses data from several fixed views (e.g. GV$SQL_MONITOR, GV$SQL_PLAN_MONITOR, GV$SQL_PLAN, GV$ACTIVE_SESSION_HISTORY, GV$SESSION_LONGOPS, GV$SQL), so the database user must have the privilege to select from these (e.g. SELECT_CATALOG role).

It can be produced by running the dbms_sqltune.report_sql_monitor function with report_level = ALL.

The SQL ID will be required to run the SQL Monitor report. See here for information on how to obtain it.

See here for instructions on how to run the SQL Monitor report.

By default the SQL Monitor report shows the last (monitored) execution of the SQL. The sql_exec_start and sql_exec_id parameters can be used to display other executions. See the Oracle Database PL/SQL Packages and Types Reference for more information.
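The call can be sketched as follows from SQL*Plus; the SQL ID is hypothetical and the spool file name is an arbitrary choice:

```sql
-- Sketch: spool an HTML SQL Monitor report for the last monitored
-- execution of a statement (the SQL ID is hypothetical).
SET long 10000000 longchunksize 10000000 linesize 1000 pagesize 0
SET trimspool ON echo OFF feedback OFF

SPOOL sql_monitor_report.html
SELECT dbms_sqltune.report_sql_monitor(
         sql_id       => 'f9u2k7wm8xgnb',
         type         => 'HTML',
         report_level => 'ALL')
FROM   dual;
SPOOL OFF
```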

SQL Monitor reports are also available from Oracle Enterprise Manager.

Here is an example of the output from a SQL Monitor report:

SQL Monitor Example Part 1

SQL Monitor Example Part 2

The Estimated Rows and Cost columns in the Monitoring Details section contain estimates, the remaining columns contain actual statistics.

Note that pointing to any of the colored bars on the HTML report will display more information about that statistic. e.g. For Wait Activity it displays the name of the wait events.

Real-Time SQL Monitoring requires the Oracle Tuning Pack. The Oracle Diagnostics Pack is a prerequisite for the Oracle Tuning Pack.

So the Oracle Tuning Pack and the Oracle Diagnostics Pack must both be licensed in order to use Real-Time SQL Monitoring. They can only be purchased with the Enterprise Edition of the Oracle Database.

See My Oracle Support document "Monitoring SQL statements with Real-Time SQL Monitoring (Document 1380492.1)" or Oracle Database Licensing Information in the documentation library.

SQL monitoring is enabled when the initialization parameter "statistics_level" is set to "ALL" or "TYPICAL" (default). The "control_management_pack_access" parameter should also be set to "DIAGNOSTIC+TUNING" (SQL monitoring is part of the Oracle Database Tuning Pack).

SQLs are monitored if they consume more than 5 seconds of CPU or I/O time in a single execution or if they use parallel execution (SQL, DML or DDL).
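Statements that fall below these thresholds can still be monitored explicitly with the MONITOR hint. A minimal sketch (reusing a table from the earlier execution plan example):

```sql
-- Sketch: force Real-Time SQL Monitoring for a short-running statement.
SELECT /*+ MONITOR */ COUNT(*)
FROM   hz_cust_site_uses_all
WHERE  status = 'A';
```

Conversely, the NO_MONITOR hint can be used to prevent monitoring of a statement that would otherwise qualify.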

The runtime execution plan and row source statistics reported in the SQL Monitor Report are also available in the view V$SQL_PLAN_MONITOR.

For more information see the section "How to get the necessary information? > 6. Steps to get SQL execution plan with V$SQL_PLAN_MONITOR or V$SQL_PLAN_STATISTICS" in the My Oracle Support document "Oracle E-Business Suite SQL Tuning Tips for Customers (Document 2098522.1)".

9 Forms Trace and Forms Runtime Diagnostics

Forms Trace was introduced with Forms 9i. It can be used with Oracle E-Business Suite R12 to provide information on events in Forms and when they occurred (relative to each other). It can be used to diagnose and investigate Forms performance issues when it is clear that the time is being consumed in the form itself and not in the database.

The output can be provided in HTML or XML format.

It can record events such as:

  • Errors

  • Navigation

  • Trigger, LOV, Program Unit, Built-in and User Exit (Start & End)

  • SQL and Database Events

  • Communication (Network) Events

Forms Runtime Diagnostics (FRD) is the primary diagnostic tool used in Forms 6i (and hence Oracle E-Business Suite 11i). It also reports on events in Forms and when they occur. The events include:

  • Navigation

  • Triggers

  • Built-ins

  • Messages

  • Opening of forms executables (.fmx) and menu executables (.mmx)

  • Unhandled exceptions (including error message/number)

  • External user events

For R12.0 see My Oracle Support document "How To Collect And Use Forms Trace (FRD) in Oracle Applications Release 12 (Document 373548.1)". Also see "R12: Forms Runtime Diagnostics (FRD), Tracing And Logging For Forms In Oracle Applications (Document 438652.1)" for more information.

For Release 11i, see My Oracle Support document "11i/R12: Obtaining Forms Runtime Diagnostics (FRD) In Oracle Applications 11i and Release 12 (Document 150168.1)"

Note that the location in the FORMS_TRACE_DIR variable can also be obtained through Help in Oracle Forms (Go to Help->About Oracle Applications, section "Forms Server Environment Variables"). If the value is not visible then it may be necessary to set profile option "FND: Diagnostics" to "Yes" and bounce the middle tier.

9.1 Using Oracle Applications Manager

From R12 onwards Forms Runtime Diagnostics (FRD) can be obtained for another Forms Session using the Monitoring feature within Oracle Applications Manager (Monitoring > Forms Sessions (under Current Activity)).

Click on the Session Details button or the AUDSID to view database information for the selected Forms session.

The Diagnostics button can then be clicked to enable Forms Runtime Diagnostics (FRD) for that Forms Session.

The View Diagnostics button can be used to view the Forms Runtime Diagnostics (FRD) log file.

When the relevant actions are completed, remember to switch off diagnostics using the Diagnostics button.

Note that the profile option "Sign-On:Audit Level" must be set to "Form" to be able to use this functionality.

If this button is disabled then check that the Forms patchset level is 12 or higher (i.e. 6.0.8.20 or higher) and set the environment variable FORMS60_OAM_FRD for the Forms Listener process.

See the Oracle E-Business Suite Maintenance Guide (R12.2 and onwards) or System Administrator's Guide – Maintenance (prior to R12.2) for more information.

10 Reports Client Process Trace

Reports can be traced in three different ways:

  • Add Options to the Concurrent Program Definition (in 11i and R12).

  • Specify Debug Options when submitting a Concurrent Request (in R12 and 11.5.10 (with latest ATG patch levels)).

  • Using Debug Workbench in Oracle Applications Manager (in R12 and 11.5.10 (with latest ATG patch levels)).

These produce identical output.

See here for instructions on how to run.

There are a number of Trace Options. None of them are ideal. For detail reports TRACE_ALL can give significantly distorted timings for the formatting part of the report (Reports Time), as well as produce a very large Reports Trace log file. TRACE_PRF gives an overview of where time is spent; either in formatting the report or retrieving data from the database (UPI or SQL), but no detail.

So, for detail reports it is advisable to use the "Log report performance statistics" (TRACE_PRF) option. If this indicates that the issue is in "Reports Time" or "UPI" time then run the report again with the "Log all possible trace information" (TRACE_ALL) option (which includes TRACE_APP, TRACE_PLS, TRACE_TMS etc).

For summary reports use the "Log all possible trace information" (TRACE_ALL) option.

Choosing Reports Trace Options (Debug Option Values)

The following trace options (TRACEOPTS) and Debug Option Values are available:

TRACE_ALL ("Log all possible trace information")

  This contains all information sources (including the most verbose - TRACE_APP).

  All sources have timestamps.

  This is the default. If an incorrect trace option is entered (in the Concurrent Program options field) then this is what will be traced.

TRACE_APP ("Log report object trace information")

  This includes all objects in the report including boilerplate, fields, database columns, frames and groups presented in the report.

  If the report outputs a high number of lines then this can result in a large number of lines and a very large file. It can also add a very significant overhead to the report timings.

  There is no timestamp unless combined with the TRACE_TMS option.

TRACE_DST ("Log report distribution information")

  This lists report output destination information.

  There is no timestamp unless combined with the TRACE_TMS option.

TRACE_ERR ("Log report error and warning messages")

  There is no timestamp unless combined with the TRACE_TMS option.

TRACE_PLS ("Log report plsql object trace information")

  There is no timestamp unless combined with the TRACE_TMS option.

TRACE_PRF ("Log report performance statistics")

  This only outputs overall performance statistics:

  Total Elapsed Time : This is the total execution time of the report.

  Reports Time : This is the time that was spent on formatting the data and outputting it.

  Oracle Time : This is the time spent retrieving the data. This is split into UPI and SQL Time.

  UPI : This is the time spent connecting to the database, parsing and executing the SQL.

  SQL : This is the time spent in the database (time spent executing SRW.DO_SQL() statements, EXEC_SQL statements, PL/SQL Cursors, etc.).

TRACE_SQL ("Log report SQL trace information")

  This reports the SQL executed by the Report (including SQL Text and Bind values).

  Note that this is only SQL sent from the Report to the database to be executed. It does not include any additional SQL executed by server side objects (e.g. triggers, functions, procedures).

  There is no timestamp unless combined with the TRACE_TMS option.

TRACE_TMS ("Log report time stamp information")

  This traces nothing on its own. However, if combined with other options it will add timestamps to each entry in the log.

TRACE_BRK ("Log report breakpoint information")

  All Debug, Info, State and Exception sources with timestamp.

  This is only available from the Concurrent Program definition. It cannot be selected in Debug Options.

Combining options, e.g. TRACEOPTS=(TRACE_APP,TRACE_PLS,TRACE_TMS), will include multiple sources of information. Note that there are no spaces before or after the comma delimiters.

The most useful options are TRACE_APP, TRACE_PLS and TRACE_PRF.

However, for reports where the output is large compared to the database activity (e.g. detail reports) then TRACE_APP will give significantly distorted timings for the formatting part of the report (Reports Time), as well as produce a very large Reports Trace log file.

So, it is advisable to start with TRACE_PRF. If TRACE_PRF indicates that the issue is in "Reports Time" or "UPI" time then run the report again with TRACE_ALL option (which includes TRACE_APP, TRACE_PLS, TRACE_TMS etc).

Note that the times on the TRACE_ALL trace for report objects (e.g. boilerplate, fields, columns - TRACE_APP) will contain an overhead for each line; however, an analysis of any large time gaps within this section (or elsewhere in the trace) should reveal where time is being consumed when the report is not being traced.

The difference in performance statistics ("Reports Time") between the TRACE_ALL and TRACE_PRF traces should give an indication of the overhead of the TRACE_ALL option.

11 Application Tier Operating System Statistics

If a significant amount of time has been spent in the application server then Operating System Statistics should be provided for the application tier.

See Operating System Statistics for more information on how to gather and present these using Oracle's OS Watcher and Lite Onboard Monitor (LTOM) tools.

For Forms the "Forms Process ID" can be obtained through Help in Oracle Forms (Go to Help->About Oracle Applications, section "Forms Server"). If the value is not visible then it may be necessary to set profile option "FND: Diagnostics" to "Yes" and bounce the middle tier.

For Self Service (OAF) the process ID can be obtained through the "About this Page" link at the bottom left hand side of the page. Go to the "Java System Properties" tab and get the value of CLIENT_PROCESSID. Note that the "About this Page" link is only available when the profile option "FND: Diagnostics" (FND_DIAGNOSTICS) is set to Yes.

12 PL/SQL Profiler

The PL/SQL Profiler can be used to analyze which steps in the server side PL/SQL are consuming the time.

The output is an html report with summary and detail sections, listing the program units and lines with the time consumed.

There are three ways of producing a PL/SQL Profiler report in Oracle E-Business Suite, depending on the module being profiled.

  • From Oracle Forms

  • From Self Service (Oracle Applications Framework)

  • From a Concurrent Request

See here for more information on how to use these methods.

12.1 Enable PL/SQL Profiler manually in PL/SQL

PL/SQL Profiler data can be collected manually by enabling and disabling directly in PL/SQL.

This could be useful if testing directly from SQL*Plus / SQL Developer (i.e. outside of E-Business Suite).

This requires updating the PL/SQL code and re-compiling.

Care is also needed to ensure that the commands to enable and disable are placed at the correct points (where they will definitely be executed and capture all the necessary actions).

Enable PL/SQL Profiler using:

dbms_profiler.start_profiler(run_comment => '<comment>', run_comment1 => '<comment 1>');

Disable PL/SQL Profiler using:

dbms_profiler.stop_profiler; 

Note that profiler data is stored in memory and is flushed into profiler tables when the profiler is stopped. It can be advisable to run dbms_profiler.flush_data at intervals to flush data to tables and free up memory.
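The calls above can be combined into a wrapper along these lines, run from SQL*Plus. This is a sketch only: my_pkg.my_proc is a hypothetical program unit standing in for the code being profiled.

```sql
-- Sketch: profile a PL/SQL call manually (my_pkg.my_proc is hypothetical).
DECLARE
  l_run_number BINARY_INTEGER;
BEGIN
  -- Overload of start_profiler that returns the run number
  dbms_profiler.start_profiler(run_comment => 'manual test run',
                               run_number  => l_run_number);

  my_pkg.my_proc;              -- the code being profiled

  dbms_profiler.flush_data;    -- optional: flush to tables and free memory
  dbms_profiler.stop_profiler; -- final flush happens here
  dbms_output.put_line('Profiler run id: ' || l_run_number);
END;
/
```

The run number printed at the end identifies the rows in plsql_profiler_runs, plsql_profiler_units and plsql_profiler_data for this run.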


12.2 Other reporting options

Additionally, the script profiler.sql can be run as follows to list the top resource consuming PL/SQL lines and the code lines around them.

START profiler.sql <run id>;

This can be downloaded from My Oracle Support document "Script to produce HTML report with top consumers out of PL/SQL Profiler DBMS_PROFILER data (Document 243755.1)". Sample output can also be downloaded.

There are also reporting scripts profsum.sql and profrep.sql available on older database versions.

12.3 Interpretation

The standard E-Business Suite "PL/SQL Profiler Report" contents are self-explanatory.

It contains the total elapsed time for the run (run ID).

If the run is part of a related run then the grand total for the related run is also reported.

The "Execution Summary By Program Units" section reports on the total elapsed time for each program unit (e.g. Package Body) where the total time is greater than 1% of the total.

The "Execution Details For Program Units" sections give a breakdown of the total elapsed time and executions of each source code line within the program units. This is sorted by line number and not by total time.

There is an example in My Oracle Support document "How to generate pl/sql profiler output for the concurrent program under EBS? (Document 808005.1)".

12.4 Technical detail

The PL/SQL Profiler implemented in E-Business Suite comprises Oracle Database functionality, augmented by E-Business Suite functionality.

There is also a reporting script provided by Oracle Server Technologies Center of Expertise.

Here is a description of the components:

Object : Source, Type - Comments

  plsql_profiler_runs : Database, Table - Contains information for the PL/SQL Profiler runs.

  plsql_profiler_units : Database, Table - Contains information for each program unit (e.g. Package).

  plsql_profiler_data : Database, Table - Contains information for each line of code in the program units.

  $ORACLE_HOME/rdbms/admin/proftab.sql : Database, SQL Script - Installs the tables plsql_profiler_runs, plsql_profiler_units and plsql_profiler_data.

  dbms_profiler : Database, Package - Used to start, stop, pause, resume and flush the profiler.

  $ORACLE_HOME/rdbms/admin/profload.sql : Database, SQL Script - Installs the package dbms_profiler. Calls dbmspbp.sql and prvtpbp.plb.

  dbmspbp.sql : Database, SQL Script - dbms_profiler package specification.

  prvtpbp.plb : Database, SQL Script - dbms_profiler package body.

  PL/SQL Profiler Report : E-Business Suite, Concurrent Program - The executable is FNDPMPRPT. Calls fnd_trace_utils.plsql_prof_rpt.

  fnd_trace_utils : E-Business Suite, Package - Used to produce the PL/SQL Profiler report. Uses the dbms_profiler hidden procedure (rollup_run) and accesses the tables plsql_profiler_runs, plsql_profiler_units and plsql_profiler_data.

  AFPMUTLS.pls : E-Business Suite, SQL Script - fnd_trace_utils package specification.

  AFPMUTLB.pls : E-Business Suite, SQL Script - fnd_trace_utils package body.

  profiler.sql : Oracle Server Technologies Center of Expertise, SQL Script - Provides an alternative report on PL/SQL Profiler data.

E-Business Suite Forms, Self Service (OAF) and the Debug Rules/Options functionality available from Oracle Applications Manager and Concurrent Request submission all utilise the database functionality (package dbms_profiler and plsql_profiler tables to collect PL/SQL profiler data).

See Oracle Database PL/SQL Packages and Types Reference for more information on the database tables and package.

12.5 Installing PL/SQL Profiler package

To install the PL/SQL Profiler tables, run the following as the APPS user:

$ORACLE_HOME/rdbms/admin/proftab.sql 

Note that if the tables and sequence (in this file) are created in a different schema then synonyms will be needed in the APPS schema.

To install the dbms_profiler package, run the following (as the user running the application e.g. APPS):

$ORACLE_HOME/rdbms/admin/profload.sql

See My Oracle Support document "Using DBMS_PROFILER (Document 97270.1)"

12.6 Common issues

If the PL/SQL Profiler report(s) do not return any detailed information then it could be because the PL/SQL is compiled in NATIVE mode. See Oracle Database PL/SQL Packages and Types Reference for more information.

Note that if the PL/SQL Profiler objects (installed with proftab.sql and profload.sql) are created in a schema other than APPS, but are accessed by other database users, then it will be necessary to add privileges (grant SELECT/INSERT/UPDATE) and synonyms for the tables and sequence. See Oracle Database PL/SQL Packages and Types Reference for more information.
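As an illustration of the grants and synonyms needed, assuming the profiler objects were created in a hypothetical PROFOWNER schema and are to be used by APPS:

```sql
-- Sketch: PROFOWNER is a hypothetical owning schema for the profiler objects.
GRANT SELECT, INSERT, UPDATE ON profowner.plsql_profiler_runs  TO apps;
GRANT SELECT, INSERT, UPDATE ON profowner.plsql_profiler_units TO apps;
GRANT SELECT, INSERT, UPDATE ON profowner.plsql_profiler_data  TO apps;
GRANT SELECT ON profowner.plsql_profiler_runnumber TO apps;  -- the sequence

CREATE SYNONYM apps.plsql_profiler_runs      FOR profowner.plsql_profiler_runs;
CREATE SYNONYM apps.plsql_profiler_units     FOR profowner.plsql_profiler_units;
CREATE SYNONYM apps.plsql_profiler_data      FOR profowner.plsql_profiler_data;
CREATE SYNONYM apps.plsql_profiler_runnumber FOR profowner.plsql_profiler_runnumber;
```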

12.7 PL/SQL Hierarchical profiler (HPROF)

The PL/SQL hierarchical profiler has a better hierarchical breakdown of the time spent in each program unit, showing how much time was consumed in the actual program unit (Function), the number of calls and how much time was consumed in its sub-programs (Descendents).

Time spent in SQL and PL/SQL is accounted separately.

However, it does not show the time spent at line level and it is not integrated into Oracle E-Business Suite functionality like PL/SQL Profiler.

It does not need to be inserted into source code, but it can only be enabled for the current session.

So for Oracle E-Business Suite the only way it can be enabled (without modifying code) is to use the profile option "Initialization SQL Statement - Custom" (internal name FND_INIT_SQL), adding the following:

BEGIN Dbms_hprof.start_profiling('<profiling directory>','<trace_filename>'); END;

The results are stored in a trace file, but can be reported in html output or written to database tables for further analysis.
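Once the session has finished its work, the profiler can be stopped and the raw trace loaded for analysis. This is a sketch: the directory object name and trace file name are assumptions, and the analysis tables must first be created with $ORACLE_HOME/rdbms/admin/dbmshptab.sql.

```sql
-- Sketch: stop the hierarchical profiler, then load the raw trace
-- into the analysis tables (directory and file name are assumptions).
BEGIN
  dbms_hprof.stop_profiling;
END;
/

SELECT dbms_hprof.analyze(location => 'PLSHPROF_DIR',
                          filename => 'hprof_trace.trc') AS runid
FROM   dual;
```

Alternatively the plshprof command-line utility can generate HTML reports directly from the raw trace file.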

See here for instructions on how to obtain and report PL/SQL Hierarchical Profiler information.


13 SQLHC

When SQLs with possible sub-optimal execution plans have been identified then use SQLHC (sqlhc.sql).

  • This provides html reports, which contain most of the data required to identify the reasons why the CBO chose the sub-optimal execution plan. e.g. Health Check (Observations), CBO Statistics, Execution plans, Runtime Statistics, System/Instance (Initialization) Parameters and SQL Profiles/Baselines associated with the SQL.

  • It saves a huge amount of time by providing all information in one action (and in one zip file sqlhc_YYYYMMDD_HHMISS_SQLID.zip).

  • It is particularly useful if engineers/analysts do not have access to the instance.

  • It is developed and supported by the Oracle Server Technologies Center of Expertise (ST CoE).

  • It reports on information available across the entire RAC (i.e. AWR/statistics) and also that which is specific to the node (e.g. initialization parameters etc).

  • The same script (sqlhc.sql) can be used on all database versions.

  • It does not require any installation on the environment (other than the script file itself) and has a negligible performance impact.

The html reports are as follows:

  • sqlhc_*_1_health_check.html contains Observations (Health Check) and CBO Statistics Summary for Tables and Indexes
  • sqlhc_*_2_diagnostics.html contains:
    • SQL Plan Baselines and SQL Profiles
    • Cursor Sharing information
    • Summary of SQL Runtime Statistics for the SQL / execution plans
    • CBO Statistics for Tables, Columns, Indexes
    • System and Instance (Initialization) Parameters
  • sqlhc_*_3_execution_plans.html contains Current Execution Plans (Memory) and Historical Execution Plans (AWR)
  • sqlhc_*_4_sql_detail.html contains the SQL Detail Performance Report (from DBMS_SQLTUNE.REPORT_SQL_DETAIL).

If the SQL was executed when "statistics_level" was set to ALL (or "_rowsource_execution_statistics" is set to TRUE) then the current execution plans will show the actual row source statistics from memory (if still available).

So for maximum benefit it should be run soon after the SQL has executed.

Note that SQLHC may report misleading CBO statistics and other metadata (e.g. if they have changed since the time at which the program/SQL was run).

The section "How to interpret the output? > 4. Understanding sqltxtract output or sqlhc output" in My Oracle Support document "Oracle E-Business Suite SQL Tuning Tips for Customers (Document 2098522.1)" has some useful examples of how information from the SQLT or SQLHC can be used to resolve issues.

13.1 Latest Version

The SQLHC tool will be updated from time to time, so always make sure that the latest version is installed.

The latest versions can be downloaded from the My Oracle Support document "SQL Tuning Health-Check Script (SQLHC) (Document 1366133.1)"

13.2 More Information

The main source of information is the My Oracle Support document "SQL Tuning Health-Check Script (SQLHC) (Document 1366133.1)". The SQLHC script (zip file) can be downloaded from this document. It also contains an excellent presentation “How_to_improve_SQL_Performance_with_new_SQL_Health_Check_Tool.pptx” and example output.


13.3 Running

See the "Instructions" section of the My Oracle Support document "SQL Tuning Health-Check Script (SQLHC) (Document 1366133.1)" for more information.

The sqlhc.sql script requires two parameters :

  1. Oracle Pack License (Tuning, Diagnostics or None) [T|D|N]
  2. A valid SQL_ID for the SQL to be analyzed. See here for more information on obtaining the SQL ID.
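A typical invocation from SQL*Plus looks like this (a sketch: the SQL ID is hypothetical, and 'T' indicates the Tuning Pack is licensed):

```sql
-- Sketch: run SQLHC as a DBA user from the directory containing sqlhc.sql.
START sqlhc.sql T f9u2k7wm8xgnb
```

The script prompts for any missing parameters and writes its output to a zip file in the current directory.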

13.4 Running Issues - Plan Table

SQLHC requires the 'plan_table'.

On most databases, the plan table is pre-created centrally and referenced by a public synonym using the 'catplan.sql' script from $ORACLE_HOME/rdbms/admin. If it is not there or accessible then check the installation logs for issues.

14 SQLT with XTRACT Method

This provides an html report (sqlt_snnnnn_main.html), which contains most of the data required to identify the reasons why the CBO chose the execution plan.

  • It contains execution plans, runtime statistics, CBO statistics, DB parameters, CBO parameters, performance statistics, SQL Profiles/Baselines and metadata (schema object definitions etc) associated with the SQL.

  • Depending on the SQLT parameters it can also contain supporting information such as AWR reports, ASH Reports, ADDM Reports, CBO (10053) trace, SQL Monitor report, EBS Initialization Parameters healthcheck report (bde_chk_cbo), Test Case Builder and SQL Tuning Advisor.

  • It saves a huge amount of time by providing all information in one action (and in one zip file sqlt_snnnnn_method_*.zip).

  • It is particularly useful if engineers/analysts do not have access to the instance.

  • It is developed and supported by the Oracle Server Technologies Center of Expertise (ST CoE).

Note that SQLT requires its own source code and objects on the database, so it needs to be installed.

See My Oracle Support document "All About the SQLT Diagnostic Tool (Document 215187.1)" for more information.

The section "How to interpret the output? > 4. Understanding sqltxtract output or sqlhc output" in My Oracle Support document "Oracle E-Business Suite SQL Tuning Tips for Customers (Document 2098522.1)" has some useful examples of how information from the SQLT or SQLHC can be used to resolve issues.

It has 3 main methods (XPLAIN, XTRACT and XECUTE).

  • XTRACT. This obtains the runtime execution plan from memory or AWR. The SQL ID or SQL Hash Value will be required.

  • XPLAIN. This requires a file containing the SQL and uses the EXPLAIN PLAN FOR statement to obtain the explain plan. The application context may also need setting before running.

  • XECUTE. This requires a SQL script containing the SQL to be executed, with the application context set and bind variables declared and populated. Ideally the data should be in a similar state to when the SQL was originally executed, including any temporary (e.g. Global Temporary Tables) and transitory tables (e.g. interface tables).

The XTRACT method is recommended because it gives the runtime execution plan (provided the runtime execution plan is available in memory or AWR).

If the runtime execution plan has been obtained from another source (e.g. SQL Trace, Display Cursor or SQL Monitor) then XPLAIN will suffice.

14.1 Methods

There are 3 main methods:

XTRACT

This does not actually execute the SQL, so a script is not needed and time is saved.

It requires the SQL ID or SQL Hash Value.

It obtains the runtime execution plan from memory or AWR and row source statistics (if available) from memory.

It is RAC aware.

If the SQL was executed when "statistics_level" was set to ALL (or "_rowsource_execution_statistics" is set to TRUE) then it will contain actual row source statistics on the execution plan.

It also uses SQL Tuning Advisor and Test Case Builder. If the SQL is no longer in memory or AWR then this method errors.

This is the preferred option.

However, for maximum benefit it should be run soon after the SQL has executed.
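The XTRACT method is run from the SQLT installation's run directory; as a sketch (the SQL ID is hypothetical):

```sql
-- Sketch: run the SQLT XTRACT method, connected as the application
-- user (e.g. APPS); the SQL ID is hypothetical.
START sqltxtract.sql f9u2k7wm8xgnb
```

The script prompts for the SQLTXPLAIN password and produces a zip file containing the main report and supporting files.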

XPLAIN

This requires a file containing the SQL. The application context may also need to be set.

This may report a different execution plan than the one used at runtime.

It is also based on the EXPLAIN PLAN FOR statement, so it does not do bind peeking.

It has less information than XTRACT or XECUTE.

However, if the runtime execution plan has already been obtained from another source (e.g. SQL Trace, Display Cursor or SQL Monitor) and the SQL/execution plan is no longer in memory or AWR then this is the preferred option.

The majority of SQLs in E-Business Suite use either Multi-org views (11.5.10) or Synonyms (R12). These have database policies and contexts applied (virtual private database). In addition there are multi-language views/tables and HR uses security groups. These views and policies rely on the application context (e.g. the user, responsibility, application and security group), which add additional predicates to the SQL. Without the application context being set the execution plan could be inaccurate.

XECUTE

This requires a SQL script containing the SQL to be executed, with the application context set and bind variables declared and populated.

Although this provides all the information, bear in mind that it may use a different execution plan than the one used at runtime.

It may be difficult to provide a SQL script that reproduces the conditions (data state, sequence of actions etc) of the program. The data may not be in a similar state to when the SQL was originally executed, including any temporary (e.g. Global Temporary Tables) and transitory tables (e.g. interface tables).

So, it is preferable to run the program again and use the XTRACT method.

The majority of SQLs in E-Business Suite use either Multi-org views (11.5.10) or Synonyms (R12). These have database policies and contexts applied (virtual private database). In addition there are multi-language views/tables and HR uses security groups. These views and policies rely on the application context (e.g. the user, responsibility, application and security group) to determine the data returned by views or synonyms. Without the application context being set these views/synonyms will return no rows.
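An XECUTE input script can be sketched as follows. This is illustrative only: the apps_initialize id values are hypothetical and must be replaced with those matching the user/responsibility that ran the SQL, and the SQL itself stands in for the statement under investigation.

```sql
-- Sketch of an XECUTE input script: set the applications context,
-- then declare and populate the bind variables before the SQL.
-- The user_id/resp_id/resp_appl_id values are hypothetical.
BEGIN
  fnd_global.apps_initialize(user_id      => 1111,
                             resp_id      => 2222,
                             resp_appl_id => 222);
END;
/

VARIABLE b1 VARCHAR2(1)
EXEC :b1 := 'A'

SELECT COUNT(*)
FROM   hz_cust_site_uses_all csu
WHERE  csu.status = :b1;
```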

Preference

So the preference is:

  1. XTRACT, if the SQL/runtime execution plan is still in memory or AWR.

  2. XPLAIN, if the runtime execution plan has been obtained from another source (e.g. SQL Trace, Display Cursor or SQL Monitor). This will provide the remaining information (e.g. CBO statistics, DB parameters, CBO parameters, performance statistics and metadata) associated with the SQL.

  3. Otherwise, run the program again (with "statistics_level" set to ALL or "_rowsource_execution_statistics" set to TRUE) and use the XTRACT method.

14.2 Row Source Statistics / Statistics_Level = ALL

The SQLT (with XTRACT or XECUTE method) (and SQLHC) will also report the row source statistics if they are available.

Row source statistics are the actual (not estimated) statistics (starts, number of rows, elapsed time, physical reads, buffer gets, memory used (in PGA)) for each execution plan line.

They are available if the "statistics_level" initialization parameter is set to ALL (or "_rowsource_execution_statistics" is set to TRUE) at the time the SQL is executed.

If testing from SQL*Plus then the hint /*+ gather_plan_statistics*/ can be used to collect row source statistics for a specific SQL (even if "statistics_level" is not ALL and "_rowsource_execution_statistics" is FALSE). This could also be used in the SQL script if using the XECUTE method.
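For example, a test run from SQL*Plus might look like the following sketch (the query itself is illustrative):

```sql
-- Collect row source statistics for this statement only, without changing
-- "statistics_level" at session or system level.
SELECT /*+ gather_plan_statistics */ COUNT(*)
FROM   dual;

-- Report the actual (not estimated) row source statistics for the last run.
SELECT *
FROM   TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
```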

14.3 Use the XTRACT method as soon as possible after a SQL statement has executed

This also applies to SQLHC.

There are three reasons for this:

  1. If the row source statistics have been gathered (e.g. "statistics_level" set to ALL / "_rowsource_execution_statistics" is set to TRUE) then there is a higher probability that they will still be in memory.

  2. If the AWR retention period is short, running XTRACT promptly increases the probability that the runtime execution plan will still be in memory or AWR.

  3. There is a higher probability that the CBO statistics and other metadata will be consistent with the state when the SQL was executed. The SQLT reports the CBO statistics at the time the SQLT is run, not the point at which the SQL was executed. There will be cases where the CBO statistics have been subsequently gathered for some tables (either as part of Oracle E-Business Suite business processes or upgrades or manually by DBAs).

14.4 Latest Version

The SQLT tool will be updated from time to time.

Always make sure that the latest version (for the database version) is installed.

The latest versions can be downloaded from the My Oracle Support document "All About the SQLT Diagnostic Tool (Document 215187.1)".

There are different versions of SQLT for each database version (from 9.2 through to 12c). Always download and install the version specific to the database version being used.

14.5 More Information

See My Oracle Support document "All About the SQLT Diagnostic Tool (Document 215187.1)".

This document contains:

  • A number of attachments, including sample output and a PowerPoint presentation, that provide more information about SQLT.

  • Links to other My Oracle Support documents, which include usage instructions, frequently asked questions and changes.

The installation zip files contain useful information in the following files:

  • sqlt_instructions.html / sqlt_instructions.txt – Instructions for installation and usage.

  • sqlt_changes.html – Changes for each version.

  • The version for Database Release 9i and 10.1 also contains some presentations (SQLTXPLAIN1.pdf / SQLTXPLAIN2.pdf / SQLTXPLAIN3.pdf) on how to use SQLT and interpret the output.

14.6 Installing and Running

See the sqlt_instructions.html file that is delivered with the install zip for instructions on installing and running SQLT. These are also in My Oracle Support document "SQLT Usage Instructions (Document 1614107.1)".

For the XTRACT method the SQL ID or hash value is required.

For the XECUTE method a text file containing the SQL, application context and declared / initialized bind variables (if required) will be needed.
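As a hedged sketch, an XECUTE input file might look like the following (the table, bind value and context values are illustrative; the exact format expected is described in the SQLT usage instructions):

```sql
-- Hypothetical XECUTE input script. Declare and initialize binds exactly as
-- the application would, and set the E-Business Suite application context
-- first so that multi-org views/synonyms return rows.
VAR b1 NUMBER
EXEC :b1 := 204;  -- e.g. an org_id
EXEC fnd_global.apps_initialize(<user_id>, <resp_id>, <resp_appl_id>);

SELECT /* xecute_test */ COUNT(*)
FROM   oe_order_headers_all
WHERE  org_id = :b1;
```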

For the XPLAIN method a text file containing the SQL text is required. If the SQL contains bind variables then they can be left in the SQL or replaced with literals of the same datatype. However, remember to use TO_DATE conversions on literals for date columns.

Be aware that XPLAIN is blind to bind variable values used in the SQL statement.

14.7 Installation or Running Issues

General

Most errors that occur are due to incorrect installation.

Please obtain the latest version from My Oracle Support document "All About the SQLT Diagnostic Tool (Document 215187.1)".

Refer to the instructions sqlt_instructions.html (also on My Oracle Support document "SQLT Usage Instructions (Document 1614107.1)").

Check the install log files (*.log).

And re-install exactly as indicated in the instructions, using default names for the directories.

See My Oracle Support document "Troubleshooting SQLT Issues (Document 1521607.1)" for links to some useful My Oracle Support Community (MOSC) discussions on SQLT issues.

Issues writing to files

By default, SQLT will pick up the stage directory location from the database parameter "user_dump_dest".

In Database version 11.2 the install script can fail with errors (ORA-29283: invalid file operation, ORA-20102: SQLT$STAGE: not capable of reading or writing), because the directory location in initialization parameter "user_dump_dest" is a symbolic linked directory.

See My Oracle Support document "Installing SQLT fails with ORA-29283: invalid file operation (Document 1362270.1)" for more information and a solution.

14.8 Obtaining SQL ID and Hash Value

The XTRACT method requires the SQL ID or the SQL hash value (not to be confused with the plan hash value).

See here for more information on obtaining the SQL ID.

The TKPROF and SQL Trace also contain the plan_hash (identified by plh= token in the SQL Trace). This is not to be confused with the SQL hash value.
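If the statement is still in the cursor cache, the SQL ID, SQL hash value and plan hash value can be found by querying V$SQL, for example (the LIKE pattern is illustrative):

```sql
SELECT sql_id,
       hash_value,        -- the SQL hash value
       plan_hash_value,   -- the execution plan hash - not the SQL hash value
       child_number,
       last_active_time
FROM   v$sql
WHERE  sql_text LIKE 'SELECT %some_distinctive_text%'
ORDER  BY last_active_time DESC;
```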

14.9 Improving Performance of SQLT

Note that SQLT runs AWR and ASH reports. Some dictionary objects (particularly WRH$_LATCH_CHILDREN) may have grown significantly, especially if "statistics_level" has been set to ALL (it is normally TYPICAL or BASIC), for example during an upgrade. So, fixed object and dictionary statistics should be gathered before running SQLT.
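Assuming a suitably privileged account (e.g. a DBA user), these statistics can be gathered as follows:

```sql
-- Gather dictionary and fixed object (X$/V$) statistics before running SQLT.
EXEC dbms_stats.gather_dictionary_stats;
EXEC dbms_stats.gather_fixed_objects_stats;
```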

SQLT can take quite a while to run.

To reduce the workload, it is recommended that the following is run (from SQL*Plus) before running sqltxtract.sql:

To disable the Test Case Builder (TCB) and limit the time spent by the SQL Tuning Advisor (STA):

  • EXEC sqltxplain.sqlt$a.set_param('test_case_builder', 'N');

  • EXEC sqltxplain.sqlt$a.set_param('sta_time_limit_secs', '30');

To disable the automatic export of a test case repository:

  • EXEC sqltxplain.sqlt$a.set_param('export_repository', 'N');

If SQLT still takes a long time, and the schema objects used by the SQL contain a large number of sub-partitions, the granularity of the data collected can be reduced as follows:

  • EXEC sqltxplain.sqlt$a.set_param('c_gran_segm', 'PARTITION');

  • EXEC sqltxplain.sqlt$a.set_param('c_gran_cols', 'PARTITION');

  • EXEC sqltxplain.sqlt$a.set_param('c_gran_hgrm', 'PARTITION');

Note that these commands can all be run as APPS. They do not need to be run as user SQLTXPLAIN.

These values are stored in a table called SQLTXPLAIN.SQLI$_PARAMETER. When they are set, they do not need to be re-set for each execution of SQLT. The current values can be checked by querying the table SQLTXPLAIN.SQLI$_PARAMETER.
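For example (the column names are an assumption based on the table name; check the installed SQLT version if they differ):

```sql
-- Check the current values of the SQLT parameters set above.
SELECT name, value
FROM   sqltxplain.sqli$_parameter
WHERE  name IN ('test_case_builder', 'sta_time_limit_secs', 'export_repository');
```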

To reduce the time further the counting of rows on tables can be disabled, by running the following. However, information on the actual number of rows in each table will be lost.

  • EXEC sqltxplain.sqlt$a.set_param('count_star_threshold', '0');

All of this assumes that a SQLT version greater than 1.4.4.4 (April 2, 2012) is being used.

14.10 Misleading CBO Statistics on SQLT and SQLHC

Note that SQLT and SQLHC report CBO statistics and other metadata at the time they are run, not the point at which the program/SQL was run.

It is possible that these could have changed since the program (with the performance issue) was run (especially the CBO Statistics).

  • CBO statistics could have been gathered for some tables since the program was run. For example, in the main R12 upgrade, the script adsstats.sql is run towards the end of the upgrade. This gathers CBO stats for all tables that are new or have grown/shrunk significantly.

  • Similarly, data may have been deleted from temporary, transitory or interface tables.

  • Or there may have been manual actions.

  • Global Temporary tables will usually contain data when they are accessed by the program (for that session), but will have no rows when accessed by SQLT (and a different session), even if the original session (and the rows in the Global Temporary table) persist.

15 Operating System Statistics

On linux/unix environments there are a number of individual operating system commands that can be used to monitor overall resource usage, or the resource usage for individual processes (e.g. "sar", "ps", "top", "ifconfig", "mpstat", "iostat", "netstat", "vmstat" etc.).

However, the OS Watcher and LTOM tools provide automated ways of running some of these commands and then collating and presenting the results.

15.1 OS Watcher

Oracle’s OS Watcher tool (oswbb) runs shell scripts that run the operating system tools (ps, top, ifconfig, mpstat, iostat, netstat, traceroute, vmstat, meminfo (Linux Only), slabinfo (Linux Only)) to collect CPU, Memory, I/O and Network usage and collate the results. OSWatcher Analyzer (oswbba) presents these results graphically.

oswbb and oswbba require installation; however, this is relatively simple and the overhead is low.

By default the diagnostics are only retained for 48 hours, but this can be configured.

Note that it is a unix / linux based utility.

For more information see My Oracle Support documents:

15.2 LTOM

The Lite Onboard Monitor (LTOM) is a java program that runs on unix / linux (e.g. Solaris, Linux, HP-UX, AIX, Tru64).

The LTOM System Profiler uses information from operating system commands (such as "top", "vmstat" and "iostat") and Oracle database views (v$session, v$process, v$session_wait, v$system_event and v$sysstat) to profile the system usage.

LTOM also includes:

  • Automatic Session Tracing, which determines which sessions to SQL trace (Extended SQL Trace) using a set of rules, which can be user defined.

  • Automatic Hang Detection, which uses configurable rules (based on database wait events) to identify and trace sessions that may have hung. It can also notify by email.

  • LTOMg graphing utility, which can be used to present the information produced by the LTOM System Profiler in graphs and an html report.

See the My Oracle Support documents

Diagnosing High CPU Usage in Modules

The My Oracle Support document "How to Diagnose High CPU Usage Problems to the Module Level (Document 352648.1)" gives some tools and techniques for investigating processes that consume a large amount of CPU where there is no specific activity (e.g. very high number of logical reads) or wait event that explains this.

16 Locking

Sometimes a problem that appears to be a performance issue could actually be due to locking.

Particularly if high occurrences of lock-related wait events (such as "enq: TX - row lock contention", "enq: TM - contention" or "enq: UL - contention") are seen in the AWR or SQL Trace / TKPROF.

Be aware that the wait event "PL/SQL lock timer" is not caused by locks between sessions but by the USER_LOCK.SLEEP or DBMS_LOCK.SLEEP procedures in application code.

The best explanation of the Oracle Database Locking Mechanism is covered in the Oracle Database Concepts Guide, Chapter 9 - Data Concurrency and Consistency, Overview of the Oracle Database Locking Mechanism. This also covers latches, mutexes and internal locks.

My Oracle Support document "FAQ: Detecting and Resolving Locking Conflicts and Ora-00060 errors (Document 15476.1)" also gives a good explanation of how Oracle handles locking.

There are three main types of lock that are likely to be encountered: TX, TM and UL.

Here’s a brief explanation of all three.

16.1 TX (Row Locks)

A TX enqueue is created as soon as a transaction is started. It is uniquely identified by the rollback segment number, the slot number in the rollback segment's transaction table, and the sequence number of that slot.

The wait event for this lock is normally "enq: TX - row lock contention".

A session, whilst carrying out DML on a row (or SELECT … FOR UPDATE), can be waiting on a TX lock for the following main reasons:

  • Another session is carrying out DML on the requested row, or has locked it with SELECT … FOR UPDATE.

  • Another session has inserted, but not yet committed, a row with the same unique key value into the same table.

  • There are no free ITL (interested transaction list) slots in the block header. i.e. too many concurrent DML transactions are modifying rows in the same block. Increase INITRANS (and/or MAXTRANS) for the segment. Alternatively the number of rows in a block can be reduced by re-organizing the table; either reducing the block size or increasing the PCTFREE. The wait event for this situation is "enq: TX - allocate ITL entry".

My Oracle Support document "Waits for 'Enq: TX - ...' Type Events - Transaction (TX) Lock Example Scenarios (Document 62354.1)" explains some scenarios for TX locks, including:

  • Insufficient ITL slots (INITRANS) in a block

  • Waits due to rows being covered by the same BITMAP index fragment.
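Where "enq: TX - allocate ITL entry" waits are confirmed, increasing INITRANS might be sketched as follows (the object names and value are hypothetical; test before applying in production):

```sql
-- Allow more concurrent transactions per block on the hot segment.
ALTER TABLE apps.xx_hot_table INITRANS 16;
-- INITRANS only applies to new blocks; MOVE re-creates the existing blocks.
ALTER TABLE apps.xx_hot_table MOVE;
-- A table MOVE invalidates its indexes, so rebuild them (raising INITRANS too).
ALTER INDEX apps.xx_hot_table_n1 REBUILD INITRANS 16;
```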

16.2 TM (Table Locks)

This lock is acquired by a transaction when using DML (or SELECT … FOR UPDATE) or by the LOCK TABLE statement.

DML operations require table locks to reserve DML access to the table on behalf of a transaction and to prevent DDL operations that would conflict with the transaction.

The wait for this lock is "enq: TM - contention".

16.3 UL (User Lock)

These are user specified locks.

They can be requested and released using the DBMS_LOCK package procedures REQUEST and RELEASE.

The wait for this lock is "enq: UL - contention".
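A minimal sketch of how application code takes and releases a UL lock (the lock id and timeout are illustrative):

```sql
DECLARE
   l_status INTEGER;
BEGIN
   -- Request an exclusive user lock, waiting up to 10 seconds.
   l_status := dbms_lock.request(id                => 1000001,
                                 lockmode          => dbms_lock.x_mode,
                                 timeout           => 10,
                                 release_on_commit => FALSE);
   IF l_status = 0 THEN   -- 0 = success, 1 = timeout, 2 = deadlock
      -- ... protected work ...
      l_status := dbms_lock.release(id => 1000001);
   END IF;
END;
/
```

While a second session waits in dbms_lock.request for a lock held by another session, it shows the "enq: UL - contention" wait event.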

16.4 Identifying where locks are being held

The following will identify the locks that have been held for the longest time along with the blocking/blocked session:

SELECT
   lr.sid blocker_sid,
   ld.sid blocked_sid,
   lr.type, -- Type will typically be TM, TX, UL. Otherwise it is system type.
   lt.name,
   lr.id1, -- The value contained in these varies. For TM ID1 contains the object_id, for TX it is the Rollback entry
   lr.id2, -- The value contained in these varies. For TX it is the Transaction entry.
   decode(lr.lmode, 
          0,'none(0)',
          1,'null(NULL)(1)',
          2,'row-S(SS)(2)',
          3,'row-S(SX)(3)',
          4,'Share(S)(4)',
          5,'S/Row-X(SSX)(5)',
          6,'exclusive(X)(6)',
          LTRIM(TO_CHAR(lr.lmode,'990'))) lock_hold_mode,
   decode(lr.request,
          0,'none(0)',
          1,'null(NULL)(1)',
          2,'row-S(SS)(2)',
          3,'row-S(SX)(3)',
          4,'Share(S)(4)',
          5,'S/Row-X(SSX)(5)',
          6,'exclusive(X)(6)',
          LTRIM(TO_CHAR(lr.request,'990'))) lock_request_mode,
   lr.ctime time_blocker_held,
   ld.ctime time_blocked_waiting
FROM v$lock lr, 
     v$lock ld, 
     v$lock_type lt 
WHERE lt.type (+) = lr.type
AND ld.id1 = lr.id1 -- rollback entries match
AND ld.id2 = lr.id2 -- transaction entries match
AND lr.block = 1 -- blocker
AND ld.block = 0 -- blocked
ORDER BY lr.ctime DESC;  

The following will give the chains of locks - with the longest first:

The key blocking sessions should be able to be identified from this.

Note that if there are occurrences of blocker sessions blocking multiple blocked sessions then parts of the chains will be reported multiple times (there will be one row for each combination of chain starts and ends).

SELECT * 
FROM 
   (SELECT 
       level chainlevel,
       blocked_sid,
       CONNECT_BY_ROOT blocker_sid ultimate_blocker_sid,
       sys_connect_by_path(blocker_sid,'/')|| '/' || blocked_sid blocking_path,
       time_blocked_waiting 
    FROM (SELECT 
             lr.sid blocker_sid, 
             ld.sid blocked_sid, 
             lr.id1,
             lr.id2,
             ld.ctime time_blocked_waiting
          FROM v$lock lr,
               v$lock ld
          WHERE ld.id1 = lr.id1 
          AND ld.id2 = lr.id2 
          AND lr.block = 1 
          AND ld.block = 0) linked_locks 
    CONNECT BY NOCYCLE PRIOR blocked_sid = blocker_sid) chains 
-- blocked end of chain is not a blocker session
WHERE NOT EXISTS (SELECT 'exists' 
                  FROM v$lock lr,
                       v$lock ld
                  WHERE ld.id1 = lr.id1
                  AND ld.id2 = lr.id2
                  AND lr.block = 1
                  AND ld.block = 0
                  AND lr.sid = chains.blocked_sid)
-- blocker end of chain is not a blocked session 
AND NOT EXISTS (SELECT 'exists' 
                FROM v$lock lr,
                     v$lock ld
                WHERE ld.id1 = lr.id1
                AND ld.id2 = lr.id2
                AND lr.block = 1
                AND ld.block = 0
                AND ld.sid = chains.ultimate_blocker_sid)
ORDER BY chainlevel DESC; 

The following will report any cycles/deadlocks:

However, deadlock detection should prevent these from occurring.

SELECT * 
FROM (SELECT 
         DECODE(CONNECT_BY_ISCYCLE,1,'YES','NO') is_a_cycle,
         level,
         blocked_sid,
         CONNECT_BY_ROOT blocker_sid ultimate_blocker_sid,
         sys_connect_by_path(blocker_sid,'/')|| '/' || blocked_sid blocking_path,
         time_blocked_waiting
      FROM (SELECT
               lr.sid blocker_sid,
               ld.sid blocked_sid,
               lr.id1,
               lr.id2,
               ld.ctime time_blocked_waiting
            FROM v$lock lr,
                 v$lock ld
            WHERE ld.id1 = lr.id1
            AND ld.id2 = lr.id2
            AND lr.block = 1
            AND ld.block = 0) linked_locks
      CONNECT BY NOCYCLE PRIOR blocked_sid = blocker_sid)
WHERE is_a_cycle = 'YES';

To get more information for any session ids (sid), with the longest running locks first:

SELECT 
   l.sid,
   l.block,
   DECODE(l.block,1,'Blocker','Blocked'),
   s.blocking_session,
   l.type,
   lt.name,
   l.id1,
   l.id2,
   decode(l.lmode,
          0,'none(0)',
          1,'null(NULL)(1)',
          2,'row-S(SS)(2)',
          3,'row-S(SX)(3)',
          4,'Share(S)(4)',
          5,'S/Row-X(SSX)(5)',
          6,'exclusive(X)(6)',
          LTRIM(TO_CHAR(l.lmode,'990')) ) lock_hold_mode,
   decode(l.request,
          0,'none(0)',
          1,'null(NULL)(1)',
          2,'row-S(SS)(2)',
          3,'row-S(SX)(3)',
          4,'Share(S)(4)',
          5,'S/Row-X(SSX)(5)',
          6,'exclusive(X)(6)',
          LTRIM(TO_CHAR(l.request, '990')) ) lock_request_mode,
   l.ctime time_held_s,
   s.seconds_in_wait,
   s.wait_class,
   s.event,
   s.p1text,
   s.p1,
   s.p1raw,
   s.p2text,
   s.p2,
   s.p2raw,
   s.p3text,
   s.p3,
   s.p3raw,
   s.serial#,
   s.program,
   s.client_identifier,
   s.terminal,
   s.command,
   ct.command_name,
   s.service_name,
   s.module,
   s.action,
   s.username,
   s.machine,
   DECODE(l.type, 'TM', o.object_name) object_name,
   s.sql_id,
   st.sql_text, -- just the first 64 characters of the SQL text
   st.hash_value,
   s.row_wait_obj#,
   s.row_wait_file#,
   s.row_wait_block#,
   s.row_wait_row#
FROM v$session s,
     v$lock l,
     v$sqltext st,
     v$lock_type lt,
     v$sqlcommand ct,
     dba_objects o
WHERE s.sid (+) = l.sid
AND (   (l.request != 0) -- either requesting a lock 
      OR 
        (l.lmode != 0 -- or holding a lock
         AND 
         EXISTS (SELECT 'exists' 
                 FROM v$lock ld 
                 WHERE ld.request != 0 
                 AND ld.id1 = l.id1 
                 AND ld.id2 = l.id2) -- and blocking a blocked session
        )
    )
AND st.sql_id (+) = s.sql_id
AND st.piece (+) = 0
AND o.object_id (+) = l.id1
AND lt.type (+) = l.type
AND ct.command_type (+) = s.command
-- AND l.sid = <session id>
ORDER BY l.ctime DESC;

The object and rowid for a particular blocked session can be determined by using the following SQL:

This is primarily for row locks (TX), but it may also show some relevant information for TM locks, depending on the mode the table was locked and the activity taking place on the locked session.

SELECT
   o.object_name,
   s.row_wait_obj#,
   s.row_wait_file#,
   s.row_wait_block#,
   s.row_wait_row#,
   dbms_rowid.rowid_create( 0, s.row_wait_obj#, s.row_wait_file#, s.row_wait_block#, s.row_wait_row#) restricted_rowid,
   dbms_rowid.rowid_create( 1, s.row_wait_obj#, s.row_wait_file#, s.row_wait_block#, s.row_wait_row#) extended_rowid
FROM v$session s,
     dba_objects o
WHERE s.row_wait_obj# = o.object_id (+)
AND s.sid = <sid>;

The rowid could be used to query the contents of the locked row on the object (table).
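For example (the owner, table name and rowid are placeholders taken from the output of the previous queries):

```sql
-- Inspect the row that the blocked session is waiting on.
SELECT *
FROM   <owner>.<table_name>
WHERE  rowid = CHARTOROWID('<extended_rowid>');
```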

The following SQL can be run for particular sql ids to get more SQL text:

SELECT
   sql_text -- First 1000 characters
FROM v$sql
WHERE sql_id = <sql_id>;

or:

SELECT
   sql_text
FROM v$sqltext
WHERE sql_id = <sql_id>
ORDER BY piece;

16.5 Identifying where locks were held

The information in the views v$lock and v$session is for current sessions only.

However, it is possible that the Active Session History (ASH) may have captured information on locks held for more than a few seconds.

The following query will give information on all sessions from the Active Session History (on DBA_HIST_ACTIVE_SESS_HISTORY) that were blocked due to a Row, Table or User lock.

A similar query using view V$ACTIVE_SESSION_HISTORY could be used to check the very recent history (not yet flushed to DBA_ACTIVE_SESS_HISTORY).

SELECT
   s.session_id,
   s.blocking_session,
   s.session_serial#,
   s.sql_id,
   s.wait_class,
   s.event,
   s.p1text,
   s.p1,
   s.p2text,
   s.p2,
   s.p3text,
   s.p3,
   o.object_type,
   o.object_name,
   s.current_obj#,
   s.current_file#,
   s.current_block#,
   s.current_row#,
   s.program,
   s.module,
   s.action,
   s.client_id,
   s.machine,
   COUNT(*)*10 approx_wait_secs, -- note that if 10 seconds is reported then it could be a lot less
   MIN(s.sample_time) start_sample_time,
   MAX(s.sample_time) end_sample_time
FROM dba_hist_active_sess_history s,
     dba_hist_seg_stat_obj o 
WHERE s.dbid = <db_id>
AND s.instance_number = <inst_num>
AND s.snap_id BETWEEN <begin_snap> and <end_snap>
AND o.dbid (+) = s.dbid
AND o.obj# (+) = s.current_obj#
AND s.blocking_session IS NOT NULL
AND s.event IN 
    ('enq: TX - row lock contention'
    ,'enq: TM - contention'
    ,'enq: UL - contention'
    ,'enq: TX - allocate ITL entry')
GROUP BY s.session_id, s.blocking_session, s.session_serial#, s.sql_id, s.wait_class, s.event, s.p1text, s.p1, s.p2text, s.p2, s.p3text, s.p3, o.object_type, o.object_name, s.current_obj#, s.current_file#, s.current_block#, s.current_row#, s.program, s.module, s.action, s.client_id, s.machine
ORDER BY COUNT(*) DESC;

The following query will give information on chains of locks (due to table, row or user locks) that have occurred for more than a few seconds, with the longest chains first:

SELECT 
   level,
   sample_time,
   session_id blocked_sid,
   CONNECT_BY_ROOT blocking_session ultimate_blocker_sid,
   sys_connect_by_path(blocking_session,'/')|| '/' || session_id blocking_path 
FROM
   (-- Blocked Sessions
    SELECT
       s.session_id,
       s.blocking_session,
       s.sample_time
     FROM dba_hist_active_sess_history s 
    WHERE s.dbid = <db_id>
    AND s.instance_number = <inst_num>
    AND s.snap_id BETWEEN <begin_snap> and <end_snap>
    AND s.blocking_session IS NOT NULL
    AND s.event IN 
        ('enq: TX - row lock contention'
        ,'enq: TM - contention'
        ,'enq: UL - contention'
        ,'enq: TX - allocate ITL entry')
    UNION
    -- Blocking Sessions
    SELECT s.session_id,
           s.blocking_session,
           s.sample_time
     FROM dba_hist_active_sess_history s 
    WHERE s.dbid = <db_id>
    AND s.instance_number = <inst_num>
    AND s.snap_id BETWEEN <begin_snap> and <end_snap>
    AND s.blocking_session IS NULL
    AND s.event IN
        ('enq: TX - row lock contention'
        ,'enq: TM - contention'
        ,'enq: UL - contention'
        ,'enq: TX - allocate ITL entry')
    AND EXISTS
       (SELECT 'exists' 
        FROM DBA_HIST_ACTIVE_SESS_HISTORY bs 
        WHERE bs.dbid = <db_id>
        AND bs.instance_number = <inst_num>
        AND bs.snap_id BETWEEN <begin_snap> and <end_snap>
        AND bs.blocking_session = s.session_id
        AND bs.sample_time = s.sample_time
        AND bs.blocking_session IS NOT NULL
        AND bs.event IN 
            ('enq: TX - row lock contention'
            ,'enq: TM - contention'
            ,'enq: UL - contention'
            ,'enq: TX - allocate ITL entry')
       )
   )
CONNECT BY NOCYCLE PRIOR session_id = blocking_session 
                   AND PRIOR sample_time = sample_time 
ORDER BY level DESC, blocked_sid, sample_time;

16.6 "PL/SQL lock timer" waits

In this case there is no lock or interaction between sessions.

A session simply sleeps for a pre-determined length of time specified in a call to the USER_LOCK.SLEEP or DBMS_LOCK.SLEEP procedures.

However it can appear that a session is locked or there is a performance issue.

The details of sessions with "PL/SQL lock timer" waits can be queried using the following SQL:

SELECT
   s.sid,
   s.seconds_in_wait,
   s.wait_class,
   s.event,
   s.p1text,
   s.p1,
   s.p1raw,
   s.serial#,
   s.program,
   s.client_identifier,
   s.terminal,
   s.command,
   ct.command_name,
   s.service_name,
   s.module,
   s.action,
   s.username,
   s.machine,
   s.sql_id,
   st.sql_text, -- just the first 64 characters of the SQL text
   st.hash_value
FROM v$session s,
     v$sqltext st,
     v$sqlcommand ct
WHERE st.sql_id (+) = s.sql_id
AND st.piece (+) = 0
AND ct.command_type (+) = s.command
AND s.event = 'PL/SQL lock timer'
ORDER BY s.seconds_in_wait DESC;

16.7 Getting mode for TX Lock from SQL Trace or Active Session History (ASH)

If a TX lock is encountered in a trace or Active Session History (ASH) then it is possible to determine the mode of TX lock as follows:

Convert the mode (e.g. 1415053318) to hex (e.g. 54580006).

This can be done by using:

SELECT TRIM(TO_CHAR(1415053318,'XXXXXXXXX')) FROM dual; 

Note that the column V$SESSION.p1raw will contain the hex value anyway.

The first four hex digits give the lock type. Hex 54 = T in ASCII, 58 = X.

The last four hex digits give the lock mode (which is also available from V$LOCK for the holding session).
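The whole decode can also be done in one step; for example, for the value 1415053318 (= 0x54580006):

```sql
SELECT TRIM(TO_CHAR(1415053318,'XXXXXXXX'))      hex_value,  -- 54580006
       CHR(TRUNC(1415053318/16777216)) ||                    -- 0x54 = 'T'
       CHR(MOD(TRUNC(1415053318/65536),256))     lock_type,  -- 0x58 = 'X'
       MOD(1415053318,65536)                     lock_mode   -- 6 = exclusive
FROM   dual;
```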

See My Oracle Support document "How to Determine The Lock Type and Mode from an Enqueue Wait (Document 413934.1)".

17 Action Plan Templates

17.1 Display Cursor Report

Please produce a Display Cursor Report for SQL ID <sql_id>.

For the Display Cursor Report to be useful (i.e. contain actual row source statistics as well as the runtime execution plan):

  • The initialization parameter "statistics_level" must be set to ALL (or "_rowsource_execution_statistics" set to TRUE) at the time the SQL is executed.

  • The SQL must still be loaded in the cursor cache (memory).

The report can be produced by running:

SET pages 0
SET lines 300
SET LONG 10000
SET LONGCHUNKSIZE 10000

SPOOL <report_name>.txt 

SELECT * FROM TABLE(dbms_xplan.display_cursor('<sql_id>', NULL, 'ALL +ALLSTATS')); 

SPOOL OFF;

See here for how to obtain the SQL ID.

17.2 SQL Monitor Report

Please produce a SQL Monitor report for SQL ID <sql_id>.

The SQL Monitor report can only be used if:

  • Oracle Tuning Pack and Oracle Diagnostics Pack are licensed

  • The "control_management_pack_access" parameter is set to "DIAGNOSTIC+TUNING"

  • The initialization parameter "statistics_level" is set to "ALL" or "TYPICAL" (default) when the SQL is executed.

  • The SQL consumes more than 5 seconds of CPU or I/O time in a single execution or it uses parallel execution (SQL, DML or DDL).

Note that the SQL Monitor report will only show the Active Period, CPU Activity and Wait Activity for execution plan lines if the "statistics_level" initialization parameter is set to ALL ( or "_rowsource_execution_statistics" is set to TRUE) at the time the SQL is executed.

The report can be produced by running the following. The HTML format is preferred:

set trimspool on
set trim on
set pages 0
set long 10000000
set longchunksize 10000000
set linesize 200
set termout off

spool sql_monitor_for_<sql_id>.htm 

variable my_rept CLOB;

BEGIN

   :my_rept := dbms_sqltune.report_sql_monitor(sql_id => '<sql_id>', report_level => 'ALL', type => 'HTML'); 

END;
/ 

print :my_rept

spool off; 

set termout on

SQL Monitor reports are also available from Oracle Enterprise Manager.

See here for how to obtain the SQL ID.

17.3 Reports Client Process Trace

Please provide a Reports Client Process Trace.

For summary reports use the "Log all possible trace information" (TRACE_ALL) option.

For detail reports use the "Log report performance statistics" (TRACE_PRF) option. If this indicates that the issue is in "Reports Time" or "UPI" time then run the report again with the "Log all possible trace information" (TRACE_ALL) option.

Add Options to the Concurrent Program Definition

This method is available in both 11i and R12.

To produce a Reports Client Process Trace at Concurrent Program Level:

1. Log in to the environment that will be traced.

2. Choose the relevant responsibility for defining concurrent programs.

3. Navigate to the Concurrent > Program > Define window.

4. Query for the concurrent program to be traced.

5. Turn on Reports Trace for the concurrent program by entering TRACEOPTS=TRACE_ALL and TRACEFILE=<full path and filename> in the Options field. The two keywords are delimited by a space. TRACEOPTS can also be set to TRACE_PRF. See here.

Options must contain TRACEFILE along with TRACEOPTS, otherwise the request will error with "REP-0715: Trace file must be specified when tracing."

6. Navigate to the Submit Request window (e.g. Request > Run).

7. Submit a concurrent request for the concurrent program.

8. Make a note of the request_id of the concurrent request.

9. Check the progress of the concurrent request in the requests window.

10. When the request has completed navigate to the Concurrent > Program > Define window and disable the Reports Trace by clearing the Options field.

11. Obtain the trace file from the location specified in TRACEFILE above.

Other values for TRACEOPTS are available. See here for a full list.

The TRACEMODE parameter can also be used to specify if trace information is appended to the file (TRACE_APPEND) or overwrites it (TRACE_REPLACE).

Specify Debug Options when submitting a Concurrent Request

This method is available in R12 and 11.5.10 (with latest ATG patch levels).

To produce a Reports Client Process Trace at Concurrent Request Level:

1. Log in to the environment that will be traced.

2. Set the profile option "Concurrent: Allow Debugging" (internal name FND_CONC_ALLOW_DEBUG) user-level value to "Yes".

3. Navigate to the Submit Request window (e.g. Request > Run).

4. Select the Concurrent Request to be run.

5. Click on the "Debug Options" button.

6. Enable Reports Trace by checking the "Reports Trace" Debug Option and then select Debug Option Value "Log all possible trace information". Debug Option Value "Log report performance statistics" can also be used. See here.

7. Click the "OK" button to confirm the selection. A "Create Debug Rule : Debug Option" page is displayed. Click on the "OK" button again.

8. Submit the concurrent request.

9. Make a note of the request_id of the concurrent request.

10. Check the progress of the concurrent request in the requests window.

11. When the request has completed, the trace file name and location (full path) can be obtained from the concurrent program log file.

Other levels are available in the Debug Option Value. See here for a full list.

Using Debug Workbench in Oracle Applications Manager

In both 11.5.10 (with latest ATG patch levels) and R12 it is also possible to specify Debug Options in the Debug Workbench in Oracle Applications Manager by creating a Debug Rule.

Site Map > Diagnostics and Repair (tab) > Diagnostics (heading) > Debug Workbench (link).

Create a Debug Rule from the Debug Workbench, select "Reports Trace" Debug Option and then select Debug Option Value "Log all possible trace information". Debug Option Value "Log report performance statistics" can also be used. See here.

The Debug Rule also specifies the context (either the specific User, Responsibility or Any (All)) and the number of repetitions or time span for which the debug/trace will be gathered.

By default the Debug Options are disabled. They can be enabled by setting the profile option "Concurrent: Allow Debugging" (internal name FND_CONC_ALLOW_DEBUG) user-level value to "Yes".

When the relevant actions are completed, remember to delete or end date the Debug Rule.

See the Oracle E-Business Suite Maintenance Guide (R12.2 and onwards) or System Administrator's Guide – Maintenance (prior to R12.2) for more information.

17.4 PL/SQL Profiler Report

Please provide a PL/SQL Profiler Report.

Obtain PL/SQL Profiler Report from Forms

To obtain a PL/SQL Profiler report for Oracle Forms:

1. Log in to the environment that will be traced.

2. Set the profile option "Utilities:Diagnostics" (internal name DIAGNOSTICS) user-level value to "Yes".

3. Choose the relevant responsibility for the form and navigate to the form that will be traced.

4. Enable profiling using the menu option Help > Diagnostics > Trace and check the "PL/SQL Profiling" box.

5. Execute the required actions on the Form.

6. When complete, disable profiling using the menu option Help > Diagnostics > Trace and uncheck the "PL/SQL Profiling" box.

7. If the "PL/SQL Profiler Report" concurrent request is not started automatically then do the following:

Identify the run id for the actions traced above using the SQL:

SELECT runid, related_run, TO_CHAR(run_date, 'DD-MON-YYYY HH24:MI:SS'), run_comment, run_comment1 FROM plsql_profiler_runs; 

Submit a request for concurrent program "PL/SQL Profiler Report" from the "System Administrator" responsibility.

Enter the Run ID parameter and set Purge Profiler Data = "No" (the default is "Yes").

8. When the "PL/SQL Profiler Report" concurrent request is complete, click on the "View Output" button to see the PL/SQL Profiler report.

Obtain PL/SQL Profiler Report from Self Service

Self Service pages are also sometimes referred to as OAF (Oracle Applications Framework) pages, after the technology framework used to build them.

To obtain a PL/SQL Profiler report for Self Service (OAF) pages:

1. Log in to the environment that will be traced.

2. Set the profile option "FND: Diagnostics" (internal name FND_DIAGNOSTICS) user-level value to "Yes".

3. Choose the relevant responsibility for the HTML-based application.

4. To enable the PL/SQL Profiler, click the "Diagnostics" icon at the top right of the page > Set Trace Level > Go > Enable PL/SQL Profiler > Save.

5. Select "Home" and then execute the required actions.

6. When complete, disable profiling by clicking on the "Diagnostics" icon > Set Trace Level > Go > Disable PL/SQL Profiler > Save.

7. Any "Trace Ids" displayed on the left-hand side of the screen relate to SQL Traces only (not the PL/SQL Profiler).

8. Exit the HTML Application and log out.

9. If the "PL/SQL Profiler Report" concurrent request is not started automatically then do the following:

Identify the run id for the actions traced above using the SQL:

SELECT runid, related_run, TO_CHAR(run_date, 'DD-MON-YYYY HH24:MI:SS'), run_comment, run_comment1 FROM plsql_profiler_runs; 

Submit a request for concurrent program "PL/SQL Profiler Report" from the "System Administrator" responsibility.

Enter the Run ID parameter and set Purge Profiler Data = "No" (the default is "Yes").

10. When the "PL/SQL Profiler Report" concurrent request is complete, click on the "View Output" button to see the PL/SQL Profiler report.

Obtain PL/SQL Profiler Report for a Concurrent Request

Note that this method is only available in R12 and 11.5.10 (with latest ATG patch levels).

To obtain a PL/SQL Profiler report for a Concurrent Request :

1. Log in to the environment that will be traced.

2. Set the profile option "Concurrent: Allow Debugging" (internal name FND_CONC_ALLOW_DEBUG) user-level value to "Yes".

3. Navigate to the Submit Request window (e.g. Request > Run).

4. Select the Concurrent Request to be run.

5. Click on the "Debug Options" button.

6. Enable the PL/SQL Profiler by checking the "PL/SQL Profiler" Debug Option. There is only one Debug Option Value, "PL/SQL Profiler", which is already selected.

7. Click the "OK" button to confirm the selection. A "Create Debug Rule : Debug Option" page is displayed. Click on the "OK" button again.

8. Submit the concurrent request.

9. Make a note of the request_id of the concurrent request.

10. Check the progress of the concurrent request in the requests window.

11. When the request has completed, the "PL/SQL Profiler Report" concurrent program will be submitted.

12. When the "PL/SQL Profiler Report" request has completed, select the "View Output" button (of the "PL/SQL Profiler Report") to view the output.

See My Oracle Support document "How to generate pl/sql profiler output for the concurrent program under EBS? (Document 808005.1)" for more information.

17.5 PL/SQL Hierarchical Profiler Report

Please provide a PL/SQL Hierarchical Profiler report.

To enable the PL/SQL Hierarchical profiler for an E-Business Suite session using the Profile Option method:

1. Log in to the environment that will be traced.

2. If the PL/SQL Hierarchical profiler is being enabled for the current (logged in) user account then navigate to the Profile > Personal window, otherwise navigate to the Profile > System window.

3. Query the profile option "Initialization SQL Statement - Custom" (internal name FND_INIT_SQL).

4. Enter the following as the profile value and save.

BEGIN dbms_hprof.start_profiling('<profiling directory>','<raw_output_filename>'); END; 

Note: If there is existing SQL in the profile then the above should be merged with the existing SQL.

5. Log in (again) to the application using the user account being traced. This is necessary because the Initialization SQL Statement is only executed at login.

6. Execute the actions to be profiled.

7. Exit the application and log out.

8. Navigate to the profile form again, remove the SQL above (ensuring any original SQL remains) from the "Initialization SQL Statement - Custom" profile and save.

9. Obtain the raw output file from the file location specified in dbms_hprof.start_profiling above.

10. Generate PL/SQL hierarchical profiler HTML reports from the raw output file using:

plshprof -output <html_root_filename> <profiler_output_filename> 

Where

<html_root_filename> is the name of the root HTML file to be created.

<profiler_output_filename> is the name of a raw profiler output file.

11. This will create a set of HTML reports, which can be browsed starting from the root HTML file. Please provide all these files.
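Outside of the profile-option method, the same profiler calls can also be exercised directly from SQL*Plus. The following is a minimal sketch, in which the directory object name, file system path, file name and procedure name are all placeholders:

```sql
-- Assumed: a writable path on the database server; the directory
-- object name plshprof_dir is a placeholder
CREATE DIRECTORY plshprof_dir AS '/tmp/hprof';

-- Start the hierarchical profiler, writing raw output to test_run.trc
BEGIN
  dbms_hprof.start_profiling('PLSHPROF_DIR', 'test_run.trc');
END;
/

-- Run the PL/SQL to be profiled (placeholder call)
BEGIN
  some_package.some_procedure;
END;
/

-- Stop profiling and close the raw output file
BEGIN
  dbms_hprof.stop_profiling;
END;
/
```

The raw output file can then be converted to HTML reports with the plshprof command as shown in step 10.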

17.6 SQLHC

Please provide the output from SQLHC for <sql_id>. See My Oracle Support Document 1366133.1.

  1. Login to the database server and set the environment used by the Database Instance

  2. Download the "sqlhc.zip" archive file from My Oracle Support Document 1366133.1 and extract the contents to a suitable directory/folder.

  3. Connect to SQL*Plus as SYS, a DBA account, or a user with access to the Data Dictionary views, and execute the "sqlhc.sql" script.

    It will prompt for two parameters:

    1. Oracle Pack License (Tuning, Diagnostics or None) [T|D|N] (required). If the site has both Tuning and Diagnostics licenses, specify T (the Oracle Tuning Pack includes the Diagnostics Pack).
    2. A valid SQL_ID for the SQL to be analyzed. See the "Obtaining SQL ID" section below for how to obtain the SQL ID.

SQLHC should be run on the same environment where the performance issue was observed, and as soon after the observed program/process as possible.
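The SQLHC invocation can be sketched as follows. The SQL_ID shown is a placeholder, and this example assumes a Tuning pack license:

```sql
-- Run from the directory where sqlhc.zip was extracted,
-- connected in SQL*Plus as SYS or a DBA account.
-- Parameter 1: license level (T = Tuning, D = Diagnostics, N = None)
-- Parameter 2: the SQL_ID to analyze (placeholder value shown)
START sqlhc.sql T f995z9antmhxn
```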

17.7 SQLT

Please provide SQLT output using the XTRACT/XPLAIN/XECUTE method for <SQL/sql_id>. See My Oracle Support Document 215187.1.

[XTRACT Method only, and runtime execution plan already known] If the SQL is no longer in memory (or AWR) then provide a SQLT with XPLAIN method instead.

For installation and running instructions, read the file sqlt_instructions.html (included in the SQLT installation zip file).

SQLT should be run on the same environment where the performance issue was observed, and as soon after the observed program/process as possible.

[XTRACT Method only] The XTRACT method requires the SQL ID or the SQL hash value (not to be confused with the plan hash value).

[XTRACT Method only] See the "Obtaining SQL ID" section below for how to obtain the SQL ID.

[XTRACT Method and program rerun only] Set the "statistics_level" initialization parameter to ALL (or "_rowsource_execution_statistics" to TRUE) before re-running the program (which runs the SQL in question) and running SQLT with XTRACT.
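Setting this parameter can be sketched as below. This assumes a suitably privileged account; note that statistics_level = ALL adds overhead instance-wide, so it should be reverted once the diagnostic has been captured:

```sql
-- Enable row source execution statistics before re-running the program
ALTER SYSTEM SET statistics_level = ALL;

-- ... re-run the program and capture SQLT with XTRACT ...

-- Revert to the default afterwards
ALTER SYSTEM SET statistics_level = TYPICAL;
```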

[XECUTE Method only] Please use the attached file. This file contains the application context, bind variable declaration / initialization and the SQL text.

[XECUTE Method only] If the SQL contains context sensitive views/synonyms/policies (e.g. Multi-Language or Multi-Organization in Oracle E-Business Suite) then the application context (language, user, responsibility, security group) will also need to be set in the SQL script. The "Obtaining Traces (TKPROF) in E-Business Suite - From SQL*Plus" section in the My Oracle Support document "Oracle E-Business SQL Trace and TKPROF Guide (Document 1674024.1)" has useful instructions on doing this.

[XPLAIN Method only] If the SQL contains context sensitive views/synonyms/policies (e.g. Multi-Language or Multi-Organization in Oracle E-Business Suite) then the application context (language, user, responsibility, security group) will also need to be set before running SQLT with XPLAIN method. The "Obtaining Traces (TKPROF) in E-Business Suite - From SQL*Plus" section in the My Oracle Support document "Oracle E-Business SQL Trace and TKPROF Guide (Document 1674024.1)" has useful instructions on doing this.

[XPLAIN Method only] Please use the attached file containing the SQL text.
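As an illustration of the XTRACT method, a run typically looks like the following. The SQL_ID is a placeholder, and the script prompts for the SQLTXPLAIN password (see the installation instructions in Document 215187.1 for specifics):

```sql
-- Run in SQL*Plus from the sqlt/run directory, connected as the
-- application user that executed the SQL (placeholder SQL_ID shown)
START sqltxtract.sql f995z9antmhxn
```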

17.8 Obtaining SQL ID

The sql_id can be obtained from:

  • AWR reports (SQL Statistics section)

  • ASH reports

  • TKPROF

  • SQL Trace (sqlid= token in the PARSING IN CURSOR line above the SQL Text)

  • View V$SQL

  • Oracle Enterprise Manager.

If querying V$SQL, use the following query:

SELECT sql_id, hash_value, SUBSTR(sql_text, 1, 80)
FROM v$sql
WHERE sql_fulltext LIKE '%<part of SQL text>%';

Alternatively, the sql_text column can be used to search on the first 1000 characters.

The UPPER function could also be used on column sql_fulltext or sql_text if the exact case of some of the SQL is unknown.
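For example, combining the two suggestions above, a case-insensitive search on the first 1000 characters of the SQL text might look like:

```sql
-- sql_text holds the first 1000 characters of the statement;
-- UPPER on both sides makes the match case-insensitive
SELECT sql_id, hash_value, SUBSTR(sql_text, 1, 80)
FROM v$sql
WHERE UPPER(sql_text) LIKE UPPER('%<part of SQL text>%');
```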

See My Oracle Support document "How to Determine the SQL_ID for a SQL Statement (Document 1627387.1)".

Oracle Database Documentation Library Links

The documentation libraries are available here.

Concepts

19c
12c Release 1
11g Release 2

Performance Tuning Guide

19c
12c Release 1
11g Release 2

Real Application Clusters Administration and Deployment Guide

19c
12c Release 1
11g Release 2

Database Advanced Queuing User's Guide

19c
12c Release 1
11g Release 2

PL/SQL Packages and Types Reference

19c
12c Release 1
11g Release 2

Development Guide

19c
12c Release 1
11g Release 2 (Advanced Application Developer's Guide)

Licensing Information

19c
12c Release 1
11g Release 2

