Thursday, July 8, 2021

SQL_TRACE (10046), TKProf and Explain Plan - Overview Reference (Doc ID 199081.1)

APPLIES TO:
Oracle Database - Standard Edition - Version 7.0.16.0 and later
Oracle Database - Enterprise Edition - Version 7.0.16.0 and later
Oracle Database - Personal Edition - Version 7.1.4.0 and later
Information in this document applies to any platform.
***Checked for relevance on 13-Jan-2014***

PURPOSE

This document gives an overview of some of the tools that can be used to discover more about SQL execution: SQL_TRACE, TKProf, Explain Plan and Autotrace. It supplements the information in:

Oracle® Database SQL Tuning Guide 12c Release 1 (12.1) E15858-15, 18.3 Understanding SQL Trace and TKPROF
http://docs.oracle.com/cd/E16655_01/server.121/e15858/tgsql_trace.htm#TGSQL792

SCOPE

For all DBAs.

DETAILS

NOTE: In the images and/or the document content below, the user information and data used represent fictitious data from the Oracle sample schema(s) or Public Documentation delivered with an Oracle database product. Any similarity to actual persons, living or dead, is purely coincidental and not intended in any manner.

Overview Reference for SQL_TRACE, TKProf and Explain Plan

Oracle provides extensive tracing and interpretation tools to assist analysts in diagnosing and resolving sub-optimal performance. This article covers the most important of these tools and points to detailed reference articles.

Communities

This article outlines a number of different areas with regard to tracing, but you may find that someone in the community at large has already investigated a specific area of interest for you.
Communities have been set up for My Oracle Support (MOS) users, moderated by Oracle. Their goal is to exchange database tuning knowledge and concepts. For performance-related issues there are two communities: Database Tuning and SQL Performance. The SQL Performance community is likely to be the best match for TKProf discussions.

Tracing Related Initialization Parameters

Before tracing, there are a number of parameters that need to be set so that the trace information is complete. These parameters should be set up in the "initSID.ora" file for the particular instance (SID) where you wish to use SQL Trace, although most can also be set individually at the session level.

TIMED_STATISTICS
Used to enable/disable the collection of timed statistics, such as CPU and elapsed times.
TRUE - Enable timing
FALSE - Disable timing (default value)
Note that much of the tracing information that can be gathered is rendered useless if TIMED_STATISTICS is set to FALSE and timings are not collected.

MAX_DUMP_FILE_SIZE
Specifies the maximum size of trace files in operating system blocks. The default value was 10000 OS blocks in 8i, and from 9i it is limited only by the space available. If your trace file is truncated you will see a message similar to:
*** DUMP FILE SIZE IS LIMITED TO 12345 BYTES***
and the value of this parameter should be increased.

USER_DUMP_DEST
Specifies the destination for the trace file. The default value is the default destination for Oracle dumps on your operating system.

Note: Most of these parameters can be altered dynamically using ALTER SYSTEM / ALTER SESSION commands. For example, TIMED_STATISTICS can be enabled/disabled dynamically by using the following SQL statement:

ALTER SYSTEM/SESSION SET TIMED_STATISTICS = TRUE/FALSE;

(Note that USER_DUMP_DEST can only be modified at the system level.)

SQL_TRACE (10046 trace)

SQL_TRACE is the main method for collecting SQL execution information in Oracle.
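The session-level setup described above can be sketched as follows. This is standard ALTER SESSION / ALTER SYSTEM syntax; the destination path is a hypothetical example, and on recent releases trace location is governed by DIAGNOSTIC_DEST rather than USER_DUMP_DEST.

```sql
-- Collect CPU/elapsed timings; without this most trace timings are useless
ALTER SESSION SET timed_statistics = TRUE;

-- Avoid a truncated trace file (session level)
ALTER SESSION SET max_dump_file_size = UNLIMITED;

-- USER_DUMP_DEST can only be changed at the system level
-- (the path below is a hypothetical example)
ALTER SYSTEM SET user_dump_dest = '/u01/app/oracle/dumps';
```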
It records a wide range of information and statistics that can be used to tune SQL operations.

Enabling SQL_TRACE

The SQL Trace facility can be enabled/disabled for an individual session or at the instance level. If the initialization parameter SQL_TRACE is set to TRUE in the init.ora of an instance, then all sessions will be traced. Note that using this initialization parameter to enable the SQL trace facility for the entire instance can have a severe performance impact. See:

Oracle® Database Reference 12c Release 1 (12.1) E17615-20, SQL_TRACE
http://docs.oracle.com/cd/E16655_01/server.121/e17615/refrn10208.htm#REFRN10208

Generally, Oracle recommends that you use the DBMS_MONITOR and DBMS_SESSION packages to enable/disable tracing on other sessions (as well as your own). See:
Document 62160.1 Tracing Sessions in Oracle Using the DBMS_SUPPORT Package

SQL_TRACE can also be enabled/disabled at the system/session level by issuing the following SQL statement:

ALTER SYSTEM/SESSION SET SQL_TRACE = TRUE/FALSE;

In 11g and higher, sql_trace is also an event and can be set with event syntax:

SQL> oradebug doc event name sql_trace
sql_trace: event for sql trace
Usage
-------
sql_trace
  wait < false | true >,
  bind < false | true >,
  plan_stat < never | first_execution | all_executions | adaptive >,
  level

So you could use it as follows to enable SQL_TRACE requesting bind information:

alter session set events 'sql_trace bind=true';

or bind and wait information (note the comma separation):

alter session set events 'sql_trace bind=true, wait=true';

The trace can further be limited to a set of SQL_IDs if you include a filter for them, e.g.:

alter session set events 'sql_trace [sql: sql_id=g3yc1js3g2689 | sql_id=7ujay4u33g337] bind=true, wait=true';

See "oradebug doc event" for the full event syntax:

SQL> oradebug doc event
Event Help:
***********
...

SQL_TRACE can be set at the instance level by using the initialization parameter SQL_TRACE, which enables/disables SQL Trace instance-wide:
TRUE - Enable statistics to be collected for all sessions.
FALSE - Disable statistics to be collected for all sessions.

For more information on how to set up tracing, refer to:
Document 376442.1 Recommended Method for Obtaining 10046 trace for Tuning
Document 15160.1 Setting SQL Trace in the Oracle Tools
Document 21154.1 EVENT: 10046 "enable SQL statement tracing (including binds/waits)"

There is also extensive information in:
Oracle® Database SQL Tuning Guide 12c Release 1 (12.1) E15858-15, 18.3 Understanding SQL Trace and TKPROF
http://docs.oracle.com/cd/E16655_01/server.121/e15858/tgsql_trace.htm#TGSQL792

Trace Files

Oracle will generate trace (.trc) files for every session where SQL_TRACE=TRUE and write them to the USER_DUMP_DEST destination. If tracing has been enabled for the instance, individual trace files are generated for each session, unless otherwise disabled. Note that the generated files may be owned by an operating system user other than your own, so the necessary privileges will need to be put in place before they can be formatted with TKProf.

TKProf

The TKProf facility accepts a SQL trace file as input and produces a formatted output file. For the full syntax of TKProf see the Oracle Server Tuning Manual. If TKProf is invoked with no arguments, online help is displayed.

Basic Syntax of TKProf

TKPROF filename_source filename_output EXPLAIN=[username/password] SYS=[YES/NO] TABLE=[tablename]

filename_source - The trace file that needs to be formatted.

filename_output - The output file where the formatted output will be written.

EXPLAIN - This option causes TKProf to generate an execution plan for each SQL statement in the trace file as though it were being parsed at the time TKProf is run. TKProf connects as the user [username] and uses the EXPLAIN PLAN command to generate an execution plan for each traced SQL statement. These explain plans are written to the output file.
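A typical invocation following the syntax above might look like this (the file names and the scott/tiger credentials are placeholders):

```shell
tkprof ora_12345.trc ora_12345.prf explain=scott/tiger sys=no table=scott.plan_table
```

Here sys=no suppresses recursive SQL issued as SYS, and the table parameter is only relevant because explain= is specified.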
Note that in later releases the raw trace file automatically records the explain plan that was used at execution time, and TKProf will automatically output this plan. If the EXPLAIN option is selected then two explain plans appear in the TKProf output: the first is the plan that was used at execution time; the second is the plan generated dynamically at the time TKProf runs. If session settings etc. have changed between execution time and the time the trace file is processed by TKProf, then the two plans may differ.

TABLE - Specifies the schema and name of the table into which TKProf temporarily places execution plans before writing them to the output file. If the specified table already exists, TKProf deletes its rows, uses it for the EXPLAIN PLAN command, and then deletes its rows again. If the table does not exist, TKProf creates it, uses it, and then drops it. This parameter is ignored if the EXPLAIN parameter is not used.

SYS - Enables or disables the listing of SQL statements issued by the user SYS (recursive SQL statements) in the output file. The default value is YES.

For more information on using TKProf see:
Document 199081.1 - SQL_TRACE (10046), TKProf and Explain Plan - Overview Reference
Document 41634.1 TKProf Basic Overview
Document 29012.1 QREF: TKPROF Usage - Quick Reference
Document 760786.1 TKProf Interpretation (9i and above)
Document 214106.1 Using TKProf to compare actual and predicted row counts

There is also extensive information in:
Oracle® Database SQL Tuning Guide 12c Release 1 (12.1) E15858-15, 18.3 Understanding SQL Trace and TKPROF
http://docs.oracle.com/cd/E16655_01/server.121/e15858/tgsql_trace.htm#TGSQL792

As an alternative to TKProf, Trace Analyzer (TRCANLZR) can be used.
See:
Document 224270.1 Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Explain Plan

The EXPLAIN PLAN command generates information that details the execution plan that will be used for a particular query. The use of the command itself has largely been superseded by tools such as SQLT (see Document 215187.1 SQLT (SQLTXPLAIN) - Tool That Helps To Diagnose SQL Statements Performing Poorly), but it is still useful for looking at access paths for queries. Note that the EXPLAIN PLAN statement displays execution plans chosen by the Oracle optimizer for SELECT, UPDATE, INSERT, and DELETE statements only. For more details see:

Oracle® Database SQL Tuning Guide 12c Release 1 (12.1) E15858-15, Chapter 8 Generating and Displaying Execution Plans
http://docs.oracle.com/cd/E16655_01/server.121/e15858/tgsql_genplan.htm#TGSQL271

The EXPLAIN PLAN command uses a pre-created table (PLAN_TABLE) in the current schema to store information about the execution plan chosen by the optimizer.

Creating the Plan Table

The plan table is created using the script utlxplan.sql, typically found under the Oracle Home in the rdbms/admin directory:

On Unix: $ORACLE_HOME/rdbms/admin
On Windows NT/2000: %ORACLE_HOME%\rdbms\admin

This script creates the output table, called PLAN_TABLE, for holding the output of the EXPLAIN PLAN command. Note that the exact structure of the plan table can change between releases as new features are introduced.

Populating the Plan Table

The plan table is populated using the EXPLAIN PLAN command:

SQL> EXPLAIN PLAN FOR select * from emp where empno=1000;

This command inserts the execution plan of the SQL statement into the plan table. It is also possible to add a name tag to the explain information by using the SET STATEMENT_ID clause.

Displaying the Execution Plan

Once the table has been populated, the explain information needs to be retrieved and formatted.
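For example, a plan can be stored with a tag and then formatted with the DBMS_XPLAN package (the EMP query and the 'demo1' tag are illustrative; both DBMS_XPLAN.DISPLAY arguments may be omitted to show the most recently explained statement):

```sql
EXPLAIN PLAN SET STATEMENT_ID = 'demo1' FOR
  SELECT * FROM emp WHERE empno = 1000;

-- Retrieve and format the stored plan
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', 'demo1'));
```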
There are a large number of scripts available to format the plan table data. Some of the most popular are noted below.

Supplied scripts:
$ORACLE_HOME/rdbms/admin/utlxpls.sql - formats serial explain plans
$ORACLE_HOME/rdbms/admin/utlxplp.sql - formats parallel explain plans

Articles:
Document 235530.1 How to Obtain a Formatted Explain Plan - Recommended Methods
Document 31101.1 Obtaining Formatted Explain Plan Output
Document 39294.1 Formatted Select of PLAN_TABLE for EXPLAIN PLAN command
Document 39341.1 Automatic Explain Plan
Document 1019631.6 SCRIPT: Script to Simplify the Use of Explain Plan

Interpretation of Explain Plan

This is a complex topic and is covered in detail in the following article:
Document 46234.1 Interpreting Explain Plan

There is also extensive information in:
Oracle® Database SQL Tuning Guide 12c Release 1 (12.1) E15858-15, Chapter 8 Generating and Displaying Execution Plans
http://docs.oracle.com/cd/E16655_01/server.121/e15858/tgsql_genplan.htm#TGSQL271

Autotrace

The autotrace facility in SQL*Plus allows analysts to view the execution plan and some useful statistics for a SQL statement within a SQL*Plus session. The option was introduced in Oracle 7.3. Autotrace needs to be initiated in the SQL*Plus session prior to executing the statement. The command is:

SET AUTOTRACE [OPTIONS] [EXPLAIN/STATISTICS]

For a detailed explanation of the AUTOTRACE functions see:
Document 43214.1 AUTOTRACE option in 7.3
Document 31101.1 SCRIPT: DIAG - Obtaining Formatted Explain Plan Output with Autotrace

As with the EXPLAIN PLAN command, to obtain an execution plan the PLAN_TABLE must be created in the user's schema prior to autotracing.
Example:

SQL> set autotrace traceonly explain
SQL> select * from dual;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   TABLE ACCESS (FULL) OF 'DUAL'

To enable viewing of STATISTICS data, the autotracing user must have access to the dynamic performance tables. To achieve this, grant the PLUSTRACE role to the user. The PLUSTRACE role is created by the plustrce.sql script:

On Unix: $ORACLE_HOME/sqlplus/admin
On Windows NT/2000: %ORACLE_HOME%\sqlplus\admin

This script must be run by the SYS user. A DBA user can then grant the role to the users who wish to use the AUTOTRACE option. Refer to:
Document 1055431.6 ORA-01919 Using AUTOTRACE in SQL*Plus

Extensive reference information regarding the autotrace facility can be found in:
SQL*Plus® User's Guide and Reference Release 12.1 E18404-12, Chapter 8 Tuning SQL*Plus, Section: Controlling the Autotrace Report
http://docs.oracle.com/cd/E16655_01/server.121/e18404/ch_eight.htm#SQPUG535

REFERENCES
NOTE:32951.1 - TKProf Interpretation (9i and below)
NOTE:376442.1 - * How To Collect 10046 Trace (SQL_TRACE) Diagnostics for Performance Issues
NOTE:43214.1 - AUTOTRACE Option in SQL*Plus
NOTE:46234.1 - Interpreting Explain plan
NOTE:62160.1 - Tracing Sessions in Oracle Using the DBMS_SUPPORT Package
NOTE:760786.1 - TKProf Interpretation (9i and above)
NOTE:199081.1 - SQL_TRACE (10046), TKProf and Explain Plan - Overview Reference
NOTE:39341.1 - SCRIPT: Automatic Explain plan
NOTE:1019631.6 - Script: Script to Simplify the Use of Explain Plan
NOTE:21154.1 - EVENT: 10046 "enable SQL statement tracing (including binds/waits)"
NOTE:215187.1 - All About the SQLT Diagnostic Tool
NOTE:235530.1 - * How to Obtain a Formatted Explain Plan - Recommended Methods
NOTE:29012.1 - QREF: TKPROF Usage - Quick Reference

TKProf Interpretation (9i and above) (Doc ID 760786.1)

APPLIES TO:
Oracle Database - Enterprise Edition - Version 9.0.1.0 and later
Oracle Database Cloud Schema Service - Version N/A and later
Gen 1 Exadata Cloud at Customer (Oracle Exadata Database Cloud Machine) - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Backup Service - Version N/A and later
Information in this document applies to any platform.

PURPOSE

This document gives general advice on the use of TKProf. It supersedes the archived document:
Document 32951.1 TKProf Interpretation (8i and below)

Additionally there is a video entitled "SQL Tuning Using TKProf" in the Oracle Database Learning Stream.

SCOPE

For users needing advice on how to use TKProf.

DETAILS

TKProf is an executable that 'parses' Oracle trace files to produce more readable output. In the default mode, all the information in the TKProf output is available from the base trace file.
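Before TKProf can be run there must be a trace file to format. A minimal sketch of producing one for the current session (standard SQL_TRACE syntax; exiting the session cleanly at the end ensures the STAT lines needed for row source plans are written):

```sql
ALTER SESSION SET timed_statistics = TRUE;  -- timings are useless without this
ALTER SESSION SET sql_trace = TRUE;

-- ... run the statements to be analysed ...
SELECT NULL FROM dual;

ALTER SESSION SET sql_trace = FALSE;
-- then EXIT the session cleanly so all cursors close and STAT lines are recorded
```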
To produce a trace file from a session for interpretation see:
Document 376442.1 Recommended Method for Obtaining 10046 trace for Tuning
Document 21154.1 EVENT: 10046 "enable SQL statement tracing (including binds/waits)"

For TKProf usage see:
Document 29012.1 QREF: TKPROF Usage - Quick Reference

TKProf is also documented in the following manual:
Oracle® Database Performance Tuning Guide 11g Release 1 (11.1) Part Number B28274-02, Chapter 21 Using Application Tracing Tools

TKProf Structure

TKProf output for an individual cursor has the following structure:

SELECT NULL FROM DUAL

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      0.00       0.00          0          3          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4      0.00       0.00          0          3          0          1

Misses in library cache during parse: 0
Optimizer goal: FIRST_ROWS
Parsing user id: 271

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS FULL DUAL (cr=3 pr=0 pw=0 time=21 us cost=1 size=1 card=1)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2        0.00          0.00

Overall the structure is:
- SQL statement
- Parse/Execute/Fetch statistics and timings
- Library cache information
- Row source plan
- Events waited for by the statement

Parse/Execute/Fetch statistics and timings

This section contains the bulk of the useful timing information for each statement. It can be used in conjunction with the 'Row source plan' and 'Events waited for by the statement' sections to give the full picture. Columns in the Parse/Execute/Fetch table have the following meanings:

call - Statistics for each cursor's activity are divided into three areas: Parse, Execute and Fetch. A total is also calculated.
Parse - statistics from parsing the cursor. This includes information for plan generation etc.
Execute - statistics for the execution phase of the cursor.
Fetch - statistics for actually fetching the rows.
count - the number of times each individual activity has been performed on this particular cursor.
cpu - the CPU time used by this cursor.
elapsed - the elapsed time for this cursor (includes the CPU time).
disk - the number of blocks read from disk. Generally it is preferable for blocks to be read from the buffer cache rather than from disk.
query - incremented if a buffer is read in consistent mode. A consistent-mode buffer is one that has been generated to give a consistent read snapshot for a long-running transaction.
current - incremented if a buffer is found in the buffer cache that is new enough for the current transaction and is in current mode (and it is not a CR buffer). This applies to buffers that have been read into the cache as well as buffers that already exist in the cache in current mode.
rows - the rows retrieved by this step.

Library Cache information

Tracing a statement records some information regarding library cache usage, which TKProf externalizes in this section. Most important here is "Misses in library cache during parse:", which shows whether or not a statement is being re-parsed. If a statement is being shared well then you should see a minimal number of misses here (preferably 1 or 0); high values in this field indicate that sharing is not occurring.

Row source plan

This section displays the access path used at execution time for each statement, along with timings and the actual row counts returned by each step in the plan. This can be very useful for a number of reasons. Row source plans are generated from STAT lines in the raw trace.
STAT lines are written to the trace file periodically, but if the cursor is not closed cleanly then STAT lines will not be recorded and the row source plan will not be displayed. Setting SQL_TRACE to FALSE does NOT close all cursors. Cursors are closed in SQL*Plus immediately after execution. The safest way to close all cursors is to cleanly exit the session in question. See:
Document 312368.1 Why Row Source Plans or Row Counts are Missing in TKPROF Output

For details of interpreting 10046 output see:
Document 39817.1 Interpreting Raw SQL_TRACE and DBMS_SUPPORT.START_TRACE output

Example:

Rows     Row Source Operation
-------  ---------------------------------------------------
 [A] 1   TABLE ACCESS FULL DUAL [B] (cr=3 [C] pr=0 [D] pw=0 [E] time=21 us [F] cost=7 [G] size=7 [H] card=1 [I])

Row count [A] - the row counts in this section are the actual numbers of rows returned at each step in the query execution. These actual counts can be compared with the estimated cardinalities (row counts) from an optimizer explain plan; any differences may indicate a statistical problem that could result in a poor plan choice. See:
Document 214106.1 Using TKProf to compare actual and predicted row counts

Row Source Operation [B] - shows the operation executed at this step in the plan.

IO stats - for each step in the plan, [C] is the consistent reads, [D] the physical reads and [E] the physical writes. These statistics are useful for identifying steps that read or write a particularly large proportion of the overall data.

Timing - [F] shows the cumulative elapsed time for the step and the steps that preceded it, in microseconds (us: 1/1,000,000 of a second). This is very useful when looking for the point in an access path where the majority of the time originates, which makes it possible to narrow down a number of problems.

On later releases, row source trace has been enhanced to include some optimizer information.
[G] is the estimated cost of the operation, used by the optimizer for internal comparison; [H] is the estimated space usage of the operation in bytes; and [I] is the estimated cardinality (number of rows returned) of that particular operation.

Note that TKProf also contains an explain plan feature which allows users to generate the explain plan that the statement in question would use if it were executed NOW as the specified user. When looking at previously executed statements it is advisable not to use this option, but to rely instead on the row source plans generated from the STAT lines in the trace, as above. See the TKProf section in:
Document 199081.1 Overview Reference for SQL_TRACE, TKProf and Explain Plan

Events waited for by the statement

This section displays all wait events that a statement waited for during tracing. Used in conjunction with the statistics and row source information, it is very useful for tracking down the causes of problems associated with long wait times. High numbers of waits, or waits with a long total duration, may be candidates for investigation depending on the wait itself.

General Tips

If a system is performing sub-optimally, one way to diagnose potential causes is to trace a typical user session and then use TKProf to format the output. The numerous sort options available provide a useful way of organising the output, moving the 'top' statement in a particular category to the top of the list. A list of the sort options can be displayed by simply typing 'tkprof' at the command prompt. A useful starting point is the 'fchela' sort option, which orders the output by elapsed time spent fetching; the resultant .prf file will then display the most time-consuming SQL statement at the start of the file. For actions to deal with degraded query performance located in this manner see:
Document 742112.1 Support Action Plan for Query Performance Degradation

Another useful parameter is sys=yes/no.
This can be used to prevent SQL statements run as user SYS from being displayed, which can make the output file much shorter and easier to manage. Remember always to set the TIMED_STATISTICS parameter to TRUE when tracing sessions, as otherwise no time-based comparisons can be made.

Potential TKProf Usage Examples

Spotting Relatively High Resource Usage

update ... where ...

-----------------------------------------------------------------------
| call    | count | cpu | elapsed | disk | query   | current | rows   |
|---------|-------|-----|---------|------|---------|---------|--------|
| Parse   | 1     | 7   | 122     | 0    | 0       | 0       | 0      |
| Execute | 1     | 75  | 461     | 5    | [H] 297 | [I] 3   | [J] 1  |
| Fetch   | 0     | 0   | 0       | 0    | 0       | 0       | 0      |
-----------------------------------------------------------------------

This statement is a single execute of an update. [H] shows that the query visits 297 buffers to find the rows to update, [I] shows that only 3 buffers are visited performing the update, and [J] shows that only 1 row is updated. Reading 297 buffers to update 1 row is a lot of work and tends to indicate that the access path being used is not particularly efficient. Perhaps there is a missing index that would improve the access performance?

Spotting Over Parsing

select ...

-------------------------------------------------------------------------
| call    | count | cpu     | elapsed | disk | query  | current | rows  |
|---------|-------|---------|---------|------|--------|---------|-------|
| Parse   | [M] 2 | [N] 221 | 329     | 0    | 45     | 0       | 0     |
| Execute | 3     | [O] 9   | [P] 17  | 0    | 0      | 0       | 0     |
| Fetch   | 3     | 6       | 8       | 0    | [L] 4  | 0       | [K] 1 |
-------------------------------------------------------------------------

Misses in library cache during parse: 2 [Q]

Here we have a select that we suspect may be a candidate for over-parsing. [K] shows that the query has returned 1 row, and [L] shows that 4 buffers were read to get this row back. This is fine.
[M] shows that the statement is parsed twice, which is not desirable, especially as the parse CPU usage [N] is high in comparison to the execute figures [O] and [P] (i.e. the elapsed time for execute is 17 but the statement spends over 300 in the parse phase determining the access path etc.). [Q] shows that these parses are hard parses. If [Q] were 1 then the statement would have had 1 hard parse followed by a soft parse (which just looks up the already-parsed detail in the library cache). See:
Document 32895.1 SQL Parsing Flow Diagram
for more details.

This is not a particularly bad example in terms of total counts, since the query has only been executed a few times. However, if this pattern is reproduced for each execution it could become a significant issue. Excessive parsing should be avoided as far as possible by ensuring that code is shared:
- use bind variables
- make the shared pool large enough to hold query definitions in memory long enough to be reused
See:
Document 62143.1 Understanding and Tuning the Shared Pool

Spotting Queries that Execute too Frequently

The following query has a high elapsed time and is a candidate for investigation:

UPDATE ... SET ... WHERE COL = :bind1;

call     count        cpu    elapsed       disk      query    current       rows
------- ------  --------- ---------- ---------- ---------- ---------- ---------
Parse        0       0.00       0.00          0          0          0         0
Execute 488719   66476.95   66557.80          1     488729    1970566    488719
Fetch        0       0.00       0.00          0          0          0         0
------- ------  --------- ---------- ---------- ---------- ---------- ---------
total   488719   66476.95   66557.80          1     488729    1970566    488719

From the above, the update executes 488,719 times and takes ~66,500 seconds in total to do so. The majority of the time is spent on CPU. A single row is updated per execution, and for each row updated ~1 buffer is queried; ~2 million buffers are visited to perform the updates. On average the elapsed time is ~0.1 second per execution.
A sub-second execution time would normally be acceptable for most queries, but if the query is not scalable and is executed numerous times, the time can quickly add up. It would appear that in this case the update may be part of a loop where individual values are passed in and 1 row is updated per value. This structure does not scale to large numbers of values, so it can become inefficient. One potential solution is to 'batch up' the updates so that multiple rows are updated within the same execution. As Oracle releases have progressed, a number of optimizations and enhancements have been made to make such 'batch' operations more efficient. In this way, code modifications that replace frequently executed, relatively inefficient statements with more scalable operations can have a significant impact.

Trace File Elapsed Time Total Differs From Total SQL Time

Sometimes a TKProf report shows an overall elapsed time that does not match the total for all recursive SQL statements plus the total for all non-recursive statements:

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          1
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2      0.00       0.00          0          0          0          1   <-- Total non-recursive SQL is 0 seconds

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call        count       cpu    elapsed       disk      query    current       rows
------- ---------  -------- ---------- ---------- ---------- ---------- ----------
Parse         101      0.23       0.53          0          1         22          0
Execute  11115914    713.48     849.96        183     281305       1021        213
Fetch    11115842   1080.26    2981.83        643  160564747          0   11121030
------- ---------  -------- ---------- ---------- ---------- ---------- ----------
total    22231857   1793.97    3832.33        826  160846053       1043   11121243   <-- Total for all recursive SQL is 3,832 seconds

Trace file: support_ora_1111111_sample_.trc
Trace file compatibility: 11.1.0.7
Sort options: fchela exeela prsela
1 session in tracefile.
47 user SQL statements in trace file.
24 internal SQL statements in trace file.
71 SQL statements in trace file.
42 unique SQL statements in trace file.
52492692 lines in trace file.
10538 elapsed seconds in trace file.   <-- elapsed time in trace is 10,538 seconds

There appear to be around 6,706 seconds missing from the trace file totals. This is expected behaviour if you start tracing a session that has already started. To ensure that the elapsed seconds match the session total, you need to begin tracing at the start of the session.

REFERENCES
NOTE:199081.1 - SQL_TRACE (10046), TKProf and Explain Plan - Overview Reference
NOTE:214106.1 - Using TKProf to Compare Actual and Predicted Row Counts
NOTE:21154.1 - EVENT: 10046 "enable SQL statement tracing (including binds/waits)"
NOTE:29012.1 - QREF: TKPROF Usage - Quick Reference
NOTE:312368.1 - Why Row Source Plans or Row Counts are Missing in TKPROF Output
NOTE:376442.1 - * How To Collect 10046 Trace (SQL_TRACE) Diagnostics for Performance Issues
NOTE:39817.1 - Interpreting Raw SQL_TRACE output
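As an appendix to the usage examples above, the over-parsing check can be sketched as a small, hypothetical helper (not an Oracle tool): given the parse count, execute count and library-cache misses that a TKProf report prints for one statement, it flags statements that are parsed on almost every execution with mostly hard parses. The 0.5 thresholds are illustrative assumptions, not values from the document.

```python
# Hypothetical helper: flag over-parsing from TKProf-style statistics.
# Inputs are numbers taken from one statement's TKProf section.

def over_parsing_candidate(parse_count, execute_count, lc_misses):
    """Return True when a statement looks like an over-parsing candidate:
    it is parsed on (almost) every execution, and most of those parses are
    hard parses (library cache misses), so the cursor is not being shared."""
    if parse_count == 0 or execute_count == 0:
        return False
    parses_per_execute = parse_count / execute_count
    hard_parse_ratio = lc_misses / parse_count
    # Illustrative thresholds: parsed on most executions, mostly hard parses
    return parses_per_execute > 0.5 and hard_parse_ratio > 0.5

# The 'Spotting Over Parsing' example above: 2 parses, 3 executes, 2 misses
print(over_parsing_candidate(2, 3, 2))        # flagged as a candidate
# A well-shared cursor: one hard parse, then many executions
print(over_parsing_candidate(1, 488719, 1))   # not flagged
```

A real report would of course be read statement by statement; the point is simply that parse count approaching execute count, combined with a high "Misses in library cache during parse" figure, is the pattern to look for.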

Oracle E-Business Suite SQL Tuning Tips for Customers (Doc ID 2098522.1)

APPLIES TO:
Oracle E-Business Suite Performance - Version 11.5.10.0 to 12.2 [Release 11.5 to 12.2]
Oracle Concurrent Processing - Version 12.1.3 to 12.1.3 [Release 12.1]
Information in this document applies to any platform.

ABSTRACT

Oracle E-Business Suite (EBS) uses SQL (Structured Query Language) to communicate with the database, and SQL performance is one of the main sources of EBS application performance issues. SQL performance issues can degrade overall database performance, and poorly performing EBS applications waste customers' valuable time, so meeting SQL performance goals and optimizing SQL is extremely important. This document provides guidance on how to diagnose and resolve SQL performance issues. It is intended for Oracle EBS customers only.

HISTORY

Author of the "EBS_SQL_Tuning_Tip_for_Customer" white papers: JIN SOO, EO
Create Date: Jan-2016
Update Date: Jun-2016

DETAILS

Based on working experience with EBS development and support teams, the EBS Performance team has found that almost 90% of performance issues are caused by SQL processing. Even though many performance issues have been addressed through patches, new issues are raised continuously. Learning SQL tuning skills helps EBS customers resolve issues, or discuss them with performance, development or support engineers. It is not ideal for customers to modify code, so this paper includes only tuning techniques that require no changes to source code. This document outlines the following subjects for EBS customers:
- What is the challenging SQL?
- What information is needed?
- How to get the necessary information?
- How to interpret the output?
- How to tune the issue SQL?

Click here to download the guide.

How to troubleshoot a long-running concurrent request in R12.2

Concurrent programs running longer than usual and concurrent managers not picking up jobs are among the most frequent issues raised by development teams. Start by collecting a basic level of information from the development team:

1. Is it an Oracle seeded program or a custom program?
2. How much time did it take earlier?
3. Has there been any recent code change in the concurrent program?
4. Is this program fetching more data compared to the last run?
5. Does this job run at a specific time, or can it run at any time?
6. Does this job fetch data using a DB link?
7. Does the problem happen on both the test and production instances?

While waiting for answers to the questions above, start basic sanity checks in your environment.

1. Verify the status of the concurrent request:

select request_id, phase_code, status_code, oracle_session_id
from apps.fnd_concurrent_requests
where request_id = '1234567';

2. Find which concurrent manager ran the specific concurrent request:

select b.user_concurrent_queue_name
from fnd_concurrent_processes a, fnd_concurrent_queues_vl b, fnd_concurrent_requests c
where a.concurrent_queue_id = b.concurrent_queue_id
and a.concurrent_process_id = c.controlling_manager
and c.request_id = '856272604';

3. Verify the actual and target process counts of the concurrent manager.

4. Verify the concurrent manager status.

5. Find the SID for the concurrent request using the query below:

set lines 200
set pages 200
col username for a10
col module for a25
col action for a25
col program for a18
SELECT a.request_id, d.sid, d.serial#, d.osuser, d.process, c.spid, d.event
FROM apps.fnd_concurrent_requests a, apps.fnd_concurrent_processes b, v$process c, v$session d
WHERE a.controlling_manager = b.concurrent_process_id
AND c.pid = b.oracle_process_id
AND b.session_id = d.audsid
AND a.request_id = &Request_ID
AND a.phase_code = 'R';

6. Check for any session blocking the concurrent request's SID using the query below:

select (select username from v$session where sid = a.sid) blocker, a.sid,
       ' is blocking ' "IS BLOCKING",
       (select username from v$session where sid = b.sid) blockee, b.sid
from v$lock a, v$lock b
where a.block = 1 and b.request > 0
and a.id1 = b.id1 and a.id2 = b.id2;

If there is a blocking session, use the query below to find its concurrent request id:

SELECT s.inst_id, a.request_id, s.sid, s.serial#, c.spid
FROM apps.fnd_concurrent_requests a, gv$process c, gv$session s
WHERE s.sid in ('&sid')
AND s.paddr = c.addr
AND a.oracle_process_id = c.spid
AND a.phase_code = UPPER('R');

7. Find the client identifier, module, and action for the concurrent request:

select sid, serial#, username, client_identifier, status, last_call_et/60, sql_hash_value, module, action, program
from v$session
where sid = '&n';

8. Then check with the module owner about the concurrent request.

9. Find which SQL query is running and inform the module owner:

select sql_text from v$sql where hash_value = '12345678';

10. Find tables used by the concurrent program that have stale statistics:

select table_name, stale_stats, last_analyzed
from dba_tab_statistics
where stale_stats = 'YES';

11. You may have to gather statistics on the tables that have stale statistics.

12. Use OEM to monitor the session ID.

13. If the development team asks for a TKProf, AWR, or ASH report, generate it and analyze it.
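A side note on the v$session query above: it divides LAST_CALL_ET by 60 because V$SESSION reports LAST_CALL_ET in seconds (elapsed time since the session's last call), so the division yields minutes. A trivial sketch of that conversion, using a made-up value:

```shell
last_call_et=5400   # hypothetical V$SESSION.LAST_CALL_ET value, in seconds
echo "last call was $((last_call_et / 60)) minutes ago"   # → last call was 90 minutes ago
```

For an active session running a concurrent request, a steadily growing LAST_CALL_ET is a quick indicator of how long the current call has been executing.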

Oracle database upgrades can seem like a daunting task if you are like most DBAs who only upgrade every few years.

You've come to the right place to learn the steps and potential pitfalls. What if you have a stand-alone (non-Oracle RAC) database and want to switch from a non-container database to a container/pluggable configuration? I know this sounds like a lot more work, adding to an already stressful situation. However, it only requires a few extra steps and results in improved resource utilization. Starting with Oracle 19c, you can have up to three pluggable databases per container, which means you can take advantage of multitenant-type improvements in memory and processor sharing within a single container without having to fork over extra license fees. If you have more than one instance per database server, it becomes a no-brainer.

This blog post provides step-by-step instructions for upgrading from Oracle 12.2 to 19c and converting from a non-container to a container/pluggable configuration. I'll even throw in the steps for upgrading an OEM repository to this configuration at no extra charge.

Upgrading from Oracle 12.2 to 19c

First, download the latest Oracle 19c release and associated quarterly patches. In my case, I decided to try out the rpm method for installing the Oracle software. Let's assume you are running 64-bit Red Hat Linux. Two rpm files are required: oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm and oracle-database-ee-19c-1.0-1.x86_64.rpm. Please note that this installs the software under the /opt mount point, so make sure you have at least 6.5 GB of free space. (I would go with 20 GB if you don't like to worry about running out of disk space any time soon.)

1. To start, you'll want to install the prerequisites as root:

# curl -o oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm
# yum -y localinstall oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm

2.
Download the .rpm from Oracle and install the Oracle software, as root:

# yum -y localinstall oracle-database-ee-19c-1.0-1.x86_64.rpm

3. Go ahead and apply the latest OJVM and Oracle quarterly patch prior to creating any databases. This will save you a couple of steps, since you won't have any databases created yet that require post-patch steps. (Working smarter, not harder.)

Next, you need to create and configure a database. There is a new way of doing this which is very simple and does not require X Windows to be running; it is more of a silent-mode option. Before you do this, here are a couple of steps I would take to customize your database creation.

4. Edit the /etc/init.d/oracledb_ORCLCDB-19c file to specify the name of the container/pluggable it will create automatically for you. Below is an example with the database called MAINCDB and a pluggable called MAINPDB1:

# Setting the required environment variables
export ORACLE_HOME=/opt/oracle/product/19c/dbhome_1
export ORACLE_VERSION=19c
export ORACLE_SID=MAINCDB
export TEMPLATE_NAME=General_Purpose.dbc
export CHARSET=AL32UTF8
export PDB_NAME=MAINPDB1
export LISTENER_NAME=LISTENER
export NUMBER_OF_PDBS=1
export CREATE_AS_CDB=true

5. (Optional, but please read): Another edit I would highly recommend is to change oracledb_ORCLCDB-19c further down by adding the last argument shown below (-J-Doracle.assistants.dbca.validate.ConfigurationParams=false). In one environment I did not have to make this change and the database created with no issue. In another environment, the database creation errored out almost immediately with [DBT-50000] and [DBT-50001] messages.
Below is the fix:

$SU -s /bin/bash $ORACLE_OWNER -c "$DBCA -silent -createDatabase -gdbName $ORACLE_SID -templateName $TEMPLATE_NAME -characterSet $CHARSET -createAsContainerDatabase $CREATE_AS_CDB -numberOfPDBs $NUMBER_OF_PDBS -pdbName $PDB_NAME -createListener $LISTENER_NAME:$LISTENER_PORT -datafileDestination $ORACLE_DATA_LOCATION -sid $ORACLE_SID -autoGeneratePasswords -emConfiguration DBEXPRESS -emExpressPort $EM_EXPRESS_PORT -J-Doracle.assistants.dbca.validate.ConfigurationParams=false"

6. You will now need to create a config file which matches the name of the container. If you want to change the default location of the database datafiles, edit the location within the new .conf file created below. To set up the configuration for the new container database, perform the following commands:

# cd /etc/sysconfig
# cp oracledb_ORCLCDB-19c.conf oracledb_MAINCDB-19c.conf

7. You are ready to create the new container/pluggable. Please ensure the Oracle 12.2 listener is down prior to running this command; otherwise, the database creation will error because port 1521 is in use. Below are all the commands required to put the 12.2 listener (for the test database) into the proper state, along with the command that creates the database (/etc/init.d/oracledb_ORCLCDB-19c configure). A progress percentage shows up periodically; it takes a few minutes to run, so be patient:

$ su - oracle
$ . oraenv test
$ lsnrctl stop
$ cd /opt/oracle/product/19c/dbhome_1/network/admin
$ mv listener.ora listener.ora.bkup
$ su -
# /etc/init.d/oracledb_ORCLCDB-19c configure
# exit
$ lsnrctl stop
$ cp listener.ora.bkup listener.ora
$ lsnrctl start

8. Run a pre-check on the 12.2 database. Make sure your ORACLE_HOME (12.2) and ORACLE_SID point to the database you want to upgrade, then run:

$ . oraenv test
$ $ORACLE_HOME/jdk/bin/java -jar /opt/oracle/product/19c/dbhome_1/rdbms/admin/preupgrade.jar TERMINAL TEXT

You will get an output of anything that may be an issue. In addition, scripts for both pre-upgrade and post-upgrade will be produced. Run the "pre" script now and record the location of the "post" script so you can run it afterwards. If your database happens to be on a VM, now is a great time to take a snapshot/backup, just in case. Once the "pre" script has run, capture your local listener value for later use and shut down the database from Oracle 12.2 for the last time ever (hopefully):

$ . oraenv test
$ sqlplus

SQL*Plus: Release 12.2.0.1.0 Production on Thu May 28 10:16:30 2020
Copyright (c) 1982, 2016, Oracle. All rights reserved.

Enter user-name: /as sysdba
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> show parameter local_listener
SQL> shutdown immediate;
SQL> exit;

9. (Optional, but please read): I had issues performing the upgrade by following the official Oracle instructions and running the command from $ORACLE_HOME/rdbms/admin. To get around this, I copied files (temporarily) from that directory to $ORACLE_HOME. Below are the steps for this simple workaround; you can remove the copied files after you are done with all of the upgrades. No .sql/.plb file should exist under $ORACLE_HOME by default, so you won't break anything, as long as you are under the 19c $ORACLE_HOME when you remove them:

$ cd /opt/oracle/product/19c/dbhome_1/rdbms/admin
$ cp *.sql ../..
$ cp *.plb ../..

10. Copy the orapwd and parameter files from the 12.2 $ORACLE_HOME/dbs and $ORACLE_HOME/network/admin to the 19c equivalents. To do so, perform the following:

$ cp $ORACLE_HOME/dbs/spfile* /opt/oracle/product/19c/dbhome_1/dbs
$ cp $ORACLE_HOME/dbs/orapw* /opt/oracle/product/19c/dbhome_1/dbs

11.
You are now ready for the upgrade. To do this manually, shut down your database (test) from the 12.2 location (see step 8) and start up with 19c as seen below:

$ . oraenv MAINCDB
$ export ORACLE_SID=test
$ cd /opt/oracle/product/19c/dbhome_1/rdbms/admin
$ sqlplus

SQL*Plus: Release 19.0.0.0.0 - Production on Fri May 29 10:16:30 2020
Version 19.7.0.0.0
Copyright (c) 1982, 2020, Oracle. All rights reserved.

Enter user-name: /as sysdba
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.7.0.0.0

SQL> startup upgrade;
SQL> exit;

Run the upgrade command from the Unix prompt as oracle. This will do all the heavy lifting and take a while to run, so feel free to just check on it periodically as you work on other activities:

$ dbupgrade -d /opt/oracle/product/19c/dbhome_1

12. Once the upgrade completes, you'll want to edit /etc/oratab to reflect the new ORACLE_HOME so that the . oraenv command works as expected. See below:

$ vi /etc/oratab
test:/opt/oracle/product/19c/dbhome_1:N
MAINCDB:/opt/oracle/product/19c/dbhome_1:N

13. You'll now want to start up the database and run the post-upgrade commands. Please note that the sql below was generated during the pre-check. Below are the commands to run:

$ . oraenv test
$ sqlplus

SQL*Plus: Release 19.0.0.0.0 - Production on Fri May 29 10:16:30 2020
Version 19.7.0.0.0
Copyright (c) 1982, 2020, Oracle. All rights reserved.

Enter user-name: /as sysdba
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.7.0.0.0

SQL> startup;
SQL> exit

$ $ORACLE_HOME/perl/bin/perl catcon.pl -n 1 -e -b utlrp -d '''.''' utlrp.sql

$ sqlplus

SQL*Plus: Release 19.0.0.0.0 - Production on Wed May 27 07:35:31 2020
Version 19.7.0.0.0
Copyright (c) 1982, 2020, Oracle. All rights reserved.

Enter user-name: /as sysdba
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.7.0.0.0

SQL> @/u01/oracle/cfgtoollogs/test/preupgrade/postupgrade_fixups.sql
SQL> @$ORACLE_HOME/rdbms/admin/utlusts.sql

14. You most likely need to adjust your time zone for 19c. Below are the four commands to complete this task:

SQL> @$ORACLE_HOME/rdbms/admin/utltz_countstats.sql
SQL> @$ORACLE_HOME/rdbms/admin/utltz_countstar.sql
SQL> @$ORACLE_HOME/rdbms/admin/utltz_upg_check.sql
SQL> @?/rdbms/admin/utltz_upg_apply.sql

Congratulations, you have now officially upgraded to 19c!

15. (Optional, but please read): If you have Transparent Data Encryption (TDE) in place on the stand-alone database, you will need to perform the following steps. Update $ORACLE_HOME/network/admin/sqlnet.ora:

ENCRYPTION_WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/oracle/admin/$ORACLE_SID/wallet)
    )
  )

Create the wallet directory and set the environment variables:

$ export ORACLE_SID=MAINCDB
$ export ORACLE_UNQNAME=MAINCDB
$ cd /u01/oracle/admin
$ mkdir MAINCDB
$ cd MAINCDB
$ mkdir wallet

In sqlplus as sysdba:

SQL> administer key management create keystore '/u01/oracle/admin/MAINCDB/wallet' identified by password123;
SQL> administer key management set keystore open identified by password123 container=all;
SQL> administer key management set key identified by password123 with backup container=all;
SQL> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '/u01/oracle/admin/MAINCDB/wallet' IDENTIFIED by password123;
SQL> exit

$ export ORACLE_SID=test
$ export ORACLE_UNQNAME=test

This example assumes the wallet already exists for the test database and uses the same password (password123) as was previously set for MAINCDB.
In sqlplus:

SQL> ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS WITH SECRET password123 TO '/u01/oracle/admin/test/wallet/test_tdekey.exp' IDENTIFIED BY password123;

Converting the stand-alone 19c database into a pluggable database

16. Convert the stand-alone 19c database into a pluggable database with the steps below:

$ export ORACLE_SID=test
$ export ORACLE_UNQNAME=test
$ sqlplus

SQL*Plus: Release 19.0.0.0.0 - Production on Fri May 29 10:16:30 2020
Version 19.7.0.0.0
Copyright (c) 1982, 2020, Oracle. All rights reserved.

Enter user-name: /as sysdba
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.7.0.0.0

SQL> startup mount;
SQL> alter database open read only;
SQL> BEGIN
       DBMS_PDB.DESCRIBE(pdb_descr_file => '/u01/oracle/test.xml');
     END;
     /
SQL> shutdown immediate;
SQL> exit

$ . oraenv MAINCDB
$ sqlplus

SQL*Plus: Release 19.0.0.0.0 - Production on Fri May 29 10:16:30 2020
Version 19.7.0.0.0
Copyright (c) 1982, 2020, Oracle. All rights reserved.

Enter user-name: /as sysdba
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.7.0.0.0

SQL> set serveroutput on
SQL> DECLARE
       compatible CONSTANT VARCHAR2(3) :=
         CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                pdb_descr_file => '/u01/oracle/test.xml',
                pdb_name       => 'test')
           WHEN TRUE THEN 'YES'
           ELSE 'NO'
         END;
     BEGIN
       DBMS_OUTPUT.PUT_LINE('Is the future PDB compatible? ==> ' || compatible);
     END;
     /

It should show that the pluggable is ready to be converted. Now it is time to convert test into a PDB:

SQL> create pluggable database test using '/u01/oracle/test.xml' nocopy tempfile reuse;
SQL> alter session set container=test;
SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

The script runs for a while, so just be patient.

17.
To complete the TDE setup, import the encryption keys you previously exported:

SQL> alter session set container=test;
SQL> administer key management import encryption keys with secret password123 from '/u01/oracle/admin/test/wallet/test_tdekey.exp' force keystore identified by password123 with backup;

For any services previously associated with the stand-alone database, start these up to avoid the dreaded TNS-12514 error:

SQL> exec dbms_service.start_service('test.world');
SQL> shutdown
SQL> startup
SQL> exit

$ sqlplus

SQL*Plus: Release 19.0.0.0.0 - Production on Fri May 29 10:16:30 2020
Version 19.7.0.0.0
Copyright (c) 1982, 2020, Oracle. All rights reserved.

Enter user-name: /as sysdba
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.7.0.0.0

SQL> ALTER PLUGGABLE DATABASE test SAVE STATE;

The command above makes the pluggable come up automatically whenever the container starts. Be sure your local_listener parameter is set to the value test previously used with Oracle 12.2 (see step 8) to avoid connection issues. Below is just an example:

SQL> alter system set local_listener='(ADDRESS_LIST=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC))(ADDRESS=(PROTOCOL=TCP)(HOST=)(PORT=1521))(ADDRESS=(PROTOCOL=TCPS)(HOST=)(PORT=1522)))';

Upgrading the OEM repository and converting it into a pluggable

18. (Optional, but please read): This is the moment you've been waiting for.
If you are upgrading the OEM repository and converting it into a pluggable, follow the steps below from the OMS server:

$ cd $ORACLE_HOME/bin
$ ./emctl stop oms -all
$ ./emctl start oms -admin_only
$ ./emctl config oms -store_repos_details -repos_conndesc "(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=)))" -repos_user SYSMAN -repos_pwd
$ ./emctl stop oms -all
$ ./emctl start oms

If you did everything correctly, the OMS should come up with no issue and you are done. This concludes the steps for upgrading to Oracle 19c, converting to a container/pluggable configuration, and updating the OEM repository to reflect the new configuration if required. As you can probably tell, there are a few gotchas along the way, which have been highlighted. Regardless, it's a repeatable process which I would encourage everyone to look at so you can be on the latest supported software version.

AEM can help your organization implement technologies such as upgrading to Oracle 19c. Please reach out to learn more about our service offerings.

https://www.aemcorp.com/managedservices/blog/upgrading-from-oracle-12.2-to-19c-with-a-container/pluggable-configuration
http://www.br8dba.com/upgrade-oracle-database-manually-from-12-2-0-1-to-19c/#17
https://oracle-blog.com/upgrade-oracle-database-manually-from-12c-to-19c/

Oracle E-Business Suite SQL Tuning Tips for Customers White paper

Transfer Oracle SQL profiles between two databases

 

Date added: 2015-03-29 19:58:47

Summary
Here is a simple guide to transferring SQL profiles from one database to another. For clarity, assume that you want to transfer one SQL profile from the 'source' database to the 'destination' database.

1. Connect to the source database as sysdba and grant the required privilege to the user that will own the staging table.
For this example the user is osm.
$ sqlplus "/ as sysdba" 
GRANT ADMINISTER SQL MANAGEMENT OBJECT TO osm;
2. Connect as user osm to the source db and create the staging table.
BEGIN
DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF (table_name => 'SQL_PROFILES_STGTAB');
END;
/
3. Check the existing SQL profiles at the source db and pack the desired profile into the staging table:
SELECT * FROM dba_sql_profiles ORDER BY created DESC;

BEGIN
DBMS_SQLTUNE.PACK_STGTAB_SQLPROF (profile_name => 'SYS_SQLPROF_014c5ae7b1c80001', staging_table_name => 'SQL_PROFILES_STGTAB');
END;
/

SELECT * FROM osm.SQL_PROFILES_STGTAB;
4. Copy the staging table SQL_PROFILES_STGTAB from the source db to the destination db.
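Step 4 leaves the copy mechanism open; any method that moves a table between databases works. One common option is Data Pump. The sketch below only builds the export/import command lines to run on the source and destination hosts respectively; the DATA_PUMP_DIR directory object and the dump file name are assumptions, and the osm account must have the required privileges on both sides.

```shell
# Construct the Data Pump commands for the staging table (assumptions:
# DATA_PUMP_DIR exists on both databases; osm can run expdp/impdp).
table=SQL_PROFILES_STGTAB
dump=sqlprof_stgtab.dmp
echo "expdp osm tables=$table directory=DATA_PUMP_DIR dumpfile=$dump"   # run on source
echo "impdp osm tables=$table directory=DATA_PUMP_DIR dumpfile=$dump"   # run on destination
```

A database link with CREATE TABLE ... AS SELECT, or any other table-copy method, works just as well; remember to transfer the dump file to the destination host's DATA_PUMP_DIR between the two commands.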
5. At the destination db, grant the required privilege again (as sysdba):
GRANT ADMINISTER SQL MANAGEMENT OBJECT TO osm;
6. Unpack the SQL profiles from the staging table into the destination db:
BEGIN
DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(REPLACE => TRUE, staging_table_name => 'SQL_PROFILES_STGTAB');
END;
/

SELECT * FROM dba_sql_profiles ORDER BY created DESC;

http://www.runningoracle.com/product_info.php?products_id=446

Database Options/Management Packs Usage Reporting for Oracle Databases 11.2 and later (Doc ID 1317265.1)

  Database Options/Management Packs Usage Report You can determine whether an option is currently in use in a database by running options_pa...