Thursday, July 8, 2021
Express Diagnosis of Oracle E-Business Suite Release 12.2.n Upgrade Performance Issues (Doc ID 1583752.1)
Express Diagnosis of Oracle E-Business Suite Release 12.2.n Upgrade Performance Issues
The latest version of this document can be found in My Oracle Support Document "Express Diagnosis of Oracle E-Business Suite Release 12.2.n Upgrade Performance Issues (Document 1583752.1)"
In This Document
Executive Overview
1. Introduction
2. R12.2, Online Patching and Diagnostics
3. Before Running the Oracle E-Business Suite Upgrade
3.1 Set Statistics_Level = ALL (or _rowsource_execution_statistics = TRUE)
3.2 AWR Snapshot Interval and Retention Period
4. During the Oracle E-Business Suite Upgrade
4.1 Obtain Top SQL in Cursor Cache or AWR
4.2 Obtain Display Cursor Report for Long-Running SQL
4.3 Obtain SQL Monitor Report for SQL Using Parallel Query/DML
4.4 Identify when SQL ran
4.5 Match Long-Running SQL to Jobs
4.6 Report on CBO Statistics for All Oracle E-Business Suite Tables
5. After the Oracle E-Business Suite Upgrade
5.1 Obtain AD Job Timing Report
5.2 Identify Long-Running Upgrade Jobs
5.3 Obtain File versions for Long-Running Jobs
5.4 Obtain AWR Reports
5.5 SQLHC or SQLT
5.6 SQLHC
5.7 SQLT with XTRACT method
6. Online Patching Diagnostics
6.1 Online Patching Enablement - Specific Diagnostics
6.2 Online Patching (ADOP) Logs and Diagnostics
6.3 Non ADOP Logs
6.4 Online Patching Log Analyzer Utility
6.5 adzdshowlog.out
6.6 Check the current status of the adop cycle
6.7 SQL to determine status of ADOP phases
7. Fixed Object and Dictionary Statistics
Executive Overview
This document describes the diagnostic strategies and methods that can be used during an Oracle E-Business Suite Release 12.2 upgrade to minimize downtime and expedite resolution of any issues.
The content applies to upgrades from 11.5.10, R12.0.n or R12.1.n to R12.2.n.
The same principles can also be applied to other Oracle E-Business Suite upgrades.
For more detailed information see the My Oracle Support document “Best Practices for Minimizing Oracle E-Business Suite Release 12.2.n Upgrade Downtime (Document 1581549.1)”
Clarification: In all cases it is only necessary to produce diagnostics during test runs (and not during the final production run).
1. Introduction
When analyzing Release 12.2.n Upgrade performance issues, the goal is to:
Prevent wasted test iterations. Aim to provide solutions that solve the issue first time.
Maximize the number of performance issues investigated on each iteration.
Maximize the number of performance issues resolved.
Upgrade jobs cannot be tested in isolation; a fix can only be tested on the next iteration. If a fix does not work, a test iteration is potentially wasted.
To do this the following are needed:
Actual statistics: So it is possible to see exactly which execution plan steps are inefficient, rather than those that might be inefficient. The likely impact of a performance fix can also be estimated. There will then be a higher probability of providing a fix that can solve the performance issue first time. Also, it will be possible to identify marginal fixes (i.e. fixes that reduce elapsed times by 10-50%, for example by having more specific index access). These fixes often reduce contention between workers.
Diagnostics that are quick and easy to run, so that diagnostics can be obtained faster and on more jobs/SQL.
Diagnostics that have very little impact on the performance of the Release 12.2 upgrade: if they can be run during the upgrade then the results are obtained sooner and the issue resolved more quickly.
2. R12.2, Online Patching and Diagnostics
The new diagnostics available for ADOP are logs. These give timestamps for particular steps in the upgrade, or list errors that have occurred. However, they do not identify the SQLs or underlying events that caused the performance issue, so it is essential to use AWR reports, AD Job Timing Reports, Display Cursor Reports and SQLT Extracts etc.
Online Patching (ADOP) and the underlying Edition-Based Redefinition are only used after "Online Patching Enablement" for the "R12.AD.C.Delta.n", "R12.TXK.C.Delta.n", "12.2.n RUP" and subsequent patches.
Note that although the Online Patching method is used for the latter stages in the upgrade to R12.2.n, these are run during downtime (the application is not running).
Since R12.AD.C.Delta.5 the 12.2.n RUP patches can be applied using "downtime" mode.
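For illustration, downtime mode is requested on the adop command line when applying a patch (the patch number below is a placeholder):
adop phase=apply apply_mode=downtime patches=<patch_number>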
So the performance advice and challenges for an upgrade to R12.2.0 are largely the same as for an upgrade to R12.1.n. The same method is used to apply many of the patches (AutoPatch).
However, the Online Patching Enablement patch, Online Patching and the Release Update Packs for 12.2.n are quite different to the main R12.2.0 upgrade, so they may introduce new performance challenges, particularly with internal SQL and ADOP SQL (i.e. SQL run from or on AD_ZD objects).
It is important to stress that although Online Patching (ADOP) provides new logs to track performance, the key diagnostics are still the same ones used for previous Oracle E-Business upgrades (e.g. Display Cursor reports, SQL Monitor reports, AWR reports, AD Job Timing Reports, SQLHC reports, SQLT Extracts etc.)
3. Before Running the Oracle E-Business Suite Upgrade
3.1 Set Statistics_Level = ALL (or _rowsource_execution_statistics = TRUE)
This can be set using the command:
SQL>alter system set statistics_level='ALL';
This is the simplest way to see actual row source statistics (including elapsed time, physical reads, buffer gets etc) for each execution plan line (on SQLT and Display Cursor report). The alternative of SQL Trace and TKPROF requires editing standard code.
Note that the internal views v$sql_plan_statistics and v$sql_plan_statistics_all will not contain any actual row source statistics for execution plan lines if statistics_level = TYPICAL, even if timed_statistics = TRUE.
When the STATISTICS_LEVEL parameter is set to ALL, additional statistics are added to the set of statistics collected with the TYPICAL setting. The additional statistics are timed OS statistics and plan execution statistics, which include row source statistics.
Using this strategy will typically speed up the resolution of issues significantly and may also allow the correct solution to be identified first time.
Alternatively, the same actual execution statistics can be collected by setting the initialization parameter _rowsource_execution_statistics=TRUE (with statistics_level = 'TYPICAL'). This gives a lower overhead than statistics_level=ALL.
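For example, assuming an SPFILE is in use, the hidden parameter can be set dynamically at system level (the underscore parameter name must be enclosed in double quotes):
SQL>alter system set "_rowsource_execution_statistics"=TRUE scope=both;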
Some technical architects and DBAs at customers (or implementing partners) can be resistant to setting statistics_level = ALL (or _rowsource_execution_statistics = TRUE), believing that this can slow down performance significantly.
Two points are relevant here:
Although setting statistics_level = ALL / _rowsource_execution_statistics = TRUE will have some performance impact, it is likely to be small and not significant. The Release 12.2 upgrade is made up of batch processes, and so the statistics workload is a much lower proportion of the total.
Even if the performance impact is significant, the goal is to reduce the elapsed times for the latter dry runs and go live (when it will be feasible to revert statistics_level / _rowsource_execution_statistics to their previous values). So suffering an increase in elapsed time during an early stage of testing is not an issue.
So there may be a small impact on elapsed time and the work that needs to be done initially, but it will help to subsequently reduce the elapsed time and amount of re-work that needs to be done.
Note that setting statistics_level to ALL while AWR is enabled could significantly increase the number of rows inserted to the WRH$_LATCH_CHILDREN table. So monitor the SYSAUX tablespace to ensure that it does not run out of space.
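For example, SYSAUX usage can be checked with a query along these lines (a sketch using the DBA_TABLESPACE_USAGE_METRICS view, which reports sizes in database blocks):
SELECT tablespace_name,
       used_space,        -- in database blocks
       tablespace_size,   -- in database blocks
       ROUND(used_percent,1) used_pct
FROM dba_tablespace_usage_metrics
WHERE tablespace_name = 'SYSAUX';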
3.2 AWR Snapshot Interval and Retention Period
Automatic Workload Repository (AWR) should be enabled with a snapshot interval of 30 minutes (the default is 60 minutes). For short upgrades, a shorter snapshot interval may be more suitable.
The AWR retention period should be long enough to cover the duration of the upgrade run and a significant period afterwards (to gather diagnostics and analyze). The suggestion is N+7 days, where N is the estimated upgrade time, but a longer period will provide more time to gather subsequent diagnostics and statistics.
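For example, the snapshot interval and retention period (both specified in minutes) can be changed as follows; the values shown are a sketch assuming a 30 minute interval and a 14 day retention:
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 30, retention => 14*24*60);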
4. During the Oracle E-Business Suite Upgrade
4.1 Obtain Top SQL in Cursor Cache or AWR
This could be internal or application SQL. These scripts should be run regularly during the upgrade, particularly when there are long-running jobs.
If the SQL is still in memory (cursor cache), the following can be used to identify long-running SQLs that may not yet have been written to the AWR (i.e. since the last snapshot):
SELECT * FROM
(SELECT
ss.sql_id,
ROUND(SUM(ss.elapsed_time/1000000),0) elapsed_time_secs,
ROUND(SUM(ss.cpu_time/1000000),0) cpu_time_secs,
SUM(ss.disk_reads) disk_reads,
SUM(ss.direct_writes) direct_writes,
SUM(ss.buffer_gets) buffer_gets,
SUM(ss.px_servers_executions) px_server_execs,
SUM(ss.rows_processed) rows_processed,
SUM(ss.executions) executions,
SUM(ss.application_wait_time) apwait_secs,
SUM(ss.sharable_mem) sharable_mem,
SUM(ss.total_sharable_mem) total_sharable_mem
FROM v$sqlstats ss
GROUP BY ss.sql_id
ORDER BY 2 DESC)
WHERE ROWNUM <= 100;
The following SQL script will report the longest running SQLs between two AWR snapshots.
SELECT * FROM
(SELECT
dhs.sql_id,
ROUND(SUM(dhs.elapsed_time_delta/1000000),0) elapsed_time_secs,
ROUND(SUM(dhs.cpu_time_delta/1000000),0) cpu_time_secs,
SUM(dhs.disk_reads_delta) disk_reads,
SUM(dhs.buffer_gets_delta) buffer_gets,
SUM(dhs.px_servers_execs_delta) px_server_execs,
SUM(dhs.rows_processed_delta) rows_processed,
SUM(dhs.executions_delta) executions,
ROUND(SUM(dhs.iowait_delta/1000000),0) iowait_secs,
ROUND(SUM(dhs.clwait_delta/1000000),0) clwait_secs,
ROUND(SUM(dhs.ccwait_delta/1000000),0) ccwait_secs,
ROUND(SUM(dhs.apwait_delta/1000000),0) apwait_secs
FROM dba_hist_sqlstat dhs ,
v$database d
WHERE dhs.dbid = d.dbid
AND snap_id > <start_snap_id>
AND snap_id <= <end_snap_id>
GROUP BY dhs.sql_id
ORDER BY 2 DESC)
WHERE ROWNUM <= 100;
Where <start_snap_id> and <end_snap_id> are the start and end snapshot IDs.
The output of this statement will look similar to the following:
SQL_ID ELAPSED_TIME_SECS CPU_TIME_SECS DISK_READS BUFFER_GETS ….
------------- ----------------- --------------- ---------- ----------- ….
xxxxxxxxxxxxx 367440 42999 34838244 3795838289 ….
xxxxxxxxxxxxx 264369 170788 441127 562033013 ….
xxxxxxxxxxxxx 70370 6448 3599284 469639133 ….
xxxxxxxxxxxxx 68298 38896 7125573 1327384554 ….
xxxxxxxxxxxxx 63600 27402 20043712 587615960 ….
The elapsed time is the maximum elapsed time for all workers of a job.
Enterprise Manager can also be used to identify expensive SQL as it occurs.
4.2 Obtain Display Cursor Report for Long-Running SQL
For long-running SQL reported by the above script, run a display cursor report (with ALL +ALLSTATS option).
This displays the actual execution plan of any cursor loaded in the cursor cache. At the basic level it shows the runtime execution plan. However, the format ALL also includes extra information such as pruning, parallel execution, predicate, projection, alias and remote SQL information.
This should be run (as soon as possible) while the jobs are running, or very shortly afterwards. If the Display Cursor report is delayed, the cursor may have been flushed from memory or invalidated, in which case no actual statistics will be available and the report will contain no data.
The +ALLSTATS option (which includes IOSTATS and MEMSTATS) will include actual statistics for each execution plan step. These include:
Elapsed time
Physical reads
Buffer gets
Memory used (in PGA) for memory intensive operations (such as hash-joins, sorts, bitmap operators etc).
However, this additional information is only provided if statistics_level=ALL / _rowsource_execution_statistics = TRUE
Note that SQLT with XTRACT will also report actual row source statistics in the same circumstances. However, Display Cursor provides a simpler view of the information. It can also be run during the upgrade, while the long running SQL is in progress, without much of an overhead.
The report can be produced by running the following SQL script:
SET pages 0
SET lines 300
SET LONG 10000
SET LONGCHUNKSIZE 10000
SPOOL <sql_id>.txt
SELECT * FROM TABLE(dbms_xplan.display_cursor('<sql_id>', NULL, 'ALL +ALLSTATS'));
SPOOL OFF;
For more information see the "Display Cursor" section in My Oracle Support document "Oracle E-Business Suite Performance Guide (Document 1672174.1)"
If the SQL is no longer in memory, but is in the AWR, use the Display AWR report (DBMS_XPLAN.DISPLAY_AWR) instead.
SET pages 0
SET lines 300
SET LONG 10000
SET LONGCHUNKSIZE 10000
SPOOL <sql_id>.txt
SELECT * FROM TABLE(dbms_xplan.display_awr('<sql_id>', NULL, NULL, 'ALL'));
SPOOL OFF;
However, this does not report on actuals: it does not have a +ALLSTATS option, and there are no actual statistics for execution plan steps stored in AWR.
Note that SQLT with XTRACT method will not report on actual statistics in this case either.
Note that the Display Cursor and Display AWR reports only show the sql_text (first 1000 characters) and not the full SQL text (sql_fulltext).
So, if necessary, run the following SQL script to obtain the full SQL text.
SET pages 0
SET lines 300
SET LONG 10000
SET LONGCHUNKSIZE 10000
SPOOL <sql_id>.txt
SELECT sql_id, sql_text, sql_fulltext FROM v$sql
WHERE sql_id = '<sql_id>';
SPOOL OFF;
4.2.1 Automate execution of Display Cursor reports for top SQLs
The following script can be used (on Linux) to automatically produce Display Cursor reports for the top SQL (by elapsed time) during each upgrade patch. It also produces Display AWR reports, which only provide the execution plan and no actual statistics, but which may be useful as a fallback if statistics_level or _rowsource_execution_statistics have not been set, or if a particular SQL is no longer in the cursor cache.
Although this automated process will produce many reports that are never looked at, it means that if a long-running SQL (as part of a long-running job) is identified, the Display Cursor report will already be available for analysis. It will not be necessary to wait until the next test run to obtain one.
There are 4 parameters that should be passed to the script:
Apps password
Number of iterations. This should be large enough so that the shell script keeps producing Display Cursor reports throughout the upgrade patch. It is recommended that this is much larger than needed; the shell script can always be killed when the patch has completed.
Iteration gap in minutes. This should be short enough so that all long-running SQLs of interest are reported, but not so short as to cause much overhead. 10 minutes is recommended.
Number of Top SQLs to monitor (by elapsed time). This should be large enough so that all long-running SQLs of interest are reported, but not so large as to cause much overhead. 20 is recommended.
So, if the apps password is represented by <apps_pwd>, the number of iterations is 72, the gap is 10 minutes and the number of top SQLs is 20, the script will be called as follows:
xxauto_dc.sh <apps_pwd> 72 10 20
The shell script loops for the specified number of iterations.
In each iteration of the loop it:
Calls the SQL script xxautodc.sql, passing the number of top SQLs parameter. This builds a temporary SQL script dispcurbatch.sql to spool and run the Display Cursor reports for the top SQL and then executes that script.
Removes the temporary SQL script dispcurbatch.sql (after it has been run).
Sleeps for the specified number of minutes (before starting the next iteration of the loop).
There are 3 sets of reports:
Disp_Curs_<sql_id>_<timestamp>.txt. This is the Display Cursor report for a top SQL in the AWR.
Disp_AWR_<sql_id>_<timestamp>.txt. This is the Display AWR report for a top SQL in the AWR.
CC_Disp_Curs_<sql_id>_<timestamp>.txt. This is the Display Cursor report for a top SQL in the Cursor Cache.
Where
<sql_id> is the SQL ID for the particular SQL statement.
<timestamp> indicates the approximate time that the report was produced (i.e. from which iteration) in format YYMMDDHH24MI.
Ensure that there is plenty of space on the file system (volume) available to accommodate the output files.
# Script xxauto_dc.sh
# Get arguments
# 1. apps password
# 2. number of iterations
# 3. iteration gap - minutes
# 4. No of Top SQLs
export appspass="$1"
export numit="$2"
export gap="$3"
export topsql="$4"
iteration=1
until [ ${iteration} -gt ${numit} ]
do
echo "Iteration ${iteration}"
sqlplus apps/${appspass} @xxautodc.sql $topsql
# remove batch file (if it exists)
rm -f dispcurbatch.sql
iteration=`expr $iteration + 1`
# Now sleep for requested number of minutes
if [ ${iteration} -gt ${numit} ]
then
echo "Do not sleep"
else
sleep ${gap}m
fi
done
The SQL script xxautodc.sql:
Reads the "Number of Top SQLs" parameter.
Runs SQL on the AWR and cursor cache to spool output to temporary SQL script file dispcurbatch.sql. The SQL script dispcurbatch.sql will contain commands to spool and run Display Cursor reports for the top SQLs (in AWR and cursor cache).
Executes the SQL script dispcurbatch.sql.
-- Script xxautodc.sql
-- example script for building script to run Display Cursor for top sql ids
-- It is absolutely essential that either statistics_level = ALL or _rowsource_execution_statistics = TRUE
-- otherwise the Display Cursor report will not contain the actual rowsource statistics (for each plan line)
-- that are essential in quickly identifying if and why a SQL has an inefficient execution plan.
WHENEVER SQLERROR CONTINUE
WHENEVER OSERROR CONTINUE
VARIABLE l_topsql NUMBER
-- pick up l_topsql and set bind var
DEFINE topsql = '&&1'
exec :l_topsql := '&topsql'
set echo off heading off feedback off verify off
set pages 0 termout off
set linesize 100
spool dispcurbatch.sql
SELECT 'WHENEVER SQLERROR CONTINUE' FROM DUAL;
SELECT 'WHENEVER OSERROR CONTINUE' FROM DUAL;
SELECT 'SET pages 0' FROM DUAL;
SELECT 'SET lines 300' FROM DUAL;
SELECT 'SET LONG 10000' FROM DUAL;
SELECT 'SET LONGCHUNKSIZE 10000' FROM DUAL;
SELECT
' ',
'SPOOL '
||'Disp_Curs_'
||sql_id
||'_'
||TO_CHAR(SYSDATE,'YYMMDDHH24MI')
||'.txt ',
'SELECT * FROM TABLE(dbms_xplan.display_cursor('''
||sql_id
||''', NULL, ''ALL +ALLSTATS'')); ',
'SPOOL OFF',
' ',
'SPOOL '
||'Disp_AWR_'
||sql_id
||'_'
||TO_CHAR(SYSDATE,'YYMMDDHH24MI') ||'.txt ',
'SELECT * FROM TABLE(dbms_xplan.display_awr('''
||sql_id
||''', format => ''ALL'')); ',
'SPOOL OFF'
FROM
(SELECT /*+ leading(d,i,dhss,dhst) */
dhss.sql_id,
ROUND(SUM(dhss.elapsed_time_delta/1000000),0) elapsed_time_secs -- needs sum because of different plan (hash)s for same SQL
FROM v$database d,
v$instance i,
dba_hist_sqlstat dhss,
dba_hist_sqltext dhst
WHERE dhss.dbid = d.dbid
AND dhss.instance_number = i.instance_number
AND dhst.dbid = dhss.dbid
AND dhst.sql_id = dhss.sql_id
AND dhst.command_type != 47 -- exclude anonymous blocks
AND dhss.snap_id = (SELECT /*+ push_subq */ MAX(dhsn.snap_id)
FROM dba_hist_snapshot dhsn,
v$database d,
v$instance i
WHERE dhsn.dbid = d.dbid
AND dhsn.instance_number = i.instance_number) -- Just report on latest completed snapshot
GROUP BY dhss.sql_id
ORDER BY 2 DESC)
WHERE rownum <= :l_topsql -- limit to the top SQLs (ordered by elapsed time).
;
-- Also get top SQLs from cursor cache - to fill any gaps
SELECT
' ',
'SPOOL '
||'CC_Disp_Curs_'
||sql_id
||'_'
||TO_CHAR(SYSDATE,'YYMMDDHH24MI')
||'.txt ',
'SELECT * FROM TABLE(dbms_xplan.display_cursor('''
||sql_id
||''', NULL, ''ALL +ALLSTATS'')); ',
'SPOOL OFF'
FROM
(SELECT
ss.sql_id,
ROUND(SUM(ss.delta_elapsed_time/1000000),0) elapsed_time_secs
FROM v$sqlstats ss,
v$sql s
WHERE s.sql_id = ss.sql_id
AND s.child_address = ss.last_active_child_address
AND s.command_type != 47 -- exclude anonymous blocks
GROUP BY ss.sql_id
ORDER BY 2 DESC)
WHERE rownum <= :l_topsql -- limit to the top SQLs (ordered by elapsed time).
;
spool off
set termout on echo on verify on heading on feedback on
set termout off
-- Now run the script that has been built above
@@dispcurbatch
set termout on
exit
4.3 Obtain SQL Monitor Report for SQL Using Parallel Query/DML
Obtain SQL Monitor Reports for SQL that uses Parallel Query or DML
The main advantage of this is that it gives a good view of how parallel SQL/DML performs across stages of the plan and parallel child (secondary) processes.
It can also give a good idea of the actual executions and row counts for each execution plan line even if "statistics_level" initialization parameter is not set to ALL ( or "_rowsource_execution_statistics" is not set to TRUE) at the time the SQL is executed.
It can be run during the upgrade, while the long running SQL is in progress (or shortly afterwards), without much of an overhead.
It can be produced by running the following SQL script:
set trimspool on
set trim on
set pages 0
set long 10000000
set longchunksize 10000000
set linesize 200
set termout off
spool sql_monitor_for_<sql_id>.htm
variable my_rept CLOB;
BEGIN
:my_rept := dbms_sqltune.report_sql_monitor(sql_id => '<sql_id>', report_level => 'ALL', type => 'HTML');
END;
/
print :my_rept
spool off;
set termout on
For more information see the "SQL Monitor Report" section in My Oracle Support document "Oracle E-Business Suite Performance Guide (Document 1672174.1)"
4.4 Identify when SQL ran
The following SQL will show when a particular piece of SQL ran (i.e. between which snapshots). This is useful in matching SQLs to jobs.
SELECT
dhs.sql_id,
dsn.snap_id,
dsn.begin_interval_time,
dsn.end_interval_time,
ROUND(SUM(dhs.elapsed_time_delta/1000000),0) elapsed_time_secs
FROM dba_hist_sqlstat dhs ,
v$database d ,
dba_hist_snapshot dsn
WHERE dhs.dbid = d.dbid
AND dsn.snap_id = dhs.snap_id
AND dsn.dbid = dhs.dbid
AND dsn.instance_number = dhs.instance_number
AND dhs.sql_id = '<sql_id>'
AND dsn.snap_id > <start_snap_id>
AND dsn.snap_id <= <end_snap_id>
GROUP BY dhs.sql_id, dsn.snap_id, dsn.begin_interval_time, dsn.end_interval_time
ORDER BY dsn.snap_id;
Where <start_snap_id> and <end_snap_id> are the start and end snapshot IDs.
The output of this statement will look similar to the following:
SQL_ID SNAP_ID BEGIN_INTERVAL_TIME END_INTERVAL_TIME ELAPSED_TIME_SECS
------------- ------- ----------------------- ----------------------- -----------------
xxxxxxxxxxxxx xxxx 04-JAN-13 23.00.25.5560 05-JAN-13 00.00.21.1620 23123
xxxxxxxxxxxxx xxxx 05-JAN-13 00.00.21.1620 05-JAN-13 01.00.38.2680 37145
4.5 Match Long-Running SQL to Jobs
The following SQL will give jobs running at any point between two time intervals, with the longest running jobs first. This is useful in matching SQLs to jobs.
Where:
<start_time> = start of period to report in format YYYYMMDDHH24MISS
<end_time> = end of period to report in format YYYYMMDDHH24MISS
Note that the job must have completed for it to be reported by this SQL script.
SELECT
phase,
phase_name,
product,
job_name,
max_elapsed_time,
min_start_time,
max_end_time,
workers
FROM
(SELECT
phase,
phase_name,
product,
job_name,
MAX(elapsed_time) elapsed_time_unconv,
LPAD(FLOOR(MAX(elapsed_time)*24), 4)||':'||
LPAD(FLOOR((MAX(elapsed_time)*24-floor(MAX(elapsed_time)*24))*60), 2, '0')||':'||
LPAD(MOD(ROUND(MAX(elapsed_time)*86400), 60), 2, '0') max_elapsed_time,
INITCAP(TO_CHAR(MIN(start_time),'MON DD HH24:MI', 'NLS_DATE_LANGUAGE = american')) min_start_time,
INITCAP(TO_CHAR(MAX(end_time),'MON DD HH24:MI', 'NLS_DATE_LANGUAGE = american')) max_end_time,
count(worker_id) workers
FROM ad_task_timing
WHERE session_id = <session_id>
AND ( start_time BETWEEN TO_DATE('<start_time>','YYYYMMDDHH24MISS') AND TO_DATE('<end_time>','YYYYMMDDHH24MISS')
OR
NVL(end_time, start_time+elapsed_time) BETWEEN TO_DATE('<start_time>','YYYYMMDDHH24MISS') AND TO_DATE('<end_time>','YYYYMMDDHH24MISS') )
GROUP BY phase, phase_name, product, job_name)
ORDER BY elapsed_time_unconv DESC;
The output of this statement will look similar to the following:
phase phase_name product job_name          max_elapsed_time min_start_time max_end_time workers
----- ---------- ------- ----------------- ---------------- -------------- ------------ -------
  255 upg+80     zx      zxaptrxmigupd.sql          2:43:47 Mar 13 04:22   Mar 13 07:06      64
  255 upg+80     ap      apxlaupg.sql               1:38:57 Mar 13 04:03   Mar 13 05:42       1
To find upgrade AD jobs that are in progress, use adctrl option 1 (Show worker status).
The time at which they started can be determined by looking at the patch log file, e.g.:
$ cat u_merged.log|grep -A2 cstpostimportaad.sql
Assigned: file cstpostimportaad.sql on worker 48 for product bom username BOM. Time is: Fri Mar 22 2013 22:48:54
4.6 Report on CBO Statistics for All Oracle E-Business Suite Tables
Report on the CBO statistics for Oracle E-Business Suite tables during the upgrade, before adsstats.sql is run (in 12.2.0 CUP).
The script adsstats.sql will populate the statistics correctly before the end of the upgrade, so the fact that tables may have had incorrect statistics during the upgrade will no longer be visible. It may therefore not be possible to see, after the event, that a table had null, zero or inaccurate CBO statistics, and that this was the reason for an expensive execution plan.
The following script will report on the CBO statistics for all Oracle E-Business Suite tables:
SELECT
owner,
table_name,
num_rows,
TO_CHAR(last_analyzed,'DD-MON-YYYY HH24:MI:SS') last_analyzed
FROM all_tables
WHERE owner IN (SELECT upper(oracle_username) sname
FROM fnd_oracle_userid
WHERE oracle_id BETWEEN 900 AND 999
AND read_only_flag = 'U'
UNION ALL
SELECT DISTINCT upper(oracle_username) sname
FROM fnd_oracle_userid a,
fnd_product_installations b
WHERE a.oracle_id = b.oracle_id )
ORDER BY owner, table_name;
The output of this statement will look similar to the following:
OWNER TABLE_NAME NUM_ROWS LAST_ANALYZED
---------------------- ---------------------------- ---------- ------------------------
ABM ABM_ACC_MAP_SUM_REP 0 06-JAN-2013 08:46:33
ABM ABM_ACT_ACC_RU_DAT 0 06-JAN-2013 08:46:35
ABM ABM_ACT_STA_RU_DAT 0 06-JAN-2013 08:46:36
ABM ABM_ACT_TAGS 0 06-JAN-2013 08:46:37
ABM ABM_API_TEMPLATES 38 06-JAN-2013 08:44:53
ABM ABM_API_TEMPLATES_TL 722 06-JAN-2013 08:41:16
ABM ABM_API_TEMPLATE_ATTRIBUTES 248 06-JAN-2013 08:44:34
5. After the Oracle E-Business Suite Upgrade
These are diagnostics to be obtained directly after each of the main patches/RUPs has completed.
This will be directly after each of 12.2.0 CUP, Online Patching Enablement, R12.AD.C.Delta.n, R12.TXK.C.Delta.n and 12.2.n RUP.
5.1 Obtain AD Job Timing Report
This reports:
Number of successful, failed, deferred, re-started or skipped jobs.
The top 100 time consuming jobs.
The failed, deferred, re-started and skipped jobs.
The timing of each upgrade phase, with the total number of jobs, and the number deferred, re-started and skipped.
However, it only reports the Top 100 Time Consuming Jobs, and for AD Parallel jobs it considers each worker to be a different job. This means it may only report a handful of jobs. An alternative is to query the AD_TASK_TIMING table for long running jobs directly.
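For example, a query along the following lines (a sketch; replace <session_id> with the id of the upgrade session being analyzed) will list the longest running jobs directly from AD_TASK_TIMING:
SELECT job_name,
       product,
       ROUND(MAX(elapsed_time)*24*60) max_elapsed_mins, -- elapsed_time is stored as a fraction of a day
       COUNT(worker_id) workers
FROM ad_task_timing
WHERE session_id = <session_id>
GROUP BY job_name, product
ORDER BY 3 DESC;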
When ADOP, AutoPatch or AD Administration is run, it automatically generates an AD Job Timing report (adt<session_id>.lst). The contents of this report can be accessed from Oracle Application Manager, or reports can be obtained for completed upgrade sessions from the $APPL_TOP/admin/<SID>/out directory. The report is called adt<session_id>.lst.
The AD Job Timing Report can also be run for AD Administration jobs from the command line.
$ cd $APPL_TOP/admin/<SID>/out
$ sqlplus <APPS username>/<APPS password> @$AD_TOP/admin/sql/adtimrpt.sql \
<session id> <output file>