Tuesday, May 11, 2021

Top 5 Database and/or Instance Performance Issues in RAC Environment (Doc ID 1373500.1)

 In this Document

Purpose
Scope
Details
 Issue #1: High counts of lost blocks (gc lost blocks, gc current/cr lost blocks)
 Issue #2: High log file sync waits
 Issue #3: High Waits on Mutex
 Issue #4: High gc buffer busy, enq: TX - row lock contention, enq: TX - index contention, enq: TX - ITL allocate entry waits
 Issue #5: High CPU and memory consumption
References

APPLIES TO:

Oracle Database - Enterprise Edition - Version 10.1.0.2 to 12.1.0.1 [Release 10.1 to 12.1]
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Gen 1 Exadata Cloud at Customer (Oracle Exadata Database Cloud Machine) - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Information in this document applies to any platform.

PURPOSE

The purpose of this document is to provide a summary of the top database and/or instance performance issues, and their possible solutions, in a RAC environment.

Please note that Issue #3 (High Waits on Mutexes) and Issue #5 (High CPU and Memory Consumption) are generic database issues and not RAC-specific. However, they have been included in this document because they are among the top issues and can occur locally on an instance in a RAC environment.

SCOPE

DBAs

DETAILS

Issue #1: High counts of lost blocks (gc lost blocks, gc current/cr lost blocks)

Symptoms

I. AWR reports show a high number of lost blocks.

II. netstat -s reports an increasing number of packet reassembly failures and dropped packets.


Solutions

Use the following document to troubleshoot and resolve lost block issues; it describes symptoms, possible causes and solutions.

Document 563566.1 - gc block lost diagnostics
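As a quick first check alongside Document 563566.1, the cumulative lost-block counters can be read per instance from GV$SYSSTAT. This is a sketch only; the exact statistic names vary slightly between releases, so verify them against your version:

```sql
-- Sketch: cumulative lost-block statistics per instance since startup.
-- Statistic names vary slightly by release; adjust as needed.
SELECT inst_id, name, value
  FROM gv$sysstat
 WHERE name IN ('gc blocks lost', 'gc cr blocks lost', 'gc current blocks lost')
 ORDER BY inst_id, name;
```

Because these are totals since startup, take two snapshots a few minutes apart and compare the deltas rather than reading the absolute values.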

 

Issue #2: High log file sync waits

Symptoms

I. AWR reports show that log file sync is one of the TOP 5 waits consistently.

II. Average log file sync wait time is high (> 20 ms).

III. Average log file parallel write is high (> 10 ms).

IV. Average redo write broadcast ack time or wait for scn ack is high (> 10 ms).

V. Average log file sync wait time is low, but there are too many log file sync waits.


Background

When a user session COMMITs or ROLLBACKs, the session's redo information needs to be flushed by LGWR to the redo logfile. The user session waits on 'log file sync' while waiting for LGWR to post it back to confirm that all redo changes are safely on disk.

Example:

WAIT #0: nam='log file sync' ela= 977744 buffer#=754 p2=0 p3=0 obj#=114927 tim=1212384670107387

Parameters:

  P1 = buffer#
  P2 = Not used
  P3 = Not used
  obj# = object_id

All changes up to this buffer# (in the redo log buffer) must be flushed to disk and the writes confirmed to ensure that the transaction is committed, and will remain committed upon an instance crash.

A typical life cycle of 'log file sync' wait

1. A user session issues a commit or rollback and starts waiting on log file sync.
2. LGWR gathers the redo information to be written to the redo log files, issues the I/O, queues a BOC (broadcast on commit) message to an LMS process, and posts the LMS process.
3. LGWR waits for the redo buffers to be flushed to disk and for the SCN to be ACK'd.
4. Completion of the I/O and receipt of the ACK from LMS signal write complete. LGWR then posts the foreground process to continue.
5. The foreground process wakes up and the log file sync wait ends.

Important log file sync related statistics and events

redo write time - Total elapsed time of the write from the redo log buffer to the current redo log file in 10s of milliseconds.
redo writes - Total number of writes by LGWR to the redo log files. "redo blocks written" divided by this statistic equals the number of blocks per write.
log file parallel write - Time it takes for the I/Os to complete. Even though redo records are written in parallel, the parallel write is not complete until the last I/O is on disk.
redo write broadcast ack time - Total amount of the latency associated with broadcast on commit beyond the latency of the log write (in microseconds).
user commits - Number of user commits. When a user commits a transaction, the redo generated that reflects the changes made to database blocks must be written to disk. Commits often represent the closest thing to a user transaction rate.
user rollbacks - Number of times users manually issue the ROLLBACK statement or an error occurs during a user's transactions.
The script provided in Document 1064487.1 - Script to Collect Log File Sync Diagnostic Information (lfsdiag.sql) collects useful information for troubleshooting log file sync issues.
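The average redo write latency can be estimated directly from the statistics above. This is a sketch; the values are cumulative since instance startup, so deltas over a short interval during the problem window are more meaningful than the lifetime average:

```sql
-- Sketch: estimate average redo write latency.
-- 'redo write time' is reported in units of 10 ms, so multiply by 10
-- to express the result in milliseconds.
SELECT ROUND(10 * w.value / NULLIF(c.value, 0), 2) AS avg_redo_write_ms
  FROM v$sysstat w, v$sysstat c
 WHERE w.name = 'redo write time'
   AND c.name = 'redo writes';
```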

Possible Causes

I. Poor IO service time and throughput of the devices where redo logs reside.

II. Oracle bugs. Please review WAITEVENT: "log file sync" Reference (Document 34592.1) for known Oracle bugs.

III. LMS not running in RT class.

IV. Scheduling delays with LGWR process.

V. Excessive number of commits.

VI. OS resource starvation.


Solutions

I. Move redo log files to disks with higher throughput and better response time if log file parallel write is consistently high (> 10 ms). log file parallel write should ideally be in the 1-2 ms range.

II. Apply fixes for the known Oracle bugs that are applicable to your environment. The most effective way to get those fixes in is to apply the latest PSU patches. Document 756671.1 has more information on the latest PSUs.

III. Ensure that LMS processes are running in RT class. LMS processes run in RT class by default.

IV. Reduce number of commits by using batch commits instead of committing after every DML operation.

V. See if any activity can safely be done with NOLOGGING / UNRECOVERABLE options.

VI. See if COMMIT NOWAIT option can be used. Please refer to Document 857576.1 for more information.          
@For Support Only: Renice LGWR to run at a higher priority, or run LGWR in the RT class by adding LGWR to the parameter _high_priority_processes='VKTM|LMS|LGWR'. Consider doing this only if log file sync is high and scheduling delay of LGWR is found to be causing it. Be prepared to test it thoroughly. See Doc IDs 12951619.8 and 1523164.1: LGWR is already added to _high_priority_processes in 11.2.0.4 (with a backport available for 11.2.0.3.5), so there is no need to adjust LGWR priority manually there.
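The batch-commit suggestion in item IV can be sketched in PL/SQL as follows. The table and column names here (ORDERS_STAGE, ORDERS, ORDER_ID, STATUS) are purely illustrative:

```sql
-- Sketch: commit once per 1,000 rows instead of once per row.
-- ORDERS_STAGE / ORDERS are hypothetical tables for illustration only.
DECLARE
  v_rows PLS_INTEGER := 0;
BEGIN
  FOR r IN (SELECT order_id FROM orders_stage) LOOP
    UPDATE orders SET status = 'PROCESSED' WHERE order_id = r.order_id;
    v_rows := v_rows + 1;
    IF MOD(v_rows, 1000) = 0 THEN
      COMMIT;              -- one log file sync per 1,000 rows
    END IF;
  END LOOP;
  COMMIT;                  -- final commit for the remainder
END;
/
```

Batching commits trades durability granularity for fewer log file sync waits, so confirm the application can tolerate re-processing a partial batch after a failure.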

Issue #3: High Waits on Mutex

Mutexes are a lighter-weight and more granular concurrency mechanism than latches. The reason for obtaining a mutex is to ensure that certain operations are properly managed for concurrency; e.g., if one session is changing a data structure in memory, then another session must wait to acquire the mutex before it can make a similar change. The following are the most common mutex-related waits:

A. cursor: pin S wait on X
B. cursor: pin S
C. library cache: Mutex X

Symptoms (A)

AWR reports show cursor: pin S wait on X as one of the top waits.


Background (A)


A session may wait on this event when it is trying to get a mutex pin in share (S) mode, but another session is holding the mutex pin on the same cursor object in exclusive (X) mode. Frequently, waits for 'cursor: pin S wait on X' are a symptom and not the cause; there may be underlying tuning requirements or known issues.

Possible Causes (A)

Please review Troubleshooting 'cursor: pin S wait on X' waits (Document 1349387.1).


Solutions (A)

Please review Troubleshooting 'cursor: pin S wait on X' waits (Document 1349387.1).


Symptoms (B)

AWR reports show cursor: pin S as one of the top waits.


Background (B)

A session waits for "cursor: pin S" when it wants a specific mutex in S (share) mode on a specific cursor and there is no concurrent X holder, but it could not acquire that mutex immediately. This may seem a little strange, as one might question why there should be any form of wait to get a mutex which has no session holding it in an incompatible mode.

The reason for the wait is that in order to acquire the mutex in S mode (or release it), the session has to increment (or decrement) the mutex reference count, and this requires an exclusive atomic update to the mutex structure itself. If there are concurrent sessions trying to make such an update to the mutex, then only one session can actually increment (or decrement) the reference count at a time. A wait on "cursor: pin S" thus occurs if a session cannot make that atomic change immediately due to other concurrent requests.
Mutexes are local to the current instance in RAC environments. 

Parameters:
   P1 = idn
   P2 = value
   P3 = where (where|sleeps in 10.2)

 idn is the mutex identifier value, which matches the HASH_VALUE of the SQL statement for which we are waiting to get the mutex. The SQL can usually be found using the IDN value in a query of the form:

SELECT sql_id, sql_text, version_count
  FROM v$sqlarea
 WHERE hash_value = &IDN;

If the SQL_TEXT shown for the cursor is of the form "table_x_x_x_x" then this is a special internal cursor - see Document 1298471.1 for information about mapping such cursors to objects.

P1RAW is the same value in hexadecimal and it can be used to search in trace files for SQL matching to that hash value.

Possible Causes (B)

I. Heavy concurrency for a specific mutex, especially on systems with multiple CPUs.

II. Waits for very many different "idn" values when under load.

III. Oracle Bugs
      Bug 10270888 - ORA-600[kgxEndExamine-Bad-State] / mutex waits after a self deadlock
      Bug 9591812 - Wrong wait events in 11.2 ("cursor: mutex S" instead of "cursor: mutex X")
      Bug 9499302 - Improve concurrent mutex request handling
      Bug 7441165 - Prevent preemption while holding a mutex (fix only works on Solaris)
      Bug 8575528 - Missing entries in V$MUTEX_SLEEP.location


Solutions (B)

I. Apply fix for Bug 10411618.

II. For any identified "hot" SQLs, one can reduce the concurrency on specific cursors by replacing the one SQL with some variants which are executed by different sessions. Please review WAITEVENT: cursor: pin S Reference (Document 1310764.1) for further details.

III. Apply fixes for other known Oracle bugs. The most effective way to get the fixes in is to apply the latest PSU patches. Document 756671.1 has more information on recommended patches.


Symptoms (C)

AWR reports show library cache: Mutex X as one of the TOP waits.


Background (C)

The library cache mutex is acquired for similar purposes that the library cache latches were acquired in prior versions of Oracle. In 10g, mutexes were introduced for certain operations in the library cache. Starting with 11g, the library cache latches were replaced by mutexes. This wait event is present whenever a library cache mutex is held in exclusive mode by a session and other sessions need to wait for it to be released.

Individual Waits:
Parameters:
  P1 = "idn" = Unique Mutex Identifier
  P2 = "value" = Session ID holding the mutex
  P3 = "where" = location in code (internal identifier) where mutex is being waited for

Systemwide Waits:
At a systemwide level, there are two views which will help diagnose this wait:

GV$MUTEX_SLEEP (or V$MUTEX_SLEEP for non-RAC)
and GV$MUTEX_SLEEP_HISTORY (or V$MUTEX_SLEEP_HISTORY for non-RAC)

These views track the instance-wide usage of mutexes since instance startup. Since these views show totals since startup, they are most meaningful when you obtain the difference in values during a short time interval when there was a problem. The easiest way to see this information is through an AWR or Statspack report, in the "Mutex Sleep Summary" section.
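A minimal sketch of querying these views directly to find the hottest mutexes (column names as in recent releases; as noted above, snapshot the output twice during the problem window and compare the deltas):

```sql
-- Sketch: mutex types/locations with the most sleeps since startup.
SELECT inst_id, mutex_type, location, sleeps, wait_time
  FROM gv$mutex_sleep
 ORDER BY sleeps DESC;
```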

Possible Causes (C)

I. Frequent Hard Parses.

II. High Version Counts.

III. Invalidations and reloads.

IV. Oracle bugs. Please review WAITEVENT: "library cache: mutex X" (Document 727400.1) for the list of known Oracle bugs.


Solutions (C)

I. Reduce hard parsing, invalidations and reloads. Please review Troubleshooting: Tuning the Shared Pool and Tuning Library Cache Latch Contention (Document 62143.1) for more information.

II. Review Troubleshooting: High Version Count Issues (Document 296377.1) to resolve high version count issues.

III. Apply fixes for the known Oracle bugs. The most effective way to get the fixes in is to apply the latest PSU patches. Document 756671.1 has more information on recommended patches.

Issue #4: High gc buffer busy, enq: TX - row lock contention, enq: TX - index contention, enq: TX - ITL allocate entry waits

A. enq: TX - Index Contention
B. enq: TX - ITL allocate entry waits
C. enq: TX - row lock contention

Symptoms (A)

I. AWR reports show high wait on enq: TX - index contention and enq: TX - row lock contention in mode 4.

II. AWR reports show high values for branch node splits, leaf node splits and leaf node 90-10 splits.

III. AWR reports (Segments by Row Lock Waits) show high row lock waits for a particular segment.

IV. AWR reports show other waits such as gc buffer busy waits on index branch or Leaf blocks, gc current block busy on Remote Undo Headers and gc current split.

Example:

Top 5 Timed Foreground Events:

Event                       Waits   Time(s)  Avg wait (ms)  % DB time  Wait Class
enq: TX - index contention  29,870  1,238    41             9.52       Concurrency

Instance Activity Stats:

Statistic               Total   per Second  per Trans
branch node splits      945     0.26        0.00
leaf node 90-10 splits  1,670   0.46        0.00
leaf node splits        35,603  9.85        0.05

Segments by Row Lock Waits:

Owner     Tablespace  Object Name           Obj. Type  Row Lock Waits  % of Capture
ACSSPROD  ACSS_IDX03  ACSS_ORDER_HEADER_PK  INDEX      3,425           43.62
ACSSPROD  ACSS_IDX03  ACSS_ORDER_HEADER_ST  INDEX      883             11.25
ACSSPROD  ACSS_IDX03  ACSS_ORDER_HEADER_DT  INDEX      682             8.69


Background (A)

When a transaction inserting a row into an index has to wait for the end of an index block split being performed by another transaction, the session waits on the event enq: TX - index contention.

Possible causes (A)

I. High number of concurrent inserts leading to excessive index block splits.

II. Right-growing index with key generated from a sequence.


Solutions (A)

The solution is to reorganize the index in a way that avoids the contention or hot spots. The options are:

I. Global hash partition the index.
II. Recreate the index as a reverse key index (not suitable for large tables; may require the buffer cache to be increased accordingly).
III. If the index key is generated from a sequence, increase the cache size of the sequence and make the sequence NOORDER if the application supports it.
IV. Beginning in 18c, "Scalable sequences significantly reduce the sequence and index block contention and provide better data load scalability, compared to the solution of configuring a very large sequence cache using the CACHE clause of ... ALTER SEQUENCE."
Reference: https://docs.oracle.com/en/database/oracle/oracle-database/18/newft/new-features.html#GUID-CB2428B8-A5BA-4B13-B437-ECB5F0C2C84E
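Options III and IV can be sketched as follows. The sequence names are hypothetical, NOORDER assumes the application tolerates out-of-order keys, and the SCALE clause requires 18c or later:

```sql
-- ORDER_ID_SEQ is a hypothetical sequence name.
ALTER SEQUENCE order_id_seq CACHE 10000 NOORDER;

-- 18c and later: a scalable sequence prefixes each value with an
-- instance/session-derived offset, spreading inserts across leaf blocks.
CREATE SEQUENCE order_id_seq2 START WITH 1 INCREMENT BY 1 SCALE;
```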


Symptom (B)

AWR reports show high wait on enq: TX - allocate ITL entry and enq: TX - row lock contention in mode 4.


Background (B)

A session waits on enq: TX - allocate ITL entry when it wants to lock a row in the block but one or more other sessions have rows locked in the same block, and there is no free ITL slot in the block. Usually, Oracle Database dynamically adds another ITL slot. This may not be possible if there is insufficient free space in the block to add an ITL.

Possible Causes (B)

A low INITRANS setting relative to the number of concurrent transactions.


Solutions (B)

Find the segments that have high ITL waits from AWR reports, or by using the following SQL:

SELECT owner, object_name, object_type, value
  FROM v$segment_statistics
 WHERE statistic_name = 'ITL waits'
   AND value > 0
 ORDER BY value DESC;

Increase the INITRANS value of the segments that are seeing high ITL waits.
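A minimal sketch of raising INITRANS, assuming hypothetical object names. Note that a higher INITRANS only applies to newly formatted blocks, so existing blocks must be rewritten for the change to take full effect:

```sql
-- ORDERS / ORDERS_PK are hypothetical names.
ALTER TABLE orders INITRANS 16;
ALTER TABLE orders MOVE;                    -- rewrites existing blocks (offline)
ALTER INDEX orders_pk REBUILD INITRANS 16;  -- indexes are unusable after MOVE
```

Schedule the MOVE/REBUILD in a maintenance window, since both are offline operations in their basic form.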


Symptom (C)

AWR reports show high wait on enq: TX - row lock contention in Exclusive mode (6).


Background (C)

A session waits on enq: TX - row lock contention, when it waits for a row level lock that is held by another session. This happens when one user has updated or deleted a row, but has not committed or rolled back yet, and another session wants to update or delete the same row.

Solution (C)

This is an application issue, and the application developer needs to be engaged to look into the SQL statements involved. The following documents might be helpful in drilling down further:

Document 102925.1 - Tracing sessions: waiting on an enqueue
Document 179582.1 - How to Find TX Enqueue Contention in RAC or OPS
Document 1020008.6 - SCRIPT: FULLY DECODED LOCKING
Document 62354.1 - TX Transaction locks - Example wait scenarios
Document 224305.1 - Autonomous Transaction can cause locking
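As a starting point for drilling down, current blockers and waiters can be listed cluster-wide from GV$SESSION. This is a sketch; the BLOCKING_INSTANCE column is available from 11g onward:

```sql
-- Sketch: who is blocking whom, across all RAC instances.
SELECT inst_id, sid, serial#, username, event,
       blocking_instance, blocking_session, seconds_in_wait
  FROM gv$session
 WHERE blocking_session IS NOT NULL
 ORDER BY seconds_in_wait DESC;
```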

Issue #5: High CPU and memory consumption

A. High CPU Utilization
B. High Memory Utilization

Symptoms (A)

I. OS tools such as top, prstat and vmstat show that user CPU usage is very high, and the top CPU-consuming processes are either Oracle shadow or background processes.

II. AWR reports show that the top waits are one or more of the following:
    latch free
    cursor: pin S wait on X, cursor: pin S, or library cache: mutex X
    latch: cache buffers chains
    resmgr: cpu quantum
    enq: RO - fast object reuse
    DFS lock handle

III. AWR reports (SQLs ordered by buffer gets) show SQLs with very high buffer gets per execution and CPU time.

IV. The process stack of the high CPU-consuming process shows that the process is spinning.


Possible Causes (A)

I. Oracle bugs:
  Bug 12431716 - Mutex waits may cause higher CPU usage in 11.2.0.2.2 PSU / GI PSU
  Bug 8199533 - NUMA enabled by default can cause high CPU
  Bug 9226905 - Excessive CPU usage / OERI:kkfdPaPrm from Parallel Query / High Version count on PX_MISMATCH
  Bug 7385253 - Slow Truncate / DBWR uses high CPU / CKPT blocks on RO enqueue
  Bug 10326338 - High "resmgr:cpu quantum" but CPU has idle time
  Bug 6455161 - Higher CPU / Higher "cache buffer chains" latch gets / Higher "consistent gets" after truncate/Rebuild

II. Very large SGA on the Linux x86-64 platform without the implementation of HugePages.

III. Expensive SQL queries with sub-optimal execution plans.

IV. Runaway processes.


Solutions (A)

I. Apply fixes for the bug(s) that you are encountering. The most effective way to get all those fixes in is to apply the latest PSU patches. Document 756671.1 has more information on the latest PSUs.

II. Implement hugepages. Please refer to Document 361670.1 -  Slow Performance with High CPU Usage on 64-bit Linux with Large SGA for further explanation.

III. Tune the SQLs that are incurring excessive buffer gets and CPU time. Please refer to Document 404991.1 - How to Tune Queries with High CPU Usage for more information.
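To find candidates for SQL tuning outside an AWR report, a sketch of listing the top statements by buffer gets per execution from V$SQLSTATS:

```sql
-- Sketch: top 10 statements by total buffer gets, with per-execution cost.
SELECT *
  FROM (SELECT sql_id, executions, buffer_gets,
               ROUND(buffer_gets / NULLIF(executions, 0)) AS gets_per_exec,
               ROUND(cpu_time / 1e6, 1) AS cpu_secs
          FROM v$sqlstats
         ORDER BY buffer_gets DESC)
 WHERE ROWNUM <= 10;
```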


Symptoms (B)

I. OS tools such as ps and pmap show that a process is growing in memory. pmap shows the growth is in the heap and/or stack area of the process. For example:

# pmap -x 26677

Address    Kbytes  RSS  Anon  Locked  Mode   Mapped File
00010000   496     480  -     -       r-x--  bash
0009A000   80      80   24    -       rwx--  bash
000AE000   160     160  40    -       rwx--  [ heap ]
FF100000   688     688  -     -       r-x--  libc.so.1

II. Session UGA and/or PGA memory of an Oracle shadow process is growing.
       
SELECT se.sid, n.name, MAX(se.value) maxmem
  FROM v$sesstat se, v$statname n
 WHERE n.statistic# = se.statistic#
   AND n.name IN ('session pga memory', 'session pga memory max',
                  'session uga memory', 'session uga memory max')
 GROUP BY n.name, se.sid
 ORDER BY 3;

III. Number of open cursors for a session/process is growing monotonically.
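For symptom III, a sketch of listing the sessions holding the most open cursors (a monotonically growing count suggests a cursor leak):

```sql
-- Sketch: sessions with the most currently open cursors.
SELECT s.sid, s.username, st.value AS open_cursors
  FROM v$sesstat st, v$statname n, v$session s
 WHERE n.name = 'opened cursors current'
   AND st.statistic# = n.statistic#
   AND st.sid = s.sid
 ORDER BY st.value DESC;
```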


Possible Causes (B)

I. Oracle bugs:
      Bug 9919654 - High resource / memory use optimizing SQL with UNION/set functions with many branches
      Bug 10042937 - HIGH MEMORY GROUP IN GES_CACHE_RESS AND ORA-4031 ERRORS
      Bug 7429070 - BIG PGA MEM ALLOC DURING PARSE TIME - KXS-HEAP-C
      Bug 8031768 - ORA-04031 SHARED POOL "KKJ JOBQ WOR"
      Bug 10220046 - OCI memory leak using non-blocking connection for fetches
      Bug 6356566 - Memory leak / high CPU selecting from V$SQL_PLAN
      Bug 7199645 - Long parse time / high memory use from query rewrite
      Bug 10636231 - High version count for INSERT .. RETURNING statements with reason INST_DRTLD_MISMATCH
      Bug 9412660 - PLSQL cursor leak / ORA-600[kglLockOwnersListDelete]
      Bug 6051972 - Memory leak if connection to database is frequently opened and closed
      Bug 4690571 - Memory leak using BULK COLLECT within cursor opened using native dynamic SQL

II. Cursor leaks caused by application not closing cursors explicitly.

III. SQLs with abnormally large hash join and/or sort operations.


Possible Solutions (B)

I. Apply fix for the applicable bug(s). The most effective way to get those fixes in is to apply the latest PSU patches. Document 756671.1 has more information on the latest PSUs.

II. Ensure cursors are explicitly closed by application.

III. Avoid very large hash join and/or sort operation.

REFERENCES

NOTE:34592.1 - WAITEVENT: "log file sync" Reference Note
NOTE:62143.1 - Troubleshooting: Understanding and Tuning the Shared Pool
NOTE:62354.1 - Waits for 'Enq: TX - ...' Type Events - Transaction (TX) Lock Example Scenarios
NOTE:102925.1 - Tracing Sessions that are Waiting on an Enqueue or a Lock
NOTE:164768.1 - Troubleshooting: High CPU Utilization
NOTE:361670.1 - Slow Performance with High CPU Usage on 64-bit Linux with Large SGA
NOTE:404991.1 - How to Tune Queries with High CPU Usage
NOTE:563566.1 - Troubleshooting gc block lost and Poor Network Performance in a RAC Environment
NOTE:727400.1 - WAITEVENT: "library cache: mutex X"
NOTE:756671.1 - Master Note for Database Proactive Patch Program
NOTE:786507.1 - How to Determine the Blocking Session for Event: 'cursor: pin S wait on X'
NOTE:857576.1 - Alternative and Specialised Options as to How to Avoid Waiting for Redo Log Synchronization
NOTE:873243.1 - Troubleshooting 'enq: TX - index contention' Waits
NOTE:1020008.6 - SCRIPT: FULLY DECODED LOCKING
NOTE:1064487.1 - Script to Collect Log File Sync Diagnostic Information (lfsdiag.sql)
NOTE:1298015.1 - WAITEVENT: "cursor: pin S wait on X" Reference Note
NOTE:1298471.1 - H$PSEUDO_CURSOR
NOTE:1310764.1 - WAITEVENT: "cursor: pin S" Reference Note
NOTE:1349387.1 - Troubleshooting 'cursor: pin S wait on X' Waits
NOTE:1356828.1 - FAQ: 'cursor: mutex ..' / 'cursor: pin ..' / 'library cache: mutex ..' Type Wait Events
NOTE:1357946.1 - Troubleshooting 'library cache: mutex X' Waits
NOTE:1511700.1 - ORA-00060 Single-Resource Deadlock Occurs on Autonomous Transaction

BUG:4690571 - MEMORY LEAK USING BULK COLLECT WITHIN CURSOR OPENED USING NATIVE DYNAMIC SQL
BUG:6051972 - MEMORY LEAK IF CONNECTION TO DATABASE SERVER IS FREQUENTLY OPENED AND CLOSED
BUG:6356566 - STATSPACK.SNAP CONSUMES ALMOST 100% CPU
BUG:6455161 - HIGHER "CONSISTENT GETS" AFTER TRUNCATE
BUG:7199645 - QUERY_REWRITE_ENABLED TAKES LONG PARSE TIME AND CONSUMES ALL SERVER MEMORY
BUG:7385253 - DBWR IS CONSUMING HIGH CPU
BUG:7429070 - BIG PGA MEM ALLOC DURING PARSE TIME - KXS-HEAP-C
BUG:8031768 - ORA-04031 SHARED POOL "KKJ JOBQ WOR"
BUG:9591812 - INCORRECT WAIT EVENTS IN 11.2
BUG:10042937 - HIGH MEMORY GROUP IN GES_CACHE_RESS AND ORA-4031 ERRORS
BUG:10220046 - MEMORY LEAK OCCURS WITH NON-BLOCKING APP IN THE LATER VERSION 10.2.0.4
BUG:10411618 - ADD DIFFERENT WAIT SCHEMES FOR MUTEX WAITS
BUG:10636231 - HIGH VERSION COUNT FOR INSERT STATEMENTS WITH REASON INST_DRTLD_MISMATCH

Performance Best Practices for Oracle E-Business Suite on Oracle Cloud Infrastructure and On-Premises (Doc ID 2528000.1)

This document offers strategic guidance on performance best practices for Oracle E-Business Suite (EBS) environments.

Note: Unless specifically stated otherwise, the content applies to on-premises, Oracle Cloud Infrastructure (OCI) and Exadata Cloud@Customer EBS instances.

The contents are divided into a number of separate sections and subsections that address a variety of common issues and questions about Oracle E-Business Suite performance. You may read through the document sequentially, or refer to specific areas of interest as needed. There is a change log at the end of this document.

Note: To receive automatic notifications of updates, subscribe to the Hot Topics Notification feature in My Oracle Support, as described in Document 793436.2, Subscribing to Hot Topic E-Mails.

In This Document

This document is divided into the following sections:

Section 1: Introduction
Section 2: Oracle Cloud Infrastructure Topics
  2.1. Pre-Migration Best Practices
  2.2. Oracle Cloud Infrastructure Best Practices
Section 3: Database Tier Best Practices
Section 4: Application Tier Best Practices
  4.1. General
  4.2. Forms-Based Applications
  4.3. HTML-Based Applications
  4.4. Oracle Workflow
  4.5. Concurrent Processing
    4.5.1. Concurrent Managers
    4.5.2. Data Purging
    4.5.3. Job Scheduling
    4.5.4. Conflict Resolution Manager
    4.5.5. Transaction Managers
  4.6. Oracle RAC Node Affinity
Section 5: Sizing Oracle E-Business Suite for Mobile Applications
Section 6: Online Patching Best Practices
Section 7: Sizing Your Oracle E-Business Suite Environment
Section 8: Benchmarking Best Practices
Section 9: Additional Best Practices
Section 10: References

Section 1. Introduction

This document provides a set of best practices to optimize the performance of Oracle E-Business Suite. It can be regarded as a form of FAQ.

Note: The sections and topics are ordered and numbered solely for ease of reference, and do not necessarily reflect the order in which they may apply to your environment or should be acted on.
Section 2. Oracle Cloud Infrastructure Topics

Note: The guidance in this section applies specifically to Oracle E-Business Suite on Oracle Cloud Infrastructure (OCI).

Oracle E-Business Suite Releases 12.2.3 (and higher) and 12.1.3 are certified with Oracle Cloud Infrastructure. Oracle E-Business Suite Cloud Manager offers a graphical user interface for creating, managing, and configuring EBS environments on OCI. This includes the capability to provision environments from backups created using the Oracle E-Business Suite Cloud Backup Module as part of a lift and shift operation. For more information about Oracle E-Business Suite on OCI, refer to My Oracle Support Knowledge Document 2517025.1, Getting Started with Oracle E-Business Suite on Oracle Cloud Infrastructure.

2.1. Pre-Migration Best Practices

Before migrating Oracle E-Business Suite to OCI, you should determine the required Oracle Cloud Infrastructure shape, including such items as the number of OCPUs, amount of memory, storage volume capacity, and network connectivity.

Reducing the size of the database will reduce the time to move (lift and shift) an instance either from on-premises to cloud or from one cloud/region to another. Reducing the volume of the data has other advantages, including improved performance, reduced storage costs, reduced backup time, and (particularly important for many customers) reduced upgrade time and associated downtime.

2.2. Oracle Cloud Infrastructure Best Practices

Oracle E-Business Suite performance is significantly affected by the network latency between the database and application tiers. These should therefore be co-located in the same (regional) data centre to minimize network latency.

The AcceptMutex directive sets the method HTTP uses to serialize multiple children accepting requests on network sockets. For improved performance on application tiers, you should set AcceptMutex to sysvsem instead of fcntl. Setting AcceptMutex to sysvsem avoids lock file issues for NFS mounts.
To deploy FSS with Oracle E-Business Suite on OCI, follow the guidelines in the Oracle by Example (OBE) tutorial: Sharing the Application Tier File System in Oracle E-Business Suite Release 12.2 or 12.1.3 Using the Oracle Cloud Infrastructure File Storage Service.

For mount options for Network File Systems (NFS), refer to My Oracle Support Knowledge Document 1375769.1, Sharing The Application Tier File System in Oracle E-Business Suite Release 12.2, which provides valuable guidance on setting NFS parameters appropriately for your environment. On the application tier, the read and write sizes should typically both be set to at least 64 KB. On OCI, these settings will be managed automatically by Oracle E-Business Suite Cloud Manager; if you wish to set them manually, refer to Document 1375769.1.

Use the SQL Performance Analyzer (SPA) database feature to compare the performance of Oracle E-Business Suite before and after your migration to cloud. For further information, refer to the Oracle White Paper: Migrating to the Oracle Database Cloud with SQL Performance Analyzer.

Section 3. Database Tier Best Practices

This section provides guidance on Oracle Database settings and options that are particularly relevant to performance.

Refer to My Oracle Support Knowledge Document 244040.1, Oracle E-Business Suite Recommended Performance Patches, for known performance-related bugs and their recommended patches for Oracle E-Business Suite environments. Identify the necessary patches and apply them. Use the EBS Technology Codelevel Checker (ETCC) to ensure that all required database tier patches have been applied. For more information about ETCC and the appropriate set of recommended patches to install, refer to My Oracle Support Knowledge Document 1594274.1, Consolidated List of Patches and Technology Bug Fixes.
Refer to My Oracle Support Knowledge Document 396009.1, Database Initialization Parameter Settings for Oracle E-Business Suite, for a list of recommended database parameters. Ensure all the mandatory parameters are set appropriately. Be aware that these settings are also applicable to OCI environments, even if that is not explicitly stated in the document.

Set the parameter FILESYSTEMIO_OPTIONS=SETALL and ensure that asynchronous I/O is enabled for kernel operations. I/O performance can be severely degraded if you do not set this parameter.

For Oracle E-Business Suite Release 12.2.x, set the parameter RESULT_CACHE_MAX_SIZE to 600 MB to make optimal use of result cache memory. This will reduce the DML activity on the tables, as a single DML action could invalidate the result cache and might result in latch free waits, potentially affecting the overall performance of the database. Example: the FND_PROFILE table.

Configure HugePages on the database server to improve SGA management. This enables the SGA to be pinned in memory, insuring against page-in or page-out operations and reducing the system resources needed to manage page table entries. Be aware that only the SGA (and not the PGA) can utilize and benefit from HugePages. Refer to My Oracle Support Knowledge Document 1392497.1, USE_LARGE_PAGES To Enable HugePages, for information about enabling HugePages for the Oracle E-Business Suite database. Set the parameter USE_LARGE_PAGES='only' on each instance, so that the instance will only start if sufficient HugePages are available, thereby avoiding out-of-memory errors.

If you are using Oracle Database Release 12.1.x on an Exadata Linux environment, set the parameter EXAFUSION_ENABLED=0 in your database initialization parameter file. This will ensure the parameter is disabled (the default).
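The initialization parameter recommendations above can be sketched as ALTER SYSTEM commands. This is a sketch only; validate the values and scope against Document 396009.1 for your own environment before applying:

```sql
-- Sketch of the settings discussed above; a restart is needed for
-- the SPFILE-scoped changes to take effect.
ALTER SYSTEM SET filesystemio_options = 'SETALL' SCOPE=SPFILE;
ALTER SYSTEM SET result_cache_max_size = 600M SCOPE=SPFILE;      -- EBS 12.2.x
ALTER SYSTEM SET use_large_pages = 'only' SCOPE=SPFILE SID='*';  -- requires HugePages
```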
Exafusion is a Direct-to-Wire protocol on Exadata Linux environments that allows database processes to receive and transmit Oracle RAC messages directly over the InfiniBand network, bypassing the overhead of entering the OS kernel. For more information on Exafusion and the database versions it supports, refer to My Oracle Support Knowledge Document 2037900.1, EXAFUSION Details and Supported Versions.

Pinning PL/SQL packages into the shared pool ensures that a package remains in memory, and prevents it from being paged out and then re-parsed upon reload. Use the DBMS_SHARED_POOL package to pin large and frequently used stored programs into the shared pool. The best time to pin an object is immediately after the database has been started; this increases the likelihood that enough contiguous memory will be available to store the object. Having objects pinned will reduce shared pool fragmentation and the chances of receiving an ORA-04031 error. Refer to My Oracle Support Knowledge Document 69925.1, Pinning Oracle E-Business Suite Objects Into The Shared Pool.

Set the sequence object cache size to 1000 or higher for gapless sequences utilized by bulk DML operations. Having a larger sequence cache will improve performance, especially in cases where there is contention on a particular sequence. Use a command like this example:

SQL> ALTER SEQUENCE <sequence_name> CACHE 1000;

Be aware that cached sequence numbers will be lost when the database is restarted.

Use native compilation to compile PL/SQL programs into native code that does not need to be interpreted at runtime. For more details, refer to My Oracle Support Knowledge Document 1086845.1, PLSQL Native Compilation (NCOMP).

For Oracle RAC instances, configure Jumbo Frames for the private cluster interconnects. This requires careful configuration and testing to realize its benefits. For current recommendations,
refer to My Oracle Support Knowledge Document 341788.1, Recommendation for the Real Application Cluster Interconnect and Jumbo Frames.

Set the parameter PARALLEL_FORCE_LOCAL=TRUE so that parallel query and parallel server processes are restricted to the Oracle RAC node where the query coordinator resides (the node on which the SQL statement was executed).

For a list of the top database and instance performance issues and solutions in Oracle RAC environments, refer to My Oracle Support Knowledge Document 1373500.1, Top 5 Database and/or Instance Performance Issues in RAC Environment.

Running the Gather Schema Statistics concurrent program will ensure that the database optimizer has up-to-date statistics to use when it formulates an execution plan. This should be run whenever there have been significant changes to data (such as data incorporation or integration from a large batch job). The program can be run for all schemas or a subset of schemas. Monitoring end-user transaction times and database performance will help you identify the applicable schemas and the interval needed to ensure peak performance. Refer to My Oracle Support Knowledge Document 1586374.1, Best Practices for Gathering Statistics with Oracle E-Business Suite.

Table 1: Do's and Don'ts When Gathering Object Statistics

Do's:

- Gather statistics for the application tables at regular intervals, as well as whenever there is more than 10% of data change.
- For data dictionary objects, gather statistics when there is a significant change in the dictionary, such as when many new objects are created after an upgrade or major patching cycle. The automatic statistics gathering job does not gather dictionary and fixed object statistics. Use the DBMS_STATS.GATHER_DICTIONARY_STATS package for gathering dictionary object statistics. Consider gathering dictionary statistics after any of: a database upgrade, a platform change, or an Oracle E-Business Suite upgrade.
- Use the DBMS_STATS.GATHER_FIXED_OBJECT_STATS package for gathering fixed object statistics. For optimal performance, gather statistics when there is a representative workload on the system. Consider gathering fixed object statistics after any of: a database upgrade, a platform change, or changes to database parameters.
- Gather statistics whenever a large volume of data is inserted into or deleted from the tables, or when the maximum or minimum values change.
- Use either the FND_STATS package or the Gather Schema/Table Statistics concurrent programs to generate statistics. Specify the GATHER_AUTO option to gather statistics incrementally. This option collects statistics for objects with either stale or missing statistics.
- Use AUTO sample size instead of a specific percentage when gathering statistics. This automatically determines the appropriate sampling percentage.
- Consider gathering and locking statistics for volatile tables (such as intermediate or interface tables) when representative data is populated into the tables. Examples include tables used in batch processes, such as AP_SELECTED_INVOICES and WSH_PR_WORKERS.
- To generate histograms for non-seeded columns, populate the FND_STATS.LOAD_HISTOGRAM_COLS table with the columns for which histograms are required. Note that if the Gather Schema Statistics concurrent program is stopped or terminated, when it is resubmitted it will restart from where it left off.
- While submitting the concurrent request to Gather Schema Statistics, ensure the 'Invalidate Dependent Cursors' parameter is set to 'No' to avoid invalidating cursors. This ensures that performance is maintained for the current session, and prevents excessive library cache waits due to hard parsing.
- Seed the table FND_STATS.LOAD_XCLUD_TAB to exclude a table from statistics generation when the Gather Schema Statistics program runs.

Don'ts:

- Do not gather statistics during peak business hours, as such operations can be resource-intensive.
- Do not gather statistics excessively: for example, on either entire schemas or the entire database on a regular basis such as nightly or weekly.
- Do not gather statistics on Global Temporary Tables.
- Do not use the standard Oracle database ANALYZE or DBMS_STATS commands to gather statistics, as they may result in suboptimal plans and are not supported for Oracle E-Business Suite.

Section 4. Application Tier Best Practices

The following guidance for the application tier is divided into subsections for general points, types of application, and individual components and technologies.

4.1. General

The guidance here applies across the application tier.

Use the EBS Technology Codelevel Checker (ETCC) to ensure that all required application tier patches have been applied. For more information about ETCC and the appropriate set of recommended patches to install, refer to My Oracle Support Knowledge Document 1594274.1, Consolidated List of Patches and Technology Bug Fixes. For the latest AD and TXK updates to apply, refer to My Oracle Support Knowledge Document 1617461.1, Latest AD and TXK Release Update Packs to Oracle E-Business Suite Release 12.2. For the latest performance patches to apply, refer to My Oracle Support Knowledge Document 244040.1, Recommended Performance Patches for Oracle E-Business Suite.

Both excessive logging and use of debugging typically have a considerable impact on application performance. The best practice is to set only the minimum logging and debugging levels, and disable them when they are no longer needed.
Ensure that the following profiles are set:

FND: Debug Log Enabled - set to 'Yes' (Releases 12.0 RUP3+, 12.1.x, 12.2)
FND: Debug Log Level - set to 'Unexpected' (Level=6)

In addition, you can query the activity on FND_LOG_MESSAGES to establish whether logging has been set (or perhaps inadvertently left on). If the query returns a significant number of rows, there could be exceptions or errors that need to be investigated.

Set the debug profile options at User level unless otherwise specifically required. Do not forget to reset the debug profiles once a debugging exercise has been completed. Use Sign-On Audit levels judiciously in order to avoid performance issues. For a description of auditing and logging features, refer to Oracle E-Business Suite Security Guide.

Establish a strategy to compress large files or heap dumps before transferring them, or when they are not in use. This will reduce both transfer time and the disk space needed.

4.2. HTML-Based Applications

These applications are based on Oracle Application Framework (FWK or OAF), and were originally known as Self-Service applications.

For sizing guidelines on the number of JVM processes and their JVM capacity requirements, refer to the section JVM Parameter Settings for Java on WLS Web Tier in Oracle E-Business Suite Upgrade Guide. Configure 1 JVM per 1-2 CPUs (depending on CPU speed). For the OACORE JVM, start with Xms 2048 MB and Xmx 2048 MB. Set the same values for Xms and Xmx to avoid heap memory reallocation at runtime.

For Oracle E-Business Suite Release 12.2 in a multi-tier application server topology, consider configuring the WebLogic Administration Server on a smaller shape, as there is no requirement to set a large heap size here. Set the maximum heap sizes for the JVM processes according to the maximum operating system physical memory available in the shape of your cloud instance.
Configuring large heap sizes for managed servers may result in swapping in the OS, which can significantly degrade application performance. Configure more managed servers to scale up to the required concurrency levels, and consider the number of CPUs available in the selected cloud instance shape when configuring additional managed servers.

Size the JVMs optimally to match the concurrency levels required. The total memory allocated to the JVMs is the most important factor affecting garbage collection (GC) performance. Monitor the frequency of collections, especially major collections (full GCs). Enable verbose GC in the JVM configuration parameters to facilitate tuning heap sizes based on GC traffic. On 64-bit instances, we do not recommend allocating huge heap sizes, but rather adding managed instances to the cluster to scale up to the target concurrency levels.

In Oracle E-Business Suite Release 12.2, the default 512 MB heap size is not enough for the Admin Server. We recommend setting the JVM parameters Xms to 1 GB and Xmx to 2 GB.

For Oracle E-Business Suite Release 12.2, consider additional sizing requirements for online patching. For initial sizing guidance, refer to Database and Application Tier Sizing Guidelines in Oracle E-Business Suite Installation Guide: Using Rapid Install. For more details on configuring the application tier, refer to My Oracle Support Knowledge Document 1905593.1, Managing Configuration of Oracle HTTP Server and Web Application Services in Oracle E-Business Suite Release 12.2.

4.3. Forms-Based Applications

These traditional applications are based on Oracle Forms.

Check the applicable sizing guidelines on the number of Forms server processes and their JVM capacity requirements by referring to the section "Planning Oracle E-Business Suite 12.2 Deployment Architecture" in Oracle E-Business Suite Upgrade Guide.

Train application users to utilize Oracle Forms optimally, so that they avoid blind queries or queries with unselective criteria.
Encourage users to provide selective criteria in Find windows and LOVs to enable index usage, as this will return their result set more quickly and reduce the usage of system resources. Ask users to avoid opening and closing forms across transactions.

Combine forms from multiple products into a single form to reduce network traffic and form opening times. Reduce the system load and network overhead to help maintain proper performance with high concurrency levels and high latency connections.

Generate a SQL trace and Forms Runtime Diagnostics (FRD) report to debug Oracle Forms performance issues. For more information on how to set up FRD logging, refer to My Oracle Support Knowledge Document 445166.1, R12: How To Create An FRD (Forms Runtime Diagnostic) Log Using Forms 10g.

4.4. Oracle Workflow

Oracle Workflow is used across a number of Oracle E-Business Suite application modules, and its role means performance tuning can be particularly important.

Once a flow is complete, much of the associated data becomes obsolete, not only taking space needlessly but also potentially affecting application runtime performance. You can purge obsolete data using the 'Purge Obsolete Workflow Runtime Data (FNDWFPR)' concurrent program. Optionally, you can use the 'Item Type' parameter to specify a particular item type to purge, or otherwise leave this parameter blank to purge runtime data for all item types. Failure to remove obsolete data may result in extended processing times. Refer to Purge Obsolete Workflow Runtime Data, Oracle Workflow Administrator's Guide.

Workflow Background Process is a concurrent program that is run to process deferred activities, timeout activities, and stuck processes. Set the Oracle Workflow background engine 'Process Stuck' parameter to 'No'. Ensure there is at least one background engine that can handle deferred and timeout activities, as well as stuck processes.
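For illustration, a background engine pass handling deferred and timeout activities (but not stuck processes, per the guidance above) can also be invoked through the WF_ENGINE API. This is a sketch only: in practice you would schedule the Workflow Background Process concurrent program rather than call the API directly, and you should confirm the parameter names against the Oracle Workflow API Reference for your release.

```
-- Sketch only: run one background engine pass over all item types,
-- processing deferred and timeout activities but not stuck processes.
BEGIN
  WF_ENGINE.BACKGROUND(
    itemtype         => NULL,    -- NULL means all item types
    process_deferred => TRUE,
    process_timeout  => TRUE,
    process_stuck    => FALSE);  -- 'Process Stuck' handled separately, as recommended
END;
/
```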
Where possible, use deferred activities to speed up online response times for flows such as scheduling PO Document Approval. Refer to Setting Up Background Workflow Engines, Oracle Workflow Administrator's Guide.

Use database partitioning to partition runtime Oracle Workflow tables such as WF_ITEM_TYPES and WF_ITEM_ATTRIBUTES for improved performance and scalability. Refer to Partitioning Tables in Oracle Workflow Administrator's Guide.

On Oracle RAC instances, employ node affinity for improved performance and scalability. Use the ITEM_TYPE column for node affinity. Refer to Setting Up Workflow RAC Affinity, Oracle Workflow Administrator's Guide.

Disable retention (the number of days for which processed messages are retained in an Oracle Workflow queue) using the command:

SQL> EXEC DBMS_AQADM.ALTER_QUEUE(queue_name => :b1, retention_time => 0);

Note: Do not use the DBMS_AQADM PL/SQL procedure to change any other aspects of Oracle Workflow queue setup.

4.5. Concurrent Processing

This section covers key aspects of concurrent processing where performance tuning can be critical.

4.5.1 Concurrent Managers

Concurrent managers are responsible for running requests for execution of concurrent programs. When an application user submits a request to run a concurrent program, the request is entered into a database table that stores all concurrent requests. Concurrent managers read requests from this table and start concurrent programs to process them.

Requests can be scheduled to run on a recurring basis. Check whether such automatic recurrence is appropriate, and whether performance is being impacted by the scheduling. Remove any requests that do not need to be run repeatedly.

Tune concurrent manager parameters such as the number of workers, sleep time and cache size depending on the concurrent program execution time. Table 2 provides recommendations for the optimal setting of key concurrent manager parameters.
Table 2: Recommendations for Key Concurrent Manager Parameters

Concurrent Manager | Average Request Duration | Number of Workers | Sleep Time [1] | Cache Size [2]
Fast               | 10 seconds               | Many              | 15-20 seconds  | Variable
Medium             | 5 minutes                | A few             | 60 seconds     | 2 x workers
Slow               | 30 minutes               | A few             | 60 seconds     | 2 x workers
Long               | > 2 hours                | Very few          | 60 seconds     | 1 x workers
Critical           | Variable                 | Variable          | Variable       | Variable

Footnote 1 - A manager does not sleep unless there is nothing for it to do, i.e. there are no pending requests available to run. As long as requests are available, the manager will not sleep. Each time a manager fails to lock a request in its cache, it will increment a counter. Each time through the main loop it will check this counter, and if the value is more than the threshold defined by the AFSPINMAX environment variable, the manager will go to sleep. This is to prevent 'spinning', where there are pending requests available but the manager is unable to lock any of them. Such a situation would cause the manager to execute its SQL continuously, without any break, and cause high CPU usage. The default value of AFSPINMAX is (cache size x 20), but it can be set to a different number if required.

Footnote 2 - The most important factor in choosing a cache size is how long requests will be running. There is little point in caching a lot of requests if each one runs for more than two hours.

The figures provided in Table 2 can be translated into the following practical guidance for concurrent manager setup:

For a manager that completes program execution in an average of 10 seconds, configure as many workers as possible (depending on CPU availability) and specify a sleep time of 15-20 seconds. Set the cache size as needed.

For a manager that completes program execution in an average of 5 minutes, configure a modest number of workers and specify a sleep time of 60 seconds. Set the cache size to 2 x the number of workers.
For a manager that completes program execution in an average of 30 minutes, do not configure many workers, and set the sleep time to 60 seconds. Set the cache size to 2 x the number of workers.

For a manager that completes program execution in an average of more than two hours, configure very few workers and set the sleep time to 60 seconds. Set the cache size to 1 x the number of workers.

For a manager that executes critical programs with potentially very long durations, it is not feasible to recommend specific parameter values, and you will need to review each case individually.

4.5.2. Data Purging

Use the Purge Concurrent Request and Manager Data program (FNDCPPUR) to purge request log files, concurrent manager log files, tables, and report output files. Schedule the purge program as appropriate, double-checking the concurrent manager job activities and load on the system.

Every concurrent program has a parameter, 'SHELF_LIFE', which can be used to specify how long the logs and output from the program should be kept. Note that when SHELF_LIFE is defined for a program, that parameter comes into play only when the purge program is run: SHELF_LIFE on its own does not cause log or output files to be purged. In addition, the purge program will not purge log and output files for requests if the associated program's SHELF_LIFE has not expired, regardless of the age or count parameters defined for the purge request.

Optimize log directory access time by reducing the number of inode file entries. Oracle E-Business Suite Release 12.2 introduced a new environment variable, APPLLDM, to specify whether log and out files are stored in a single directory for all Oracle E-Business Suite products, or in one subdirectory per product. Refer to My Oracle Support Knowledge Document 1616827.1, Managing Concurrent Managers Log and Out Directories.
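The Table 2 guidance in Section 4.5.1, together with the AFSPINMAX default described in its footnotes, can be captured in a small helper. This is purely illustrative (it is not an Oracle API, and the duration bands are taken from the table; the band boundary between Slow and Long is an assumption):

```python
# Illustrative helper capturing the Table 2 guidance; not an Oracle API.
def manager_settings(avg_request_seconds, workers):
    """Suggest sleep time and cache size for a concurrent manager."""
    if avg_request_seconds <= 10:              # "Fast" managers
        sleep_time, cache_size = 20, None      # cache size "as needed"
    elif avg_request_seconds <= 30 * 60:       # "Medium" and "Slow" managers
        sleep_time, cache_size = 60, 2 * workers
    else:                                      # "Long" managers (> 2 hours)
        sleep_time, cache_size = 60, 1 * workers
    # The default spin threshold (AFSPINMAX) is cache size x 20.
    afspinmax_default = (cache_size or workers) * 20
    return {"sleep_time": sleep_time,
            "cache_size": cache_size,
            "afspinmax_default": afspinmax_default}

print(manager_settings(5 * 60, 4))  # "Medium" manager: sleep 60 s, cache 8
```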
The cpadmin command line tool can be used to set the APPLLDM environment variable, and can also be used to move existing log and out files to other directories without purging. Refer to Concurrent Processing Command-Line Utility (cpadmin) in Oracle E-Business Suite Setup Guide.

Apply Patch 21786827 to define another environment variable, APPLWDIR. You can set the APPLWDIR variable to an empty local directory that the concurrent managers will use as their current working directory. This can be especially useful if NFS-mounted storage devices are being used.

Ask users to provide selective parameters when submitting concurrent jobs, to avoid the jobs having a needlessly long run time.

If you need to cater for critical short duration requests, create a modest number of specialized concurrent managers dedicated to processing them. This will help ensure that the critical short duration requests are not waiting in the queue for an excessive time. However, avoid creating more specialized managers than really needed, as doing so can reduce overall concurrent processing performance because of polling on queue tables.

4.5.3. Job Scheduling

Taking into account the business criticality and execution time of your concurrent programs, schedule them to a particular window so that the performance of other processes will not be impacted. It is particularly advisable to avoid scheduling resource intensive requests during peak time OLTP windows.

Configure concurrent managers to schedule requests outside peak time. To ensure that a concurrent program will not run during peak time, you need to define both the manager's work shift and its specialization rules. This will ensure that even if a concurrent program is submitted during peak time, the concurrent manager will execute the request outside this period.

Define a workload management strategy based on average job duration and system usage profile.

Truncate the reports.log file in your log directory.
Refer to My Oracle Support Knowledge Document 842850.1, R12 E-Business Suite Concurrent Processing Users Report The Following Failure Message 'Error: emsg:was terminated by signal 25' For All Concurrent Programs Which Access $APPLCSF / $APPLLOG.

Set the profile option 'Concurrent: Active Request Limit' to restrict the number of concurrent requests that can be run simultaneously by each user.

4.5.4. Conflict Resolution Manager

The Conflict Resolution Manager (CRM) checks the incompatibility rules effective for a pending concurrent request against other running requests, and allows the request to execute only when all incompatible requests have completed.

To maximize throughput for jobs that spawn parallel workers (such as jobs for Auto Invoice and Payroll), consider reducing the sleep time of the Conflict Resolution Manager from the default of 60 seconds to 5 or 10 seconds. Be aware that Patch 23021248 is important for CRM performance.

4.5.5. Transaction Managers

Transaction managers exist to process day-to-day transactions. Each transaction manager has its own set of tasks and responsibilities for processing a particular set of records. While conventional concurrent managers let you execute long-running, data-intensive application programs asynchronously, transaction managers support synchronous processing of particular requests from client machines. A request from a client program to run a server-side program synchronously causes a transaction manager to run it immediately, and then return a status to the client program.

Transaction managers are implemented as immediate concurrent programs. At runtime, concurrent processing starts a number of these managers. Rather than polling the concurrent requests table to determine what to do, a transaction manager waits to be signaled by a client program. The execution of the requested transaction program takes place on the server, transparent to the client and with minimal time delay.
At the end of program execution, the client program is notified of the outcome by a completion message and a set of return values. Communication with a transaction manager is automatic. The transaction manager mechanism does not establish an ongoing connection between the client and the transaction manager processes. The mechanism is designed to enable a small pool of server processes to service a large number of clients with a real-time response.

Set the profile 'Concurrent:Wait for Available TM' to 1 second to minimize TM latency. This profile sets the total time to wait for a transaction manager before switching over to the next available transaction manager.

If (and only if) you are using dbms_pipes, set the sleep time on transaction managers to 30 minutes. This avoids over-frequent polling to check for possible shutdown requests. Note that if you set the sleep time this high, you may have to wait up to 30 minutes to shut down or restart the managers.

4.6. Oracle RAC Node Affinity

Distribute the workload to specific Oracle RAC nodes where possible. This will maximize scalability by minimizing the need for internode communication and cache synchronization.

For concurrent requests, run a concurrent program against a specific Oracle RAC instance with PCP/RAC set up. To define node affinity at the program level, refer to My Oracle Support Knowledge Document 1129203.1, How To Run a Concurrent Program Against a Specific RAC Instance with PCP/RAC Setup. This document also describes how you can make use of instance affinity.

For HTML-based applications, enable node affinity either by setting the profile option 'App%Agent' to application tier hosts configured for specific services, or by setting the profile option 'Application Database ID' to a node-specific DBC file name.
For more information, refer to My Oracle Support Knowledge Document 1375686.1, Using Load Balancers with Oracle E-Business Suite Release 12.2, and Document 380489.1, Using Load-Balancers with Oracle E-Business Suite Release 12.0 and 12.1.

For Forms-based applications, you can enable node affinity by setting the profile option 'Database Instance' at the application or responsibility level, which can be tied to the value of the $TWO_TASK environment variable or to a database service.

Section 5: Sizing Oracle E-Business Suite for Mobile Applications

Oracle E-Business Suite offers many mobile applications from different product groups. These mobile applications share EBS system resources such as CPU and memory, and the mobile user load is co-located with that of self-service and Forms application users. This sharing of resources means that the same managed instance sizing guidance applies to the mobile user community: for every additional 150-180 concurrent mobile users, it is advisable to configure an additional managed instance for the additional workload, based on the standard managed instance sizing guidelines. Size database parameters such as PROCESSES, SESSIONS, SGA and PGA to match the additional load requirements.

Section 6: Online Patching Best Practices

Consider the sizing and resource requirements needed to support the online patching capability introduced in Oracle E-Business Suite Release 12.2. For initial sizing guidance, refer to Database and Application Tier Sizing Guidelines in Oracle E-Business Suite Installation Guide: Using Rapid Install. Some specific examples are given in other points of this section.

Ensure the number of online patching processes is set correctly, based on system activity and the number of available CPU cores.

Before you run the adop online patching utility, ensure there is the same number of managed servers, with the same configuration, for both the run and patch file systems.
This is so AutoConfig can successfully complete in both file systems before the adop prepare phase is run. If it cannot complete, the prepare phase will fail. Ensure there is adequate memory to load all the configured managed servers (OA and Forms) for both the run and patch file systems, as adop will start them.

The database will accumulate old database edition objects as each online patching cycle is completed, and system performance may be reduced when too much space is used. As a best practice, consider dropping all old database editions once the number of editions reaches 20. You can drop old editions by running the actualize_all online patching phase, and then performing a full cleanup. For more information on online patching operations and management, refer to Oracle E-Business Suite Maintenance Guide.

Section 7. Sizing Your Oracle E-Business Suite Environment

Sizing an Oracle E-Business Suite environment is a multi-dimensional activity that largely involves database and application server sizing. Specifically, sizing for better performance and scalability involves a process of refinement, where you need to define the key performance indicators and perform representative stress testing. By reviewing key performance indicators (KPIs) and verifying the performance metrics carefully, you can choose the correct size for your production system. When analyzing performance data, focus on the following metrics to determine whether actual usage is lower than instance capacity.

7.1 Considerations for User Load Analysis

Answering the following questions will help you both size your EBS environment correctly and select the optimum deployment topology:

Is your hardware adequately sized to handle the peaks in usage?

When are the periods of peak usage? For example, these may be at period close or during batch data loads.

Is there a pattern of load spikes on the system? If so, is it due to transactional or batch load?
Does your system have surges in demand at month end or year end?

Is your system overloaded at certain times of the day? If so, could some of the load be moved to quieter periods of the day to make the most of your available resources?

What is the number of named concurrent users in the source system, and what are their usage profiles?

What are the response time requirements from the system?

7.2 Sizing a Newly Deployed Instance

For sizing a newly provisioned instance, refer to Database and Application Tier Sizing Guidelines in Oracle E-Business Suite Installation Guide: Using Rapid Install to define the initial state of an instance according to your workload requirements. Once it is provisioned, set the initialization parameters based on the CPU and memory available on the server. You then need to perform a pre-production load test on the provisioned instance, measure the performance, and deduce values for the various database initialization parameters and application server JVM parameters. The key metrics to look for are CPU usage and memory usage. Once you have chosen the right values, you can if desired promote the test instance to be a production instance.

The following steps describe the process for various deployment scenarios. You should review them all carefully when migrating your EBS environment to OCI.

7.3 Sizing an Existing Instance

If you are migrating to OCI from on-premises, your existing environment can provide a wealth of information that can be used to configure the target instance in OCI correctly, while preserving the original performance levels on the migrated environment. This information includes, but is not limited to: the current workload characteristics (peak/average active users on the system); the resource utilization levels for CPU, memory and I/O; and the acceptable performance figures for crucial business processes.
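As a rough illustration of turning source-environment utilization data into a target CPU estimate, the sketch below keeps projected utilization under a chosen headroom ceiling. The function name, the 70% default, and the formula itself are assumptions for illustration, not an Oracle sizing formula; any estimate must be validated by pre-production benchmarking as described below.

```python
import math

def target_ocpus(source_cores, peak_cpu_pct, headroom_pct=70.0):
    """Estimate a target CPU count from observed peak utilization.

    Keeps projected utilization at or below headroom_pct on the target.
    All figures are illustrative and must be validated by load testing.
    """
    busy_cores = source_cores * (peak_cpu_pct / 100.0)
    return math.ceil(busy_cores / (headroom_pct / 100.0))

print(target_ocpus(16, 60.0))  # 16 cores peaking at 60% -> 14 at a 70% ceiling
```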
7.4 When There is No Change in Workload Between Source and Target Environments

Based on the performance data from your source environment, choose the OCI shape that most closely matches the hardware resource profile of the source environment. Set the database initialization parameter values on the target instance to match the values on the source instance, and ensure that the parameter settings are in accordance with My Oracle Support Knowledge Document 396009.1, Database Initialization Parameter Settings for Oracle E-Business Suite.

Define benchmarks that match performance tests run on your on-premises instance, with the goal of achieving similar performance metrics in OCI. Carry out pre-production sizing and performance benchmark tests, keeping in mind the expected number of concurrent users and workload on the target environment. Review the results to ensure the performance figures are acceptable, and make any necessary adjustments to sizing-related initialization parameter values. Once the environment is configured to provide optimal performance, use the validated values to size the production environment.

7.5 Where There is an Increase in the Projected Workload on the Target Environment

Based on the current resource utilization data and workload characteristics of the source environment, extrapolate your hardware requirements and choose an OCI shape that can accommodate the expected increase in workload without impacting performance. Use the scaling factor method based on the projected workload to estimate the target environment sizing, and validate it by pre-production testing.

Carry out pre-production sizing and performance benchmark tests, keeping in mind the expected number of concurrent users and workload on the target environment. Review the results to ensure the performance figures are acceptable, and make any necessary adjustments to sizing-related initialization parameter values.
Once the environment is configured to provide optimal performance, use the validated values to size the production environment.

Section 8. Benchmarking Best Practices

8.1 Introduction

The purpose of this section is to provide a set of best practices for Oracle E-Business Suite pre-production validation benchmarks. Benchmarks are crucial for validating system dynamics in the context of system state, saturation of domain capacity, and the overall system performance signature, as a function of projected workload, transaction payloads and frequency.

The benchmark process flow comprises the following procedures:

1. Transaction Workload Definition and Data Distribution.
2. Environment Setup and Deployment Validation.
3. Workload Simulation and Performance Data Collection.
4. Results Analysis and Exit Criteria Validation.

For all the aforementioned process steps, a valid EBS benchmark should have the following characteristics:

The benchmark should be representative of the target production system including, but not limited to: the deployment profile and sizing characteristics; data distribution and high watermark levels; and projected workload characteristics, which include online and batch transaction payloads, batch and concurrent manager execution frequencies, online user concurrency levels, and online and batch co-location dynamics.

The benchmark should be re-entrant, with the benchmark environment always having a consistent re-entrant state at the start of each cycle of execution. The re-entrant state should be maintained on multiple levels, including data distribution and high watermark levels, transaction bindings and execution variables, and workload state and concurrency levels.

The benchmark should be well-formed in the context of transaction error rate and functional consistency for each benchmark testing cycle.

Proper checks and balances should be in place throughout the life cycle of the benchmark to ensure the above properties are maintained.
8.2 Transaction Workload Definition and Data Distribution

Transaction workload definition and data distribution is the first and most important process step in the benchmark life cycle. Proper gap analysis is critical while defining EBS benchmark workloads, not only to cover important and active transactions in the system, but also to define transaction concurrency levels in the target benchmark workload.

The following are some key guidelines and recommendations for EBS transaction workloads:

  • Identify a representative selection of transactions, including Self-Service, Forms, and Concurrent Manager workloads. Particular attention should be paid to the impact of custom code and transaction extensions.
  • For Self-Service and Forms flows, define projected workloads as a function of transaction types and concurrency levels.
  • For Self-Service and Forms flows, make sure that both query and posting flows/navigation are included, to simulate an active state of the system.
  • For Self-Service and Forms flows, define user pools, user concurrency levels, iteration cycles, iteration sleep times, and user navigation sleep times. It is crucial to set the sleep times at proper levels, to avoid synthetic connection storms and system overload.
  • For Concurrent Manager flows, define projected workloads as a function of concurrent manager request types and execution frequency.
  • Define co-location test cycles for projected workloads. Ideally, three benchmark cycles should be planned: one dedicated cycle for the Self-Service and Forms workload, another cycle for the Concurrent Manager workload, and a combined workload cycle that includes both the Self-Service and Concurrent Manager flows.
  • Ideally, benchmark timings should reflect a warm execution state, so a warmup cycle is recommended: the goal is to bring the system to a warm state after restore, before running formal cycles and collecting performance data.
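The point about iteration sleep times can be made concrete with a small sketch: given a user pool and a projected transaction rate, derive the per-user sleep so the aggregate arrival rate matches the target instead of producing a synthetic connection storm. All names and figures below are assumptions for the example:

```python
# Illustrative sketch: derive a virtual-user iteration sleep time so the
# pool's aggregate transaction rate matches the projected workload.
# Names and numbers are assumptions for this example only.

def iteration_sleep_seconds(users: int,
                            target_tx_per_hour: float,
                            avg_tx_seconds: float) -> float:
    """Each user loops: run one transaction, then sleep. Choose the sleep
    so that users * (3600 / cycle_time) equals target_tx_per_hour."""
    cycle_time = users * 3600.0 / target_tx_per_hour  # seconds per user loop
    sleep = cycle_time - avg_tx_seconds
    if sleep < 0:
        raise ValueError("User pool too small for the target rate")
    return sleep

# Example: 100 users, 6,000 transactions/hour, 5 s average transaction.
print(round(iteration_sleep_seconds(100, 6000, 5.0), 1))  # -> 55.0
```

A sleep of zero or near zero is the warning sign: it means the user pool can only reach the target rate by hammering the system back-to-back, which is exactly the connection-storm pattern the guideline warns against.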
The following are some guidelines and recommendations covering EBS data distribution:

  • Define the initial data state of the system as a function of table row counts, table high watermarks, and correct fixed, dictionary, and application statistics.
  • Define a backup/restore cycle to the initial data state, to be used before each benchmark iteration.

8.3 Environment Setup and Deployment Validation

Pre-production environments should be identical in capacity and system characteristics to the target production environment. It is also important to have the pre-production benchmark system state and the production system state synchronized in terms of patching at the operating system, database, and application tier levels.

The primary goal of any pre-production benchmark is to define the target production system sizing as a function of workload and system dynamics. In order to achieve that goal, multiple benchmark cycles are needed to properly size the environment. These should be based on the combined projected workload, system saturation, and contention levels. During the benchmark cycles for proper sizing of the target system, 70-80% of the system resources (CPU, memory, and I/O) should be the maximum threshold of utilization.

For guidance on the initial sizing phase for the pre-production environment, refer to Database and Application Tier Sizing Guidelines in Oracle E-Business Suite Installation Guide: Using Rapid Install.

8.4 Workload Simulation and Performance Data Collection

Simulation runs should cover all workload buckets, including Self-Service, Forms, and Concurrent Processing, plus co-located or combined workloads, with multiple iterations for each of the target buckets. For each iteration, a predefined set of Key Performance Indicators (KPIs) should be collected, baselined, and compared, in terms of factors including response times and system vitals covering both saturation and contention states.
Care should be taken to monitor the load generator system state as well, as load generator saturation will disrupt the benchmark system dynamics and skew all collected performance KPIs. To validate the load generator state along with the test scripts, a set of dry runs is required in order to collect all the performance KPIs and transaction success rates.

Proper monitoring of system vitals during EBS benchmark cycles is key to the success of the benchmark project. Monitoring should be done at the database operating system level, application tier operating system level, database tier level, and application tier level. The following are some key monitoring resources and associated My Oracle Support knowledge document references:

Overall System Monitoring
  • Enterprise Manager System Monitoring
  • OS - OSWatcher (My Oracle Support Knowledge Document 301137.1)
  • Network Test Utilities Best Practices (My Oracle Support Knowledge Document 556738.1)

Database Monitoring Diagnostics
  • AWR Report (My Oracle Support Knowledge Document 748642.1)
  • ADDM Report (My Oracle Support Knowledge Document 250655.1)
  • Active Session History (ASH)

Application Monitoring and Diagnostics
  • Concurrent Manager timings
  • Load Generator timings
  • Forms Tracing (My Oracle Support Knowledge Document 373548.1)
  • FRD Log (My Oracle Support Knowledge Document 445166.1)
  • Managed instance and JVM logs

8.5 Results Analysis and Exit Criteria Validation

The exit criteria of Oracle E-Business Suite pre-production benchmarks are defined on the following:

  • Acceptable 90th percentile response times for Self-Service and Forms flows.
  • Concurrent Manager request throughput and elapsed times.
  • EBS system dynamics as a function of saturation and contention states.
  • Final EBS system sizing as a function of co-located workload and projected concurrency levels.
  • Patching levels for all EBS artifacts in the deployment topology.
  • Final initialization parameters as verified by the pre-production benchmark.
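The 90th percentile response-time criterion in the exit criteria above can be evaluated with a few lines of code; the sample values and the 3-second threshold below are assumptions for the example, and the nearest-rank method is one of several valid percentile definitions:

```python
# Illustrative sketch: checking a 90th percentile response-time exit
# criterion against sampled response times (values are invented).

import math

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample value such that at
    least pct% of the samples are less than or equal to it."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

response_times = [0.8, 1.1, 0.9, 2.4, 1.0, 1.2, 3.1, 0.7, 1.4, 1.3]
p90 = percentile(response_times, 90)
print(p90)         # -> 2.4
print(p90 <= 3.0)  # -> True: criterion met for an assumed 3 s threshold
```

Percentiles are preferred over averages for this check because a handful of slow outliers, which an average hides, is precisely what online users notice.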
Based on the above exit criteria, pre-production settings are graduated to the production environment with a sizing contingency scaling factor of 10-20%. The purpose of this scaling factor is to compensate for any deviation between the synthetic benchmark workload and the real-life production workload.

Section 9: Additional Best Practices

This section covers various additional points that are useful in specific circumstances.

  • Monitor the database server for any jobs consuming significant resources. Validate the need for such jobs, and eliminate any unnecessary ones to help reduce the server workload.
  • Wherever possible, use bind variables rather than literal values in custom code and ad hoc SQL queries. This will reduce shared pool fragmentation that might otherwise eventually result in an ORA-04031 error. To identify SQL statements that use literals, review the 'SQL ordered by Executions' section of the database AWR report.
  • Use the standard archive and purge programs and functionality provided with Oracle E-Business Suite:
      - Partitioning tables
      - Data compression using Advanced Compression
      - Information Lifecycle Management (ILM)
  • Partitioning selected tables can bring significant benefits in performance and manageability. If a table is partitioned incorrectly, however, it can adversely affect performance. For details of how partitioning can be used effectively with EBS, refer to My Oracle Support Knowledge Document 554539.1, Database Partitioning for Oracle E-Business Suite.
  • For details on different methods for purging and archiving data, refer to My Oracle Support Knowledge Document 752322.1, Reducing the Oracle E-Business Suite Data Footprint.
  • Oracle Advanced Compression (ACO) will save space when data is written to the database, and also has the benefit of reducing the associated disk I/O for read operations.
Refer to My Oracle Support Knowledge Document 2458161.1, Oracle E-Business Suite Release 12.2 with Oracle Database 12c Release 1 Advanced Compression, for details on the performance impact of Advanced Compression.

Managing the high water mark (HWM) and reclaiming space of Oracle E-Business Suite tables is highly recommended. Purge unwanted data from the tables frequently, to avoid full table scans and their associated performance issues. This will also improve transaction response times, as measured in transactions per second. Refer to My Oracle Support Knowledge Document 284692.1, How to Reduce the High Water Mark for Advanced Queueing Objects Using Truncate in Pre-10g Databases.

Section 10: References

10.1 My Oracle Support Knowledge Documents

  • Document 2125596.1, Achieving Optimal Performance with Oracle E-Business Suite
  • Document 69925.1, Pinning Oracle E-Business Suite Objects Into The Shared Pool
  • Document 1121043.1, Collecting Diagnostic Data for Performance Issues in Oracle E-Business Suite
  • Document 1057802.1, Best Practices for Performance for Concurrent Managers in E-Business Suite

10.2 Other Documents

  • Oracle Cloud Infrastructure Classic Performance Tuning Best Practices
  • Migrating to the Oracle Database Cloud with SQL Performance Analyzer
  • Oracle E-Business Suite Maintenance Guide
  • Oracle E-Business Suite Security Guide
  • Oracle E-Business Suite Setup Guide
  • Oracle Workflow Administrator's Guide

Monday, May 3, 2021

AutoConfig and options | adautocfg.sh

 

Execution of Scripts

As well as its instantiation activities, AutoConfig may execute other scripts, depending on the requirements of the specific Oracle E-Business Suite system.

Phases of Operation

As AutoConfig parses the driver files, it carries out a series of actions, grouped into several distinct phases:

  • INSTE8 - Instantiates AutoConfig template files to the AutoConfig configuration files specified in the relevant template driver files.

  • INSTE8_SETUP - Executes setup scripts that carry out activities not involving connection to the database.

  • INSTE8_PRF - Executes setup scripts that update profile options.

  • INSTE8_APPLY - Executes setup scripts that carry out activities involving updates to the database.

  • BINCPY - Copies the file mentioned from the source file to the configuration file, creating parent directories for the latter if necessary. AutoConfig will report an error if the source file cannot be found.

  • BINCPY_IGERR - Copies the file mentioned from the source file to the configuration file, creating parent directories for the latter if necessary. AutoConfig will not report an error if the source file cannot be found.

AutoConfig carries out these actions in the following order:

  1. All INSTE8 and BINCPY actions - Carries out all file instantiations called for during INSTE8, INSTE8_SETUP, INSTE8_PRF and INSTE8_APPLY, and all copying from source files to target configuration files.

  2. INSTE8_SETUP actions - For the files that were instantiated in Step 1, AutoConfig runs all SETUP scripts.

  3. INSTE8_PRF actions - For the files that were instantiated in Step 1, AutoConfig runs all PRF scripts.

  4. INSTE8_APPLY actions - For the files that were instantiated in Step 1, AutoConfig runs all APPLY scripts.

At the end of this process, the required configuration files and profile options have been created for the E-Business Suite installation.
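The two-stage ordering described above (all instantiations and copies first, then the SETUP, PRF, and APPLY scripts in phase order) can be illustrated with a short sketch; the driver-file entries below are invented for this example and do not correspond to real template drivers:

```python
# Illustrative sketch of AutoConfig's execution ordering: all
# instantiation/copy actions run first, then execution scripts run
# grouped by phase (SETUP, then PRF, then APPLY).
# The driver entries below are invented for this example.

PHASE_ORDER = ["INSTE8_SETUP", "INSTE8_PRF", "INSTE8_APPLY"]

driver_actions = [
    ("INSTE8_APPLY", "adconfig_db.sh"),
    ("INSTE8_SETUP", "afcpnode.sh"),
    ("INSTE8",       "appsweb.cfg"),
    ("BINCPY",       "oprocmgr.conf"),
    ("INSTE8_PRF",   "afwebprf.sh"),
]

# Step 1: every action instantiates or copies its file, whatever its phase.
instantiated = [target for _phase, target in driver_actions]

# Steps 2-4: execution scripts then run grouped by phase, in phase order,
# regardless of their order in the driver file.
execution_order = [target
                   for phase in PHASE_ORDER
                   for p, target in driver_actions if p == phase]

print(execution_order)  # -> ['afcpnode.sh', 'afwebprf.sh', 'adconfig_db.sh']
```

Note that adconfig_db.sh runs last even though it appears first in the driver list: phase order, not driver order, determines when a script executes.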

https://docs.oracle.com/cd/E26401_01/doc.122/e22949/T120505T120514.htm#T388398

https://docs.oracle.com/cd/E18727_01/doc.121/e12841/T120505T120514.htm

ORA-04063: package body "APPS.AD_ZD_ADOP" has errors

The adop prepare phase was failing with the following error in a customer test environment.



echo $FILE_EDITION
run

echo $TWO_TASK
test


adop phase=prepare

Enter the APPS password:
Enter the SYSTEM password:
Enter the WLSADMIN password:

Validating credentials.

Initializing.
    Run Edition context  : /u01/apps/fs1/inst/test_erptest/appl/admin/test_erptest.xml
    Patch edition context: /u01/apps/fs2/inst/test_erptest/appl/admin/test_erptest.xml
    Patch file system free space: 89.80 GB

Validating system setup.



    [ERROR]     Failed to execute SQL statement:
 select AD_ZD_ADOP.GET_INVALID_NODES() from dual
    [ERROR]     Error Message:
    [ERROR]     ORA-04063: package body "APPS.AD_ZD_ADOP" has errors (DBD ERROR: OCIStmtExecute)
    [UNEXPECTED]Error determining whether this is a multi-node instance


[STATEMENT] Please run adopscanlog utility, using the command

"adopscanlog -latest=yes"

to get the list of the log files along with snippet of the error message corresponding to each log file.


adop exiting with status = 2 (Fail)

Cause :

The package body APPS.AD_ZD_ADOP was invalid because the adgrants.sql script had not executed correctly, leaving the APPS schema without the execute privilege on SYS.DBMS_METADATA_UTIL that the package body requires.



Solution:

SQL> alter package APPS.AD_ZD_ADOP compile body;

Warning: Package Body altered with compilation errors.

SQL> show error
Errors for PACKAGE BODY APPS.AD_ZD_ADOP:

LINE/COL ERROR
-------- -----------------------------------------------------------------
2686/3   PL/SQL: Statement ignored
2686/7   PLS-00201: identifier 'SYS.DBMS_METADATA_UTIL' must be declared
SQL>
SQL> conn / as sysdba
Connected.

SQL> grant execute on DBMS_METADATA_UTIL to apps;

Grant succeeded.

SQL> conn apps
Enter password:
Connected.

SQL> alter package APPS.AD_ZD_ADOP compile body;

Package body altered.

adop phase=prepare

ADOP cycle completed without any further issues.

Thank you for reading.
