
Wednesday, March 2, 2022

CREATE PASSWORD FILE IN ASM DISK GROUP

 FOR ORACLE 12C ONLY

ASMCMD> pwcreate --dbuniquename {db_unique_name} {file_path} {sys_password}

ASMCMD> pwcreate --dbuniquename PRDPRE +DATA/PWDFILE/pwdPRDPRE oracle

FOR ALL VERSIONS

orapwd file='+DATA/orapwPRODPRE' ENTRIES=10 DBUNIQUENAME='PRODPRE'
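Once the password file has been created, a quick sanity check is to query the standard v$pwfile_users view, which lists the privileged accounts recorded in the password file (assumes you are connected to the target instance):

```sql
-- Accounts present in the password file and their administrative privileges
SELECT username, sysdba, sysoper
FROM   v$pwfile_users;
```

In 12c, `srvctl config database -db PRODPRE` also reports the password file location when it is stored in an ASM disk group.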



https://docs.oracle.com/database/121/OSTMG/GUID-2ACBBB1E-A39D-473E-A9EF-E7BC3872C36E.htm

Thursday, July 22, 2021

Oracle-Provided Diagnostic Tools

Individual tools:
• HANGFG: semi-automates hang and dump file generation
  - HANGFG User Guide (Doc ID 362094.1)
• Procwatcher
  - Procwatcher: Script to Monitor and Examine Oracle DB and Clusterware Processes (Doc ID 459694.1)
  - How To Troubleshoot Database Contention With Procwatcher (Doc ID 1352623.1)
• ORATOP
  - oratop - Utility for Near Real-time Monitoring of Databases, RAC and Single Instance (Doc ID 1500864.1)

Thursday, July 8, 2021

Table or Index Recommendation for reorg

set lines 300 pages 200
col TABLE_NAME for a35
col INDEX_NAME for a30

SELECT UT.TABLE_NAME,
       UI.INDEX_NAME,
       US.BLOCKS AS "TABLE BLOCKS",
       UI.CLUSTERING_FACTOR AS "INDEX CLUSTERING FACTOR",
       UT.NUM_ROWS AS "TABLE ROWS",
       TRUNC((UT.NUM_ROWS/UI.CLUSTERING_FACTOR)/(UT.NUM_ROWS/UT.BLOCKS),2) AS CF
FROM   USER_INDEXES UI, USER_TABLES UT, USER_CONSTRAINTS UC, USER_SEGMENTS US
WHERE  UI.TABLE_NAME = UT.TABLE_NAME
AND    UT.TABLE_NAME = UC.TABLE_NAME
AND    UI.INDEX_NAME = UC.INDEX_NAME
AND    UT.TABLE_NAME = US.SEGMENT_NAME
AND    US.SEGMENT_TYPE = 'TABLE'
AND    UC.CONSTRAINT_TYPE = 'P'
AND    UI.CLUSTERING_FACTOR > 0
AND    UT.BLOCKS > 0
AND    UT.NUM_ROWS > 0
AND    TRUNC((UT.NUM_ROWS/UI.CLUSTERING_FACTOR)/(UT.NUM_ROWS/UT.BLOCKS),2) < 0.75
AND    UT.TABLE_NAME NOT LIKE '%RDF%'
ORDER BY CF;

set lines 300 pages 200
col TABLE_NAME for a15
col INDEX_NAME for a25

SELECT UT.TABLE_NAME,
       UI.INDEX_NAME,
       US.BLOCKS AS "TABLE BLOCKS",
       UI.CLUSTERING_FACTOR AS "INDEX CLUSTERING FACTOR",
       UT.NUM_ROWS AS "TABLE ROWS",
       TRUNC((UT.NUM_ROWS/UI.CLUSTERING_FACTOR)/(UT.NUM_ROWS/UT.BLOCKS),2) CF
FROM   USER_INDEXES UI, USER_TABLES UT, USER_CONSTRAINTS UC, USER_SEGMENTS US
WHERE  UI.TABLE_NAME = UT.TABLE_NAME
AND    UT.TABLE_NAME = UC.TABLE_NAME
AND    UI.INDEX_NAME = UC.INDEX_NAME
AND    UT.TABLE_NAME = US.SEGMENT_NAME
AND    US.SEGMENT_TYPE = 'TABLE'
AND    UC.CONSTRAINT_TYPE = 'P'
AND    UT.TABLE_NAME IN ('SALES_DATA','MDP_MATRIX','ITEMS','T_EP_CTO_DATA','T_EP_CTO_MATRIX','GPS_OPPTY_DATA','GPS_OPPTY_DATES','GE_OPPTY','GE_OPPTY_MATRIX');
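When a table surfaces with a low CF ratio, the usual remedy is to rebuild the table with its rows ordered by the primary key and then rebuild its indexes. A minimal sketch, assuming hypothetical names MY_TABLE / MY_TABLE_PK / PK_COL (a production reorg would normally use DBMS_REDEFINITION to avoid downtime and must recreate grants, constraints, and triggers):

```sql
-- Copy rows sorted by the index key so table block order matches index order,
-- which lowers the clustering factor (names are placeholders)
CREATE TABLE my_table_reorg AS
  SELECT * FROM my_table ORDER BY pk_col;

-- After swapping the tables, rebuild dependent indexes
ALTER INDEX my_table_pk REBUILD;

-- Refresh optimizer statistics so the new clustering factor is visible
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_TABLE');
```

Re-running the recommendation query afterwards should show the CF ratio moving back toward 1.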

Performance Tuning Basics 15 : AWR Report Analysis

https://expertoracle.com/2018/02/06/performance-tuning-basics-15-awr-report-analysis/
PUBLISHED FEBRUARY 6, 2018 by BRIJESH GOGIA

Oracle's Automatic Workload Repository (AWR) collects, processes, and maintains performance statistics for problem detection and self-tuning purposes. The AWR report is large, and it can take years of experience to understand all of its aspects. This post explains some of the most frequently used sections, their significance, and some practical tips. It is not comprehensive; the goal is to give junior DBAs an overview of a few key sections as a primer and to encourage them to build further knowledge in related fields.

Some high-level tips regarding AWR:

1. Collect multiple AWR reports: it is useful to have two reports, one for a good period and one for the period of poor performance. Alternatively, create three reports (before/during/after the problem window) and compare them.
2. Stick to a particular time: identify the specific window when the database was slow, so you can choose a shorter timeframe and get a more precise report.
3. Split a large AWR report into smaller reports: instead of one report covering 3 hours, take three one-hour reports. This helps isolate the problem.
4. For RAC, take each instance's individual report: in a RAC environment, generate the report separately for each instance to see whether the instances are balanced the way they should be.
5. Use ASH as well: use AWR to identify the troublesome areas, then use ASH to confirm them.
6. Increase the retention period: on instances with frequent performance issues, increase the retention time so you have historical data to compare against.

TIME UNITS USED IN VARIOUS SECTIONS OF AWR REPORTS
• s  = second
• cs = centisecond (1/100 of a second)
• ms = millisecond (1/1,000 of a second)
• us = microsecond (1/1,000,000 of a second)

Top Header

SIGNIFICANCE OF THIS SECTION: Contains information about the database and environment, along with the snapshot IDs and times. The important thing to verify is that the configuration (CPU, memory) has not changed during the period of degraded performance.

DB TIME: Time spent in the database during the elapsed time, i.e. the sum of the time taken by all sessions during the 'Elapsed' time. DB Time = CPU time + non-idle wait time. Note: it does not include background processes. DB TIME > Elapsed Time means sessions were active in the database concurrently. You can find the average active sessions during the AWR window as DB TIME / ELAPSED => 1964.97 / 899.99 = 2.18, so the database load (average active sessions) is 2.18 (the same idea as CPU load on UNIX). It may mean ~2 users were active for the whole elapsed time, or 4 users for half of it, or 8 users for a quarter of it, or 64 users for 1/32 of it each. A higher DB Time means database activity/sessions were high during the AWR window. The AWR report reviewed below is based on this DB Time: for every minute of elapsed time, about 2.2 minutes of work were done in the database.

ELAPSED TIME: The duration covered by this AWR report. The elapsed time should contain the issue duration; take manual snapshots if required.

CPUs: This is the thread count, not the "actual" physical CPU count.
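Manual snapshots and the report itself can be produced from SQL*Plus with the standard DBMS_WORKLOAD_REPOSITORY package; the DBID and snapshot IDs below are placeholders you would read from DBA_HIST_SNAPSHOT:

```sql
-- Take a manual snapshot at the start and end of the problem window
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Find the snapshot IDs bracketing the problem window
SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;

-- Generate the text report between two snapshots
-- (arguments: dbid, instance_number, begin_snap, end_snap)
SELECT output
FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(1234567890, 1, 100, 101));
```

The interactive script $ORACLE_HOME/rdbms/admin/awrrpt.sql wraps the same calls and also offers HTML output.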
STARTUP TIME: Database startup time.

RAC: If you have more than one node and do not know which node the issue is on, take AWR reports from all nodes.

Load Profile

SIGNIFICANCE OF THIS SECTION: In the load profile (average active sessions, DB CPU, logical and physical reads, user calls, executions, parses, hard parses, logons, rollbacks, transactions), check whether the numbers are consistent with each other and with the general database profile (OLTP/DWH/mixed). Pay most attention to physical reads, physical writes, the hard-parse-to-parse ratio, and the executes-to-transactions ratio. The ratio of hard parses to parses tells you how often SQL is being fully parsed; full parsing of SQL statements has a negative effect on performance. High hard parse ratios (> 2-3 percent) indicate probable bind variable issues or perhaps versioning problems. Rows per sort can also be reviewed here to see whether large sorts are occurring. This section can also help in load testing for application releases: you can compare it between the baseline and the high-load situation.

Redo Size (Bytes): The main sources of redo are, in roughly descending order: INSERT, UPDATE, and DELETE. Not a very scary number in our report. High redo figures mean that either lots of new data is being saved into the database, or existing data is undergoing lots of changes. What do you do if redo generation is too high and there is no business reason for it? Not much directly, since there is no "SQL ordered by redo" section in the AWR report. Keep an eye open for suspicious DML activity: any unusual statements? Usual statements processed more often than usual, or producing more rows per execution than usual? Also take a good look at the segment statistics sections (segments by physical writes, segments by DB block changes, etc.) for clues.

DB CPU: The amount of CPU time spent on user calls.
Like DB Time, it does not include background processes. The value is in microseconds. We have 8 cores, so we can potentially use 8 seconds of CPU time per second. Here DB CPU (s) is 1.9 per second, i.e. the system is using 1.9 of the potential 8 seconds/second it could use: we are not CPU bound.

LOGICAL READS: Logical reads = consistent gets + db block gets. Oracle first checks whether the data is available in the buffer cache (SGA); if it is, the logical read count increases by 1. More precisely: if Oracle gets the data in a block consistent with a given point in time, the "consistent gets" counter increases by 1; if the data is found in current mode, i.e. the most up-to-date copy of the block as it is right now, the "db block gets" counter increases instead. These two values can be observed in the 'Instance Activity Stats' section. Logical and physical reads combined measure how much I/O (physical and logical) the database is performing. If this is high, go to the "SQL ordered by Logical Reads" section, which may point to the SQL doing the most logical reads.

USER QUERIES: Number of user queries generated.

PARSES: The total of all parses, hard and soft.

HARD PARSES: Parses requiring a completely new parse of the SQL statement. These consume both latches and shared pool area. How much hard parsing is acceptable? It depends on many things: the number of CPUs, the number of executions, how sensitive the plans are to SQL parameters, etc. As a rule of thumb, anything below 1 hard parse per second is probably okay, and anything above 100 per second suggests a problem (if the database has a very large number of CPUs, say above 100, those numbers should be scaled up accordingly).
It also helps to look at hard parses as a percentage of executions (especially if you are in the grey zone). If you suspect excessive parsing is hurting your database's performance: 1) check the "Time Model Statistics" section (hard parse elapsed time, parse time elapsed, etc.); 2) look for signs of library cache contention in the top-5 events; 3) check whether CPU is an issue.

Soft Parses: Not listed directly; derived by subtracting hard parses from total parses. A soft parse reuses a previous hard parse, hence it consumes far fewer resources.

Physical Reads: If Oracle does not find the data in the buffer cache, it reads the block from disk and increases the physical read count by 1. A buffer get is less expensive than a physical read, because the database has to work harder (and longer) to get the data from disk: essentially the time it would have taken from the buffer cache plus the time actually taken to read the physical block. If this is high, go to the "SQL ordered by Physical Reads" section, which may point to the SQL doing the most physical reads.

Executes (SQL): If executes per second looks enormous, it is a red flag. This number, combined with high CPU usage, can be enough to suspect context switching as the primary culprit: for example, a SQL statement containing a PL/SQL function which itself executes a SQL statement hundreds of thousands of times per function call. A high value also suggests that most of the database load falls on SQL statements inside PL/SQL routines.

User Calls: The number of calls from a user process into the database: "parse", "fetch", "execute", "close". This is an extremely useful piece of information, because it sets the scale for other statistics (such as commits, hard parses, etc.). In particular, when the database is executing many times per user call, this could indicate excessive context switching (e.g. a PL/SQL function in a SQL statement called too often because of a bad plan).
In such cases, looking into "SQL ordered by Executions" is the logical next step.

Logons: Means exactly what it says: the number of logons. Establishing a new database connection is expensive (and even more expensive with auditing or logon triggers). "Logon storms" are known to cause very serious performance problems. If you suspect a high logon rate is degrading performance, check "connection management call elapsed time" in "Time Model Statistics".

Sorts: Sort operations consume resources, and expensive sorts may make your SQL fail by running out of TEMP space. So obviously, the less you sort the better (and when you do sort, sort in memory). That said, sort statistics are rarely the first place a problem shows up: if expensive sorts are hurting your SQL's performance, you will usually notice it elsewhere first.

DB Time: The average number of active sessions is simply DB time per second.

Block Changes: Number of blocks modified during the sample interval.

Instance Efficiency Percentages

SIGNIFICANCE OF THIS SECTION: Rule of thumb: always minimize the number of hard parses; this reduction minimizes the CPU overhead spent on costly parse work. Every ratio here should be close to 100%.

In Memory Sort %: The percentage of sort operations done in memory rather than on disk (temporary tablespace). In Memory Sort being low (in the high 90s or lower) indicates PGA_AGGREGATE_TARGET or SORT_AREA_SIZE issues.

Soft Parse %: The percentage of parses that found the SQL already in the shared pool, i.e. how often sessions issued a SQL statement that could reuse an existing version in the shared pool. Soft parsing being low indicates bind variable and versioning issues. With 99.25% soft parse, about 0.75% (100 minus soft parse %) of parses were hard. A low hard parse rate is good for us.
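The parse mix can also be checked live, outside an AWR report, from the cumulative counters in v$sysstat ('parse count (total)' and 'parse count (hard)' are standard statistic names; values accumulate since instance startup):

```sql
-- Cumulative parse counters since instance startup
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('parse count (total)', 'parse count (hard)');

-- Hard parses as a percentage of all parses
SELECT ROUND(100 * hard.value / NULLIF(total.value, 0), 2) AS hard_parse_pct
FROM   v$sysstat total, v$sysstat hard
WHERE  total.name = 'parse count (total)'
AND    hard.name  = 'parse count (hard)';
```

A rising hard_parse_pct between two runs over the problem window corroborates a bind variable or cursor sharing issue.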
% Non-Parse CPU: Oracle should use CPU mostly for statement execution, not for parsing. A value near 100% means most CPU resources were used for operations other than parsing, which is good for database health. Most of our statements were already parsed, so we were not doing a lot of re-parsing; re-parsing is CPU intensive and should be avoided.

Execute to Parse %: Shows how often parsed SQL statements are reused without re-parsing. The way this ratio is computed, it will be near 100 percent when the application executes a given SQL statement many times over but has parsed it only once. If the number of parse calls is near the number of execute calls, the ratio trends towards zero; if executes increase while parse calls remain the same, it trends up. When this number is low, parsing is consuming CPU and shared pool latching.

Parse CPU to Parse Elapsed %: The ratio of CPU time to elapsed time spent parsing SQL statements. A low value means there could be a parsing problem: look at bind variable issues or shared pool sizing issues. A low value also suggests a bottleneck related to parsing; start by reviewing library cache contention and contention on shared pool latches. You may need to increase the shared pool.

Buffer Hit %: Measures how many times a required block was found in memory rather than having to execute an expensive read operation on disk to get the block.

Buffer Nowait %: The percentage of times data buffers were accessed directly without any wait time. This ratio relates to requests a server process makes for a specific buffer: it is the percentage of those requests in which the requested buffer was immediately available. All buffer types are included in this statistic. If the ratio is low, check the Buffer Wait Statistics section of the report for more detail on which type of block is being contended for. Most likely, additional RAM will be required.
Library Hit %: The percentage of times SQL and PL/SQL were found in the shared pool. Library hit % is great when it is near 100%; if it were under 95% we would investigate the size of the shared pool. If this ratio is low, possible causes and remedies include:
• Increase the SHARED_POOL_SIZE init parameter.
• CURSOR_SHARING may need to be set to FORCE.
• SHARED_POOL_RESERVED_SIZE may be too small.
• Inefficient sharing of SQL, PL/SQL, or Java code.
• Insufficient use of bind variables.

Latch Hit %: The percentage of latch acquisitions that did not have to wait. If Latch Hit % is < 99%, you may have a latch problem. Tune latches to reduce cache contention.

Redo NoWait %: Shows whether the redo log buffer has sufficient size.

Top 10 Foreground Wait Events by Total Wait Time

SIGNIFICANCE OF THIS SECTION: This section is critical because it shows the database events that might constitute the bottleneck for the system. First check the wait class: User I/O, System I/O, Others, etc. may be fine, but a wait class of "Concurrency" can indicate a serious problem. Next look at Total Wait Time (sec), which shows how long the DB waited in this class, and then at Wait Avg (ms). If Total Wait Time (sec) is high but Wait Avg (ms) is low, you can usually ignore it; if both are high, or Wait Avg (ms) is high, investigate further. Note that there could be significant waits that are not listed here, so check the Foreground Wait Events (Wait Event Statistics) section for any other time-consuming wait events. For the largest waits, look at the Wait Event Histogram to identify the distribution of waits.

DB CPU: Time running on CPU (time waiting in the run queue is not included). Here 84.8% is the % DB Time for this event, which is really high! DB Time was 1965 minutes and DB CPU is 100,000 s / 60 = 1667 minutes; 1667 / 1965 = 84.8%, as shown in the table.
We can derive:
1) DB CPU load = total DB CPU / AWR elapsed time = 100,000 / (900 × 60) = 1.85
2) DB CPU utilization % = DB CPU load / number of cores = (1.85 / 8) × 100 = 23% of host cores
IMPORTANT: your server may have other database instances sharing the CPU resources, so take those into account too. Also, this does not mean the server CPU is 84% utilized!

Sum of % DB Time: The sum should be approximately 100%. If it is far below 100%, it may mean the listed wait events were irrelevant, or the server is overloaded.

enq: TX - row lock contention: Time waited for locked rows. This value is currently only 0.2% of total DB time, so we do not have to worry much about it. If it were higher, say 10%, we would have to find the root cause: go to "Segments by Row Lock Waits" to see which tables are being locked, then find which SQL_IDs use them.

DB FILE SEQUENTIAL READ: Single-block I/O. A sequential read is an index read followed by a table read: the index lookup tells Oracle exactly which block to go to. The average I/O call here is 2 ms, which is not very high. If the wait average were very high, say 100 ms or 200 ms, it would mean your disks are slow. Ask: are your SQLs returning too many rows, is the I/O response poor on the server, is the DB not sized to cache enough result sets? Then look at the "File IO Stats" section of the AWR report. This event indicates that index scans are happening while reading data from tables. A high number of these events may be caused by unselective indexes, i.e. the optimizer is not selecting the best index from the set available; this results in extra I/O activity and contributes to delays in SQL execution. That said, a high number is also possible in a properly tuned application with high transaction activity.
• If index range scans are involved, more blocks than necessary may be visited if the index is unselective. By forcing or enabling the use of a more selective index, we can access the same table data while visiting fewer index blocks (and doing fewer physical I/Os).
• If indexes are fragmented, we have to visit more blocks because there is less index data per block. In this case, rebuilding the index will compact its contents into fewer blocks.
• If the index in use has a large clustering factor, more table data blocks have to be visited to get the rows in each index block. By rebuilding the table with its rows sorted by the particular index columns, we can reduce the clustering factor and hence the number of table data blocks visited per index block.

LOG FILE SYNC: Here the Wait Avg (ms) is 6, which is not an alarming number; above 20 ms is not considered good. Also go to the "Instance Activity Stats" section, see how many commits actually happened, and then check what percentage of commits had to wait. Remember that short transactions with frequent commits are a property of OLTP applications.

DB FILE SCATTERED READ: Caused by full table scans, possibly because of insufficient indexes or unavailability of up-to-date statistics. To avoid this event, identify the tables on which FTS is happening and create proper indexes so that Oracle does index scans instead of FTS; the index scan helps reduce the number of I/O operations. To get an idea of which tables are being fully scanned, refer to "Segment Statistics" -> "Segments by Physical Reads" in the AWR report, which lists both tables and indexes on which physical reads are happening. Note that physical reads do not necessarily mean FTS, only the possibility of it.

Concurrency wait class: A high Concurrency wait class is not good and needs to be analyzed.
direct path read temp / direct path write temp: These wait events show temp file activity (sorts, hashes, global temporary tables, bitmaps). Check the PGA parameters or the sort area / hash area parameters; you might want to increase them.

The Wait Class column helps classify whether the issue is related to the application or the infrastructure. Wait events are broadly classified into wait classes: Administrative, Application, Concurrency, User I/O, System I/O, Cluster, Commit, Configuration, Idle, Network.

Buffer Busy Waits: Indicates that a particular block is being used by more than one process at the same time. While the first process is reading the block, the other processes wait, because the block is in unshared mode. A typical scenario: a batch process continuously polls the database by executing a particular SQL repeatedly, with more than one parallel instance of the process running. All instances of the process try to access the same memory blocks because they execute the same SQL.

enq: TX - row lock contention: Oracle maintains data consistency with the help of a locking mechanism. When a particular row is being modified (UPDATE, DELETE, or INSERT), Oracle tries to acquire a lock on that row. Only when the process has acquired the lock can it modify the row; otherwise it waits for the lock, which triggers this event. The lock is released when the holding process issues a COMMIT; then the waiting processes can acquire the lock on the row and perform their DML.

enq: UL - contention: Occurs when the application explicitly takes user locks, e.g. via the DBMS_LOCK package.

enq: TM - contention: Usually happens due to a missing index on a foreign key column of a table that is part of a DML operation.
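A common way to hunt for the unindexed foreign keys behind enq: TM waits is to compare constraint columns against index columns. A simplified sketch, run as the schema owner, checking only the leading FK column:

```sql
-- Foreign key constraints whose leading column has no index:
-- candidate causes of enq: TM - contention during parent-table DML
SELECT c.table_name, c.constraint_name, cc.column_name
FROM   user_constraints  c
JOIN   user_cons_columns cc
       ON cc.constraint_name = c.constraint_name AND cc.position = 1
WHERE  c.constraint_type = 'R'          -- referential (foreign key) constraints
AND NOT EXISTS (
        SELECT 1
        FROM   user_ind_columns ic
        WHERE  ic.table_name      = c.table_name
        AND    ic.column_name     = cc.column_name
        AND    ic.column_position = 1);
```

Multi-column foreign keys need a slightly more elaborate comparison, but this version catches the common single-column case.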
Host CPU

SIGNIFICANCE OF THIS SECTION: A high level of DB CPU usage in the Top N Foreground Events (or Instance CPU: %Busy CPU) does not necessarily mean that CPU is a bottleneck. In this example, DB CPU is the highest consuming category in the "Top 10 Foreground Events". Look at the Host CPU and Instance CPU sections: the key values are "%Idle" in the "Host CPU" section and "%Total CPU" in the "Instance CPU" section. If "%Idle" is low and "%Total CPU" is high, the instance could have a bottleneck in CPU (be CPU constrained). Otherwise, the high DB CPU usage just means that the database spends a lot of time on CPU (processing) compared to I/O and other events. In either case (CPU bottleneck or not) there could be individual expensive SQLs with high CPU time, which could indicate suboptimal execution plans, especially if accompanied by high buffer gets. In our case %Idle is high (74%) and %Total CPU is just 7.45, so CPU is not a bottleneck in this example.

CPUs: These are actually threads. Here we have 8 cores and 8 threads per core, so CPUs = number of cores × threads per core = 8 × 8 = 64.

Load Average: Compare the load average with the core count. Ideally the load average should be less than the number of cores, although this may not always hold (and may not be an issue when it doesn't).

%Idle: Can be misleading: sometimes %Idle is 50% yet the server is starving for CPU. 50% can mean all your cores are busy; you may have free threads (CPUs), but you cannot run two processes concurrently on the same core.
All percentages in this report are calculated based on CPUs (which are actually threads).

Cores: Here we have an 8-core system, so in a 60-minute hour we have 60 × 8 = 480 CPU minutes, and with a total AWR duration of 8 hours, 480 × 8 = 3840 CPU minutes in total.

Instance CPU

SIGNIFICANCE OF THIS SECTION: Same considerations as the Host CPU section above: read "%Total CPU" here together with "%Idle" in the Host CPU section to decide whether the instance is CPU constrained.

Cache Sizes

SIGNIFICANCE OF THIS SECTION: From Oracle 10g onwards, the database server does Automatic Memory Management for the PGA and SGA components. Based on load, the server keeps allocating and deallocating memory among the different components of the SGA and PGA. For this reason, we can observe different sizes for the Buffer Cache and Shared Pool at the beginning and end of the AWR snapshot period.

Shared Pool Statistics

SIGNIFICANCE OF THIS SECTION:

Memory Usage %: Shared pool usage. If your usage is low (< 85 percent), your shared pool is oversized.
If Memory Usage % is too large, e.g. 90%, it could mean your shared pool is too small.

% SQL with executions > 1: The percentage of SQLs executed more than once. This should be very near 100. If reuse is low (< 60-70 percent), you may have bind variable or versioning issues. Ideally all the percentages in this area of the report should be as close to 100 as possible.

Memory for SQL w/exec > 1: Of the memory space allocated to cursors, the percentage used by cursors executed more than once.

Time Model Statistics

SIGNIFICANCE OF THIS SECTION: The important statistic here is DB Time, which represents the total time spent in database calls. It is calculated by aggregating the CPU time and wait time of all sessions not waiting on an idle event (non-idle user sessions). Since this timing is cumulative across all non-idle sessions, it can exceed the actual wall clock time.

SQL EXECUTE ELAPSED TIME: Time spent executing SQL statements. Out of the total DB Time of 117,898.47 seconds, 92.32% (108,843.51 seconds) was spent executing SQL queries, so our attention will be on finding which SQLs took so much DB Time.

DB CPU: Time spent on CPU by foreground user processes; it does not include time waiting for CPU. DB Time and DB CPU define two important timescales: wait times should be measured against DB Time, while CPU consumption during a certain activity (e.g. CPU time spent parsing) should be measured against DB CPU. The % shown for DB CPU may not be the only number to focus on. Also compute: DB CPU usage (% of CPU power available) = CPU time / NUM_CPUS / elapsed time, where NUM_CPUS is found in the Operating System Statistics section. Of course, if there are other major CPU consumers on the system, the formula must be adjusted accordingly.
To check that, look at OS CPU usage statistics directly in the OS.

Parse Time Elapsed: Represents time spent on syntax and semantic checks.

Hard Parse Elapsed Time: Represents time spent on syntax and semantic checks PLUS time spent optimizing the SQL and generating the execution plan.

% DB Time: In the time model statistics hierarchy, a child statistic may be counted under more than one parent, which is why the percentages can sum to more than 100. Soft parse time can be derived by subtracting hard parse time from total parse time.

Foreground Wait Class

SIGNIFICANCE OF THIS SECTION: This section is of less use. A wait can have multiple possible causes (in different classes) depending on the context, and there are normally only a handful of time-consuming waits, which can be analyzed and investigated separately. There are over 800 distinct wait events; Oracle groups them into 12 wait classes, further divided into two categories, Administrative and Application wait classes. These classes give overall information about whether the waits are happening for application or system events.

User I/O: High User I/O means that proper indexes are not being used from the pool of available indexes, or that FTS is happening on big tables with millions of rows.

Foreground Wait Events

SIGNIFICANCE OF THIS SECTION: The idle events are mostly listed at the end and should not get much focus. This section is useful because there could be time-consuming wait events that do not appear in the "Top N Timed Foreground Events". For the larger waits, look at the Wait Event Histogram to identify the distribution of waits. Are they closely clustered around an average value, or is there a wide variance? Are there a large number of smaller waits or a few larger waits?
SQL*Net message from client: An idle wait event. We can estimate the average number of inactive sessions from it: inactive sessions = total wait time / (AWR time × 60) = 12,679,570 / (900 × 60) = 235 average inactive sessions. This does not mean user sessions as such, but rather the number of such connections, e.g. from an application server connection pool.

direct path read/write temp: Shows excessive sorting/hashing/global temporary table/bitmap activity going to your temporary tablespace. Review the PGA_AGGREGATE_TARGET setting; even if it looks big enough, many small sorts spilling to disk can mean your user load is over-utilizing it.

SQL*Net message to client: Almost always indicates network contention.

SQL*Net more data from client: If this is very low, it indicates the Oracle Net session data unit (SDU) size is likely set correctly.

db file sequential read: Usually indicates memory starvation; look at the db cache analysis and for buffer busy waits along with cache latch issues.

db file scattered read: Usually indicates excessive full table scans; look at the AWR segment statistics for tables that are fully scanned.

log file sync: Log-file-related waits: look at excessive log switches, excessive commits, or a slow I/O subsystem.

Wait Event Histogram

SIGNIFICANCE OF THIS SECTION: Used to determine the distribution of wait times. These days less than 5 ms is expected and more than 10 ms is considered poor. An analysis of the histogram can indicate whether a high average time is due to a few individual long waits.

DB FILE SEQUENTIAL READ: This event will usually have the highest number of waits in the histogram. If roughly 50% of waits are under 1 ms and another 30% are under 2 ms, our disks are working well: wait time is low for most sessions going to the database.
We simply do not want a high percentage (and a high number) of waits above 8 ms. If DB FILE SEQUENTIAL READ turns out to be the key wait event, the next steps are to find (a) which segment is the bottleneck (go to the "Segments by Physical Reads" section) and (b) which SQL query uses that segment.

SQL ordered by Elapsed Time
SIGNIFICANCE OF THIS SECTION: This can be used to identify the long-running SQLs that could be responsible for a performance issue. It gives useful information about the CPU time, the number of executions, and the (SQL) Module, and the top SQLs can be matched to long-running or slow processes in the application. In this report, look for a query with few executions but a high Elapsed time per Exec (s); such a query is a candidate for troubleshooting or optimization. In the report above, the first query has the maximum elapsed time but only 2 executions, so it should be investigated.

NOTE 1: The elapsed time can indicate whether a SQL is multithreaded (either parallel DML/SQL or multiple workers). In that case the elapsed time can be several times the AWR duration (or the observed clock time of the process/SQL), because the elapsed time for multithreaded SQL is the total across all workers or parallel slaves.

NOTE 2: The "SQL Ordered" sections often contain the PL/SQL call that contains the SQLs. In this case the procedure WF_ENGINE (via procedures) ultimately calls the SQL b6mcn03jvfg41. Similarly, the first line here is a package (BEGIN XXINV7566…), and inside that package runs the SQL on line 2 (insert into XXINV7566_IQR…).

PARAMETER / DESCRIPTION / ANALYSIS
Elapsed per Exec (s): Elapsed time in seconds per execution of the SQL.
Captured SQL accounts for 79.1% of total DB Time: Shows what percentage of SQL activity this AWR report was able to capture. Remember that AWR reports show the SQLs that were in the shared pool at the end of the AWR interval.
This number should be high, which means we captured the SQLs that consumed the DB time. If it is low, try generating the AWR for a shorter snapshot interval so that the SQLs consuming DB time are captured.
Executions: Total number of executions of the SQL between the two snapshots. An important point: an execution count of 0 does not necessarily mean the query is not executing; it can mean the query was still running when the AWR report was taken, so its completion was not covered by the report.
SQL Module: The module executing the SQL; the process name at the OS level is displayed as the SQL Module name. If the module name starts with any of the names below, do not consider those SQLs for tuning, as they are Oracle-internal: DBMS, sqlplusw, TOAD, rman, SQL, Enterprise Manager, ORACLE, MMON_SLAVE, emagent, etc. In the list, XXIN1768 has two SQL IDs. SQL #1 is PL/SQL wrapper code and took around 53k seconds; SQL #2 took 51k seconds and appears to be called from SQL #1, as their module names are the same. Since the SQL #2 insert statement took almost all of the time, we will focus on that query for tuning.
Elapsed Time: The sum of all individual execution times for the sql_id. If multiple sessions execute the same SQL, the elapsed time can be greater than the period between the two snap_ids.

SQL ordered by CPU Time
SIGNIFICANCE OF THIS SECTION: The most useful sections are SQL ordered by Elapsed Time, CPU Time, Gets, and Reads. All of them can help identify whether a particular SQL from a particular module was running during the AWR report period. In most cases this section does not reveal much more than "SQL Ordered by Elapsed Time"; however, because it sorts by CPU, it can surface SQLs that are not in the previous section.
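Between AWR snapshots, a similar ranking can be pulled live from the shared pool. This is a sketch (assuming a session with SELECT privileges on the V$ views); V$SQL times are in microseconds:

```sql
-- Rank cached statements by total elapsed time, with CPU and per-execution
-- figures, much like "SQL ordered by Elapsed Time" / "by CPU Time".
SELECT *
FROM  (SELECT sql_id,
              module,
              executions,
              ROUND(elapsed_time / 1e6, 1)                         AS elapsed_s,
              ROUND(cpu_time / 1e6, 1)                             AS cpu_s,
              ROUND(elapsed_time / 1e6 / NULLIF(executions, 0), 1) AS ela_per_exec_s
       FROM   v$sql
       ORDER  BY elapsed_time DESC)
WHERE  ROWNUM <= 10;
```

Unlike AWR, V$SQL shows cumulative figures since each cursor was loaded, so it complements rather than replaces the snapshot-delta view in the report.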
PARAMETER / DESCRIPTION / ANALYSIS
The top record in this table: The first and second entries are part of the same transaction; the second SQL is the inner part of the first PL/SQL. It accounts for a huge percentage of DB CPU, and remember that DB CPU was the top event in our AWR.

SQL ordered by Gets
SIGNIFICANCE OF THIS SECTION: The most useful sections are SQL ordered by Elapsed Time, CPU Time, Gets, and Reads. All of them can help identify whether a particular SQL from a particular module was running during the AWR report period. This section covers logical reads from the buffer cache. When "CPU Other" is a significant component of total response time, it is likely that the time is being spent retrieving and manipulating blocks and buffers (block accesses are also known as buffer gets and logical I/Os). The next step is then to come to this section to find the SQL statements that access the most blocks, because these are likely responsible for the majority of that time.

This section tells you HOW MANY BLOCKS WERE READ, while the "SQL Ordered by Executions" section tells you how many rows were fetched; often both sections contain the same SQL_ID. If many blocks are read here but few rows are fetched there, something is wrong with the query: why does it read so many blocks to return so few rows? It may have a bad execution plan. A high number of buffer gets is one of the main indicators of a SQL with a suboptimal execution plan. For example, the insert statement in row 4 (related to the PL/SQL in row 3) does a very high number of buffer gets per execution. BUT bear in mind that the SQL could have a good execution plan and simply be doing a lot of work, so take into account the data volume (and the parameters being passed to the query). You can easily see execution plans by running @awrsqrpt.sql and passing the offending SQL_ID as a parameter.
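The awrsqrpt.sql script mentioned above ships with the database, so running it looks like this (from SQL*Plus as a user with the DBA role; "?" expands to ORACLE_HOME):

```sql
-- Generates an AWR SQL report for one SQL_ID across a snapshot range.
-- The script prompts for the report format (text/html), the begin and end
-- snapshot IDs, and the SQL_ID of the offending statement.
@?/rdbms/admin/awrsqrpt.sql
```

The resulting report shows all execution plans recorded for that SQL_ID in the chosen snapshot range, which makes plan changes easy to spot.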
You can refer to the "SQL Ordered by Executions" section after reading this one.

PARAMETER / DESCRIPTION / ANALYSIS
Gets per Exec: How many blocks were read per execution. The insert statement in the XXINV1738 module has a Gets per Exec that is too high; analyze that SQL with some additional output such as SQLT and SQLHC.

SQL ordered by Reads
SIGNIFICANCE OF THIS SECTION: This section reports the contents of the SQL area ordered by the number of reads from the data files and can be used to identify SQL causing I/O bottlenecks, which consume the following resources:
• CPU time needed to fetch unnecessary data.
• File I/O resources to fetch unnecessary data.
• Buffer resources to hold unnecessary data.
• Additional CPU time to process the query once the data is retrieved into the buffer.
% Total can be used to evaluate the impact of each statement.

"Wait Time" is the sum of time waited on non-idle wait events. These include I/O waits for reading blocks from disk, as measured by the wait events 'db file sequential read' for single-block reads and 'db file scattered read' for multi-block reads. When such wait events are significant components of response time, the next step is to find the SQL statements that read the most blocks from disk; that is when we refer to this section.

This section covers physical reads from disk. If the physical I/O waits (e.g. db file sequential read, db file scattered read, direct path read) are relatively high, this section can indicate which SQLs are responsible. It can help identify the SQLs causing high physical I/O and may indicate suboptimal execution plans, particularly if the execution plan contains full table scans or large index range scans (where more selective index scans are preferable).

PARAMETER / DESCRIPTION / ANALYSIS
Captured SQL accounts for 76% of total: Our goal is to have this percentage as high as possible. Breaking this AWR down into a smaller interval will probably increase it.
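A live equivalent of the "SQL ordered by Reads" ranking can be pulled from V$SQL. This is a sketch (assuming SELECT privileges on the V$ views); DISK_READS counts blocks read from disk since the cursor was loaded:

```sql
-- Rank cached statements by physical reads, with reads per execution,
-- mirroring "SQL ordered by Reads".
SELECT *
FROM  (SELECT sql_id,
              executions,
              disk_reads,
              ROUND(disk_reads / NULLIF(executions, 0)) AS reads_per_exec
       FROM   v$sql
       WHERE  disk_reads > 0
       ORDER  BY disk_reads DESC)
WHERE  ROWNUM <= 10;
```

As discussed above, judge by reads per execution rather than total reads alone, since a frequently run query can top the total while being cheap per call.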
The top record in this table: We saw in the "Segments by Physical Reads" section that MTL_ITEM_CATEGORIES accounts for 51.32% of physical reads, and the top SQL_ID here, with 40.18% of the total, uses that table. Although it has executed only 73 times, its reads per execution are high, making it the top query consuming physical I/O. In contrast, the query at number 4 in this table has executed around 40k times, but since its reads per execution are low, it is not the query to worry about.
Reads per Exec: Possible reasons for high reads per exec are unselective indexes that require large numbers of blocks to be fetched when those blocks are not cached well in the buffer cache, index fragmentation, a large clustering factor in the index, etc.

SQL ordered by Physical Reads (UnOptimized)
SIGNIFICANCE OF THIS SECTION: This section is of concern when you have an Exadata machine in use. Read requests that are satisfied from the Smart Flash Cache in Exadata are termed 'optimized' since they are returned much faster than requests from disk (the implementation uses solid state device (SSD) technology). Additionally, read requests accessing storage indexes using smart scans in Oracle Exadata V2 (significantly reducing I/O operations) also fall under the category 'optimized read requests', since they avoid reading blocks that do not contain relevant data.

IMPORTANT: On database systems where 'Optimized Read Requests' are not present (i.e. not using Exadata), UnOptimized Read Reqs will equal Physical Read Reqs (I/O requests satisfied from disk). In that case the columns 'UnOptimized Read Reqs' and 'Physical Read Reqs' display the same values and the column '%Opt' displays zero (as seen in the extract from the AWR report above).

PARAMETER / DESCRIPTION / ANALYSIS
Physical Read Reqs: Note that the 'Physical Read Reqs' column in the 'SQL ordered by Physical Reads (UnOptimized)' section is the number of I/O requests, not the number of blocks returned.
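As a sketch of where these numbers come from: on 11.2 and later, V$SQL carries both request counters, so the unoptimized count can be derived directly (column names as in the standard V$SQL view):

```sql
-- Total vs. optimized (flash cache / storage index) read requests per cursor;
-- the difference is the unoptimized request count shown in this AWR section.
SELECT *
FROM  (SELECT sql_id,
              physical_read_requests,
              optimized_phy_read_requests,
              physical_read_requests - optimized_phy_read_requests AS unopt_read_reqs
       FROM   v$sql
       ORDER  BY physical_read_requests DESC)
WHERE  ROWNUM <= 10;
```

On non-Exadata systems OPTIMIZED_PHY_READ_REQUESTS is simply 0, matching the '%Opt' of zero described above.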
Be careful not to confuse these with the Physical Reads statistics from the AWR section 'SQL ordered by Reads', which count database blocks read from disk, not actual I/Os (a single I/O operation may return many blocks from disk).

SQL ordered by Parse Calls
SIGNIFICANCE OF THIS SECTION: This section shows the number of times a statement was parsed compared to the number of times it was executed. One-to-one parse/executions may indicate that:
• Bind variables are not being used.
• On RDBMS version 8.1.7.2 and higher, the init.ora parameter session_cached_cursors was not set (100 is usually the suggested starting value).
• The shared pool may be too small and the parsed statement is not being retained long enough for multiple executions.
• The init.ora parameter cursor_sharing should be set to FORCE.
When "CPU Parse" is a significant component of total response time, the next step is to find the SQL statements with the most parses; AWR and Statspack list such SQL statements in sections such as "SQL ordered by Parse Calls".

Tablespace IO Stats
SIGNIFICANCE OF THIS SECTION: These are useful to see what your hot tablespaces are. For example, having the SYSTEM tablespace as the number one source of I/O could indicate improper temporary tablespace assignments, as these used to default to SYSTEM. Having the TEMP or UNDO tablespaces in the top position has already been discussed. Usually in an OLTP system one of your index tablespaces should be at the top; in a DWH or OLAP system, a data tablespace should be. Also look at the latency values: for disk-based systems, 5.0 ms is considered good performance.

PARAMETER / DESCRIPTION / ANALYSIS
Av Rd(ms): Av Rd(ms) in the tablespace I/O stats should ideally be kept under 10, but an average read of up to 20 ms is acceptable. NOTE: When the figure in the Reads column is very low, you can ignore Av Rd(ms).
Av Buf Wt(ms): Av Buf Wt(ms) in the tablespace I/O stats should ideally be kept under 10.
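The per-tablespace read latency discussed above can also be computed live from the file statistics views. This is a sketch (assuming SELECT privileges on the V$ views); note that V$FILESTAT.READTIM is in centiseconds, hence the factor of 10:

```sql
-- Per-tablespace average read latency since instance startup, the live
-- counterpart of the AWR "Tablespace IO Stats" Av Rd(ms) column.
SELECT t.name                                                   AS tablespace_name,
       SUM(f.phyrds)                                            AS phys_reads,
       ROUND(10 * SUM(f.readtim) / NULLIF(SUM(f.phyrds), 0), 1) AS av_rd_ms
FROM   v$filestat f, v$datafile d, v$tablespace t
WHERE  f.file# = d.file#
AND    d.ts#   = t.ts#
GROUP  BY t.name
ORDER  BY phys_reads DESC;
```

Because these counters are cumulative since startup, two samples a few minutes apart give a more meaningful recent latency than the raw totals.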
Buffer Pool Advisory
SIGNIFICANCE OF THIS SECTION: The buffer pool advisory answers the question of how big you should make your database buffer cache. It provides an extrapolation of the benefit or detriment that would result from adding memory to or removing it from the buffer cache. These estimates are based on the current size of the buffer cache and the number of logical and physical I/Os encountered during the reporting period, and can be very helpful in "right-sizing" your buffer cache.

See what the effect of doubling the cache size would be: if it is significant (more than a 20 percent reduction in physical I/O), you can suggest increasing the pool. In this example, when the size factor goes from 1 to 1.96, the estimated physical read factor drops from 1 to 0.26, which is a significant drop and good for the database.

The 'Size Factor' and 'Estimated Phys Read Factor' columns show how quickly or slowly the number of 'Estimated Physical Reads' goes down or up as the size of the buffer cache changes. They greatly simplify evaluation of the buffer cache situation, as you do not need to compare the big figures in 'Size for Estimate' and especially 'Estimated Physical Reads'.

We will also generate an AWR from a good period for comparison. The intention is not to add more buffer pool just because one bad query consumed it (in which case we focus on the query, not the buffer pool). When you see huge physical I/O in other sections, you may be tempted to increase the buffer cache to lessen it. That, however, does not take into account the overhead of serving a bigger buffer cache (at the Oracle level, in particular a longer search for a required buffer in a much bigger space), which may outweigh the benefit of a further reduction in physical reads, especially if that reduction is small compared with the increase in cache size.
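The advisory figures in this section come from V$DB_CACHE_ADVICE, which can be queried directly between AWR snapshots. This is a sketch for the default pool (SIZE_FOR_ESTIMATE is in megabytes):

```sql
-- The same figures the Buffer Pool Advisory is built from: estimated
-- physical reads at a range of candidate cache sizes.
SELECT size_for_estimate AS cache_mb,
       size_factor,
       estd_physical_read_factor,
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
AND    advice_status = 'ON'
ORDER  BY size_for_estimate;
```

Reading it is the same exercise as above: find the row with SIZE_FACTOR = 1.0 (the current size) and see how ESTD_PHYSICAL_READ_FACTOR moves as the size grows.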
PARAMETER / DESCRIPTION / ANALYSIS
First column "P": Apart from the default buffer cache pool (or subpool), which is always present, the buffer cache may have other subpools. The Buffer Pool Advisory section then has a separate subsection for each of those subpools, distinguished by a letter in the leftmost column: 'D' for the default buffer cache (always present), 'K' for the keep buffer cache (if db_keep_cache_size is defined), 'R' for the recycle buffer cache (if db_recycle_cache_size is defined), and entries for caches with non-default block sizes (if defined with the db_<N>k_cache_size parameters).
Size Factor: Shows the ratio of the proposed size of the buffer cache (increased or decreased) to the approximate actual size currently in use, found in the row with 'Size Factor' = 1.0.
Estimated Phys Read Factor: Shows the ratio of the estimated number of physical reads for the proposed (increased or decreased) buffer cache size to the number of physical reads calculated for the current size, found in the row with 'Estimated Phys Read Factor' = 1.0. HERE WE CAN SEE THAT INCREASING OUR BUFFER CACHE FROM 6 GB TO 10 GB HELPS SIGNIFICANTLY: THIS FACTOR COMES DOWN FROM 1 TO 0.3.

PGA Memory Advisory
SIGNIFICANCE OF THIS SECTION: Similar to the Buffer Pool Advisory, this statistic shows how an increase or decrease in PGA memory would change the Estd PGA Cache Hit %. The starting point is the row with "Size Factor" = 1.0, which gives the current PGA allocation; in this example 12 GB is allocated to the PGA. With this allocation the Estd PGA Cache Hit % is 100, which is good. Hence, even if we increase the PGA, the Estd PGA Cache Hit % will not change, so it is not advisable to increase the PGA further.
However, decreasing it can save memory without hurting performance. In this section, first find the row with a 'Size Factr' column value of 1.0; this column indicates the size factor of the PGA estimates, and a value of 1 indicates the current PGA size. The 'PGA Target Est (MB)' value of this row shows your current PGA size: 12 GB in this example. The other columns of interest are 'Estd Extra W/A MB Read/Written to Disk' and 'Estd PGA Overalloc Count'. As you go down or up from the row with 'Size Factr' = 1.0, you get estimates of disk usage (column 'Estd Extra W/A MB Read/Written to Disk') for bigger or smaller settings of PGA_AGGREGATE_TARGET. The lower the disk usage figure in this column, usually the better: a lower value means fewer work areas have to be spilled to disk, enhancing performance of the Oracle instance.

Whether to increase or decrease PGA_AGGREGATE_TARGET from the current value should always be investigated. The answer depends on how much total memory (SGA + PGA) can be allocated to this database instance on the machine, taking into account the memory needs of other database instances on the same machine, non-Oracle software, and the OS itself. Too much memory allocated wastes memory, while too little causes possible performance issues in the Oracle environment.

PARAMETER / DESCRIPTION / ANALYSIS
Estd PGA Overalloc Count: Shows how many times the database instance processes would need to request more PGA memory at the OS level than the amount shown in the 'PGA Target Est (MB)' value of the respective row. Ideally this field should be 0 (indicating that the PGA is correctly sized and no overallocations should take place), and that is your equally important second goal. In the given example this goal is achieved with a PGA_AGGREGATE_TARGET of even 1,536 MB, so our PGA allocation is way too high.
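The PGA Memory Advisory figures come from V$PGA_TARGET_ADVICE, which can be checked at any time. A sketch (assuming SELECT privileges on the V$ views):

```sql
-- Live source of the PGA Memory Advisory: look for the smallest target with
-- ESTD_OVERALLOC_COUNT = 0 and an acceptable estimated cache hit percentage.
SELECT ROUND(pga_target_for_estimate / 1024 / 1024) AS target_mb,
       pga_target_factor,
       ROUND(estd_extra_bytes_rw / 1024 / 1024)     AS estd_extra_mb_rw,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
FROM   v$pga_target_advice
ORDER  BY pga_target_for_estimate;
```

This is the same two-goal reading as above: minimize the extra bytes read/written to disk while keeping the overallocation count at zero.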
Shared Pool Advisory
SIGNIFICANCE OF THIS SECTION: THE SHARED POOL IS A QUALITATIVE KIND OF POOL, WHILE THE OTHER POOLS ARE QUANTITATIVE. This means it greatly depends on "what" you are keeping in the shared pool, not just "how much". Similar to the Buffer Pool and PGA advisories, this statistic shows how an increase or decrease in shared pool memory would change the Estd LC Load Time (s). The starting point is the row with "SP Size Factor" = 1.0, which gives the current shared pool allocation: approximately 2 GB in this example, with an Estd LC Load Time (s) of 58,972. If we increase the shared pool size to 4 GB, the Estd LC Load Time (s) comes down to 39,301. That is not a huge benefit, but the shared pool can still be increased if possible. You can also analyze the "Est LC Time Saved Factr" column: if it increases as the "Size Factor" increases, then increasing the shared pool will improve performance.

PARAMETER / DESCRIPTION / ANALYSIS
EST LC TIME SAVED: LC means LIBRARY CACHE. This shows how much of this time you can save if you increase your shared pool.

SGA Target Advisory
SIGNIFICANCE OF THIS SECTION: The SGA target advisory is somewhat of a summation of all the advisories previously presented in the AWR report. It helps you determine the impact of changing the SGA target size on overall database performance, using DB Time as the measure of the increase or decrease in performance relative to the memory change. The report also summarizes an estimate of the physical reads associated with each listed SGA setting, starting at a "Size Factor" of 1 (this indicates the current size of the SGA).
If the "Est DB Time (s)" decreases significantly as the "Size Factor" increases, then increasing the SGA will significantly reduce physical reads and improve performance. In our example, however, Est DB Time does not reduce much as the SGA increases, so increasing the SGA will not be beneficial in our case. When the SQL requires a large volume of data access, increasing SGA_TARGET can reduce the amount of disk I/O and improve SQL performance.

Buffer Wait Statistics
SIGNIFICANCE OF THIS SECTION: The buffer wait statistics report helps you drill down on specific buffer wait events and where the waits are occurring. We focus on the total wait time (s); in this example it is only 702 seconds, and the average time (ms) is also only 1 ms.

Enqueue Activity
The enqueue activity report provides information on enqueues (higher-level Oracle locking). As with other reports, if you see high wait times here, dig further into the nature of the enqueue and determine the cause of the delays. This section gives additional detail for enqueue waits (e.g. requests, successful gets, failed gets), which indicates the percentage of times an enqueue had to wait and the number of failed gets. In our example the top row does have failed gets, but the number of waits is only 55 and the wait time (s) is also not a high number, so enqueues are not a major issue in this AWR.

Undo Segment Summary
SIGNIFICANCE OF THIS SECTION:
PARAMETER / DESCRIPTION / ANALYSIS
Min/Max TR (mins): Represents the minimum and maximum tuned retention minutes for undo data. This data helps to set the UNDO_RETENTION database parameter; in this example it can be set to 868.4 minutes.
Max Qry Len (s): Represents the maximum query length in seconds; in this example, 51,263 seconds.
STO/OOS: Represents the count of Snapshot Too Old and Out Of Space errors that occurred during the snapshot period.
In this example, we can see that 0 such errors occurred during this period.

Latch Activity
SIGNIFICANCE OF THIS SECTION: The latch activity report provides information on Oracle's low-level locking mechanism, the latch. From this report you can determine whether Oracle is suffering from latching problems and, if so, which latches are causing the greatest amount of contention on the system. There are a plethora of latch statistics. Misses, unless they cause a significant number of sleeps, are not of concern; sleeps can be a problem. You may need to look at the spin count if you have excessive sleeps; the undocumented _SPIN_COUNT parameter was based on CPU speed, and the 2000 setting dates back several years. If latch waits or other latch-related events are not showing up, then latches probably are not an issue. Usually the cache buffers and shared pool related latches are the major ones.

PARAMETER / DESCRIPTION / ANALYSIS
WAIT TIME (S): Should be 0.
Pct Get Miss: Should be 0 or near 0.

Segments by Logical Reads
SIGNIFICANCE OF THIS SECTION: This statistic displays segment details based on logical reads, sorted on the "Logical Reads" column in descending order. It provides information about the segments with the most logical reads; most of the corresponding SQLs can be found under SQL Statistics -> SQL ordered by Gets. These reports can help you find "hot" objects in the database. Review those objects to determine why they are hot and whether there are tuning opportunities on the objects themselves (e.g. partitioning) or on the SQL accessing them. When a table has high logical reads while its index has relatively few, there is a high possibility that some SQL is using the index inefficiently, causing a throw-away issue in the table: find the columns of the condition evaluated on the table side and move them into the index.
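The segment-level rankings in this and the following sections have a live counterpart in V$SEGMENT_STATISTICS. A sketch for logical reads (swap the statistic name for 'physical reads' or 'row lock waits' to mirror the other sections):

```sql
-- Hot segments by logical I/O since instance startup, the live counterpart
-- of the AWR "Segments by Logical Reads" section.
SELECT *
FROM  (SELECT owner, object_name, object_type, value AS logical_reads
       FROM   v$segment_statistics
       WHERE  statistic_name = 'logical reads'
       ORDER  BY value DESC)
WHERE  ROWNUM <= 10;
```

As with the other V$ counters, these are cumulative since startup, so compare two samples when you care about recent activity.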
When an index has high logical reads, the index is being used excessively over a wide range; reduce the range with an additional filtering condition on columns that are in the same index. If a SQL is suboptimal, this section can indicate the tables and indexes where the workload or throw-away occurs and where the performance issue lies. It can be particularly useful when there are no actual statistics available elsewhere (e.g. no Row Source Operation counts (STAT lines) in the SQL trace, and no actuals in the SQLT/Display Cursor report).

Segments by Physical Reads
SIGNIFICANCE OF THIS SECTION: If there is a high number of physical read waits (db file scattered read, db file sequential read, and direct path read), this section can indicate on which segments (tables or indexes) the issue occurs. This can help identify suboptimal execution plan lines, as well as changes to tablespace and storage management that would improve performance. When SQLs need excessive physical reads on particular segments, those segments are listed here; check whether some of the SQLs are using unnecessary full scans or wide range scans.

The statistic displays segment details based on physical reads, sorted on the "Physical Reads" column in descending order. Queries using these segments should be analyzed to check whether any full table scans are happening on them; if so, proper indexes should be created to eliminate the FTS. Most of these SQLs can be found under SQL Statistics -> SQL ordered by Reads. These reports can help you find "hot" objects in the database. Review those objects to determine why they are hot and whether there are tuning opportunities on the objects (e.g. partitioning) or on the SQL accessing them.
For example, if an object shows up on the physical reads report, it may be that an index is needed on that object.

PARAMETER / DESCRIPTION / ANALYSIS
Captured segments account for 90.8%: This percentage is important; it should be high, which shows that we are looking at the right data.
The top segment record, MTL_ITEM_CATEGORIES: MTL_ITEM_CATEGORIES accounts for 51.32% of the total physical reads, which is a big number. We need to see which SQL statement is using this segment and probably tune that SQL; go to the "SQL Ordered by Reads" section of the AWR to find it.

Segments by Row Lock Waits
SIGNIFICANCE OF THIS SECTION: If there is a high level of "enq: TX - row lock contention" waits, this section can identify the segments (tables/indexes) on which they occur. The statistic displays segment details based on the total row lock waits that happened during the snapshot period, sorted on the "Row Lock Waits" column in descending order. DML statements using these segments should be analyzed further to check the possibility of reducing concurrency due to row locking. The general solution is to give the SQL a more selective condition so it locks only the rows it needs, or to commit or roll back as soon as possible after the DML execution.

Segments by ITL Waits
SIGNIFICANCE OF THIS SECTION: If there is a high level of "enq: TX - allocate ITL entry" waits, this section can identify the segments (tables/indexes) on which they occur. Whenever a transaction modifies a segment block, it first adds its transaction ID to the Interested Transaction List (ITL) of the block. The size of this list is a block-level configurable setting; based on its value, that many ITL slots are created in each block.
An ITL wait happens when the number of transactions trying to update the same block at the same time is greater than the number of available ITL slots. The total waits happening in the example are very low (34 is the maximum); hence it is not recommended to increase the ITL setting here. As with row lock waits, the general solution is to give the SQL a more selective condition so it locks only the rows it needs, or to commit or roll back as soon as possible after the DML execution.

Segments by Buffer Busy Waits
SIGNIFICANCE OF THIS SECTION: If there is a high level of buffer busy waits, this section can identify the segments (tables/indexes) on which they occur. Based on the reason code or class#, the treatment of each differs. The physical segment attributes such as freelists, freelist groups, pctfree, pctused, and so on are handled by rebuilding the object; but before that treatment, check whether your SQLs can visit different blocks at the same time, if possible, to avoid the contention.

Buffer busy waits happen when more than one transaction tries to access the same block at the same time. In this scenario, the first transaction to acquire the lock on the block proceeds, while the other transactions wait for the first one to finish. If there is more than one instance of a process continuously polling the database with the same SQL (to check whether there are records available for processing), the same block is read concurrently by all instances of the process, resulting in buffer busy waits.

This is one of the posts in the Performance Tuning Fundamentals series.
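Before moving on: the row-lock contention described in these segment sections can be observed live on a running system. A sketch (assuming SELECT privileges on V$SESSION; ROW_WAIT_OBJ# can be joined to DBA_OBJECTS.OBJECT_ID to name the segment involved):

```sql
-- Sessions currently blocked by another session, with the wait event and
-- the object on whose row they are waiting.
SELECT sid,
       serial#,
       blocking_session,
       event,
       row_wait_obj#,
       seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;
```

An empty result set means no session is blocked right now, which is consistent with the low row-lock and ITL wait counts seen in this example AWR.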
Click on the links below to read more posts from the series:
Performance Tuning Basics 1 : Selectivity and Cardinality
Performance Tuning Basics 2 : Parsing
Performance Tuning Basics 3 : Parent and Child Cursors
Performance Tuning Basics 4 : Bind Variables
Performance Tuning Basics 5 : Trace and TKPROF – Part 1: Trace
Performance Tuning Basics 6 : Trace and TKPROF – Part 2: Generating TKPROF
Performance Tuning Basics 7 : Trace and TKPROF – Part 3: Analyzing TKPROF Files
Performance Tuning Basics 8 : Trace File Analyzer (TRCA)
Performance Tuning Basics 9 : Optimizer Mode
Performance Tuning Basics 10 : Histograms
Performance Tuning Basics 11 : Steps to analyze a performance problem
Performance Tuning Basics 12 : Dynamic Performance Views
Performance Tuning Basics 13 : Automatic Workload Repository (AWR) Basics
Performance Tuning Basics 14 : Active Sessions History (ASH) Basics
Performance Tuning Basics 15 : AWR Report Analysis
Performance Tuning Basics 16 : Using SQL Tuning Health-Check Script (SQLHC)

Brijesh Gogia
I'm an experienced Oracle Applications DBA Architect with more than a decade of full-time DBA/Architect experience. I have gained wide knowledge of the Oracle and non-Oracle software stack and have worked on several big projects for multi-national companies. I enjoy working with leading-edge technology and have a passion for database performance and stability. Thankfully my work allows me time for researching new technologies (and for writing about them). You can connect with me on LinkedIn.

Adaptive Cursors and SQL Plan Management

Use bind variables that intelligently pick the right plan every time, and make sure a new execution plan is perfect before it's used. See Series TOC

By now many of you have heard an earful about how using bind variables enhances performance; for those who haven't, let me try to explain the core concepts as simply as I can. (I also recommend a visit to Tom Kyte's asktom.oracle.com, where you can learn how singularly important it is to use bind variables to improve the performance of SQL statements, as well as how to use them in several languages.)

Assume you have a table called CUSTOMERS that has, among others, a column called STATE_CODE, which stores the customer's residence state as the two-letter abbreviation of the U.S. state (CT, NY, and so on). When you want to find out how many customers have purchased more than three times and are from the state of Connecticut ('CT'), you will most likely issue:

select count(1)
from customers
where state_code = 'CT'
and times_purchased > 3;

When you issue this query, Oracle has to perform an activity called parsing, which generates an execution plan for the SQL statement you just issued. After parsing, the query is ready for execution. Parsing is similar in concept to compilation of code in software: when you write something in C++, say, you can't run it in the operating system directly; first you must compile it and make it an executable. The activity of parsing makes an executable from the SQL statement.
Now suppose another user issues this statement:

select count(1) from customers where state_code = 'NY' and times_purchased > 3;

This is almost identical to the query above, with one exception: the state_code searched is NY instead of CT. Ideally, the parsed code would be the same and the value of the literal would be supplied at runtime. But the way the queries are written, Oracle interprets them as different and has to do another parse for the second query. Consider instead if the queries were written as:

select count(1) from customers where state_code = :state_code and times_purchased > 3;

The first query would have passed CT as the value of :state_code, and the second, NY. The query would not have to be parsed again. Here, :state_code is a bind variable, a placeholder for values to be passed in at execution time. Bind variables are written in the form :VariableName, as shown below:

where state_code = :state_code

If your code does not use bind variables and is instead littered with references to literal values such as where state_code = 'CT', you can force all literals to be converted to bind variables by setting an initialization parameter:

cursor_sharing = force

This parameter causes the statement where state_code = 'CT' to be rewritten as where state_code = :"SYS_0001", where SYS_0001 is a system-generated variable name. This approach makes such statements identical.

The Problem with Bind Variables

Well, if bind variables are so great, why not use them all the time? Don't we have a magic bullet, cursor_sharing, which transforms all the bad code into sharable statements?
(Those already familiar with the reasons, especially the concept of bind peeking, can skip to the section titled "Adaptive Cursors".)

Consider the case where there is an index on the column STATE_CODE. The values in the column are shown below:

select state_code, count(1) from customers group by state_code;

ST   COUNT(1)
-- ----------
NY     994901
CT       5099

As you can see, the data is highly skewed; roughly half of one percent of the rows have 'CT' in them while the rest have 'NY'. It's not surprising, considering the populations of the two states. Now, let's see what type of execution plan is generated for the query shown earlier:

SQL> set autot traceonly explain
SQL> select * from customers where state_code = 'NY' and times_purchased > 3
  2  /

Execution Plan
----------------------------------------------------------
Plan hash value: 2008213504

-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |   895K|    26M|  1532   (9)| 00:00:19 |
|*  1 |  TABLE ACCESS FULL| CUSTOMERS |   895K|    26M|  1532   (9)| 00:00:19 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("TIMES_PURCHASED">3 AND "STATE_CODE"='NY')

The query used a full table scan, the appropriate action since nearly all of the rows are returned by the query and an index scan would have been very expensive. Now issue the same query with 'CT':

SQL> c/NY/CT
  1* select * from customers where state_code = 'CT' and times_purchased > 3
SQL> /

Execution Plan
----------------------------------------------------------
Plan hash value: 4876992

---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |               |  4589 |   138K|    56   (2)| 00:00:01 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| CUSTOMERS     |  4589 |   138K|    56   (2)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IN_CUST_STATE |  5099 |       |    12   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("TIMES_PURCHASED">3)
   2 - access("STATE_CODE"='CT')

It used the index. Again, that was appropriate: CT accounts for only a tiny fraction of the rows, so an index scan is beneficial. Now let's see the behavior when using a bind variable. Here is the behavior in Oracle Database 10g:

SQL> var state_code varchar2(2)
SQL> exec :state_code := 'CT'

PL/SQL procedure successfully completed.

SQL> select max(times_purchased) from customers where state_code = :state_code
  2  /

Execution Plan
----------------------------------------------------------
Plan hash value: 296924608

--------------------------------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |     1 |     6 |  1511   (8)| 00:00:19 |
|   1 |  SORT AGGREGATE    |           |     1 |     6 |            |          |
|*  2 |   TABLE ACCESS FULL| CUSTOMERS |   500K|  2929K|  1511   (8)| 00:00:19 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("STATE_CODE"=:STATE_CODE)

The optimizer chose a full table scan on the CUSTOMERS table. Shouldn't the index have been used, since we are searching for CT only, which accounts for a tiny fraction of the total records? What made the optimizer choose a full table scan over an index scan?

The answer is a phenomenon called bind peeking. Earlier, when you ran that query with the bind variable value set to 'NY', the optimizer had to do a hard parse for the first time, and while doing so it peeked at the bind variable to see what value had been assigned to it. The value was 'NY'. Since 'NY' accounts for almost all of the rows, the optimizer chose a full table scan (as expected). In addition, it also froze that plan for the query. Next, when we issued the same query but for 'CT', the plan was not recalculated and the optimizer reused the earlier plan, even though it was not the best one for the purpose. Had you used a literal such as 'CT' instead of the bind variable in the query, the optimizer would have picked the correct plan.
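The bind-peeking behavior described above is easy to reproduce. Here is a minimal SQL*Plus sketch, assuming the article's CUSTOMERS table and index exist in your schema; this illustrates the pre-11g behavior, not a recommended practice:

```sql
-- Hard parse happens on the first execution, so whichever value is
-- bound first decides the plan that gets frozen (pre-11g behavior).
var state_code varchar2(2)

exec :state_code := 'NY'          -- peeked at hard parse: full scan chosen
select count(1) from customers
where  state_code = :state_code and times_purchased > 3;

exec :state_code := 'CT'          -- reuses the cached full-scan plan
select count(1) from customers
where  state_code = :state_code and times_purchased > 3;

-- Flushing the shared pool forces a fresh hard parse; the next execution
-- peeks again (this time at 'CT') and picks the index plan instead.
conn / as sysdba
alter system flush shared_pool;
```

Note that flushing the shared pool is a blunt instrument on a busy system; it is used here only to demonstrate that the "frozen" plan is simply the one cached at hard-parse time.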
Thus, as you can see, bind variables, though good in most cases, actually fail where the selectivity of the values radically affects the plan, as in this example where 'CT' selects a tiny fraction of the rows and 'NY' selects nearly all of them. In cases where the distribution of data is such that the selectivity is almost the same for all values, the execution plan remains the same. Therefore smart SQL coders choose when to break the cardinal rule of using bind variables, employing literals instead.

Adaptive Cursors

But what if you don't have a lot of smart coders, or the time to rewrite these statements? Does Oracle provide a smart alternative? Yes, it does. In Oracle Database 11g, cursors have a new kind of intelligence. Instead of blindly using the cached execution plan whenever the query is executed, they actually decide whether a plan has to be recalculated when the bind variable value changes. If a cursor has a bind variable in it, the database observes it for a while to see what type of values are passed to the variable and whether the plan needs recalculation. If the plan does need to be recalculated, the cursor is marked "Bind-Sensitive". The example query shown previously is a perfect candidate: the correct optimizer plan will be used based on the value of the bind variable. There is no need for you to do anything; it happens automatically. The dictionary view V$SQL has two new columns to support this: IS_BIND_SENSITIVE and IS_BIND_AWARE.
Let's see how they are used:

select is_bind_sensitive, is_bind_aware, sql_id, child_number
from   v$sql
where  sql_text = 'select count(1) from customers where state_code = :state_code and times_purchased > 3';

I I SQL_ID        CHILD_NUMBER
- - ------------- ------------
Y Y 7cv5271zx2ttg            0
Y N 7cv5271zx2ttg            1

Let's see what the columns mean. Oracle observes a cursor for a while and sees how the bound values differ. If the different values can potentially alter the plan, the cursor is labeled "Bind-Sensitive" and the column IS_BIND_SENSITIVE shows "Y". After a few executions, the database knows more about the cursor and the values, and decides whether the cursor should be made to change plans based on the values. If so, the cursor is called "Bind-Aware" and the column IS_BIND_AWARE shows "Y". In summary: Bind-Sensitive cursors are potential candidates for changed plans, and Bind-Aware ones are those whose plans actually change.

A new view, V$SQL_CS_HISTOGRAM, shows how many times the SQL statement was executed, organized into three buckets for each child cursor, as shown below:

select * from v$sql_cs_histogram
where sql_id = '7cv5271zx2ttg'
/

ADDRESS  HASH_VALUE SQL_ID        CHILD_NUMBER  BUCKET_ID      COUNT
-------- ---------- ------------- ------------ ---------- ----------
45C8218C 2144429871 7cv5271zx2ttg            5          0          0
45C8218C 2144429871 7cv5271zx2ttg            5          1          2
45C8218C 2144429871 7cv5271zx2ttg            5          2          0
45C8218C 2144429871 7cv5271zx2ttg            4          0          8
... and so on ...
45C8218C 2144429871 7cv5271zx2ttg            0          2          0

As the adaptive cursor sharing feature picks the correct plan based on the value of the bind variable, the database must be holding that information somewhere. It exposes it through another new view, V$SQL_CS_SELECTIVITY, which shows the selectivity of the different values passed to the bind variable:

select * from v$sql_cs_selectivity
where sql_id = '7cv5271zx2ttg'
/

ADDRESS  HASH_VALUE SQL_ID        CHILD_NUMBER PREDICATE   R LOW      HIGH
-------- ---------- ------------- ------------ ----------- - -------- ----------
45C8218C 2144429871 7cv5271zx2ttg            5 =STATE_CODE 0 0.895410 1.094391
45C8218C 2144429871 7cv5271zx2ttg            4 =STATE_CODE 0 0.004589 0.005609
45C8218C 2144429871 7cv5271zx2ttg            4 =STATE_CODE 1 0.002295 0.002804
45C8218C 2144429871 7cv5271zx2ttg            3 =STATE_CODE 0 0.002295 0.002804
45C8218C 2144429871 7cv5271zx2ttg            0 =STATE_CODE 0 0.004589 0.005609

This view shows a wealth of information. The column PREDICATE shows the various predicates (the WHERE conditions) users have used. The LOW and HIGH columns show the range of selectivities observed for each child cursor. Finally, a third new view, V$SQL_CS_STATISTICS, shows the activity of the cursors marked either Bind-Aware or Bind-Sensitive:

select child_number, bind_set_hash_value, peeked, executions,
       rows_processed, buffer_gets, cpu_time
from   v$sql_cs_statistics
where  sql_id = '7cv5271zx2ttg';

CHILD_NUMBER BIND_SET_HASH_VALUE P EXECUTIONS ROWS_PROCESSED BUFFER_GETS   CPU_TIME
------------ ------------------- - ---------- -------------- ----------- ----------
           1            22981142 Y          1           9592        3219          0
           0            22981142 Y          1           9592        3281          0

This view shows statistics about the executions as recorded by the database. The column EXECUTIONS shows how many times the query was executed with different values in the bind variable. The column PEEKED (shown as "P") shows whether the optimizer peeked at the bind variable to arrive at a good plan. These views expose the inner workings, but you don't need them to use the feature: adaptive cursors are activated and used automatically.

SQL Plan Management

How many times have you seen this situation: a query has the best possible plan, but suddenly something happens to throw the plan off? The "something" could be that someone analyzed the table again, or that an optimizer-influencing parameter such as star_transformation_enabled was changed; the list of possibilities is endless. Out of frustration you may clamp down on any changes to the database, meaning no database stats collection, no parameter changes, and so on. But that's easier said than done. What happens when the data pattern changes?
Take, for instance, the example shown in the section on adaptive cursors. The CUSTOMERS table is filled mostly with customers from New York, so STATE_CODE is mostly 'NY'. When a query with this predicate is executed:

where state_code = 'NY'

index scanning does not occur; the system does a full table scan. When the predicate is:

where state_code = 'CT'

the index is used, since it returns few rows. However, what happens if the pattern changes? Say there are suddenly many more customers from Connecticut (state_code = 'CT'), so many that the percentage of CT rows jumps to 70%. In that case the CT queries should use full table scans. But because you have stopped collecting optimizer stats, the optimizer does not know about the change in pattern and continues to derive an index scan path, which is now inefficient. What can you do?

What if Oracle used the optimal plan but reevaluated that plan when underlying factors such as stats collection or database parameters changed, and used the new plan if and only if the new plan were better? That would be splendid, wouldn't it? Well, it's possible in Oracle Database 11g. Let's see how.

SQL Plan Baselining

In Oracle Database 11g, when an already calculated optimizer plan needs to be updated because of changes in the underlying factors, it does not go into effect immediately. Rather, Oracle evaluates the new plan and implements it only if it improves on the one already in place. In addition, tools and interfaces are available to see the history of the plans calculated for each query and how they compare. The life cycle starts with Oracle identifying a statement as one that is executed more than once, or "repeatable". Once a repeatable statement is identified, its plan is captured and stored as a SQL Plan Baseline in the database, in a logical construct known as the SQL Management Base (SMB).
When a new plan is calculated for this query, for whatever reason, the new plan is also stored in the SMB. So the SMB stores each plan for the query, how it was generated, and so on. Plans are not stored in the SMB automatically; if that were the case, the SMB would hold plans for every type of query and become huge. Instead, you can and should control which queries go into the SMB. There are two ways to do that: have all repeatable queries baselined in the SMB automatically, or manually load the queries that should be baselined.

Let's look at the simple case first: you can make the SQL Plan Management feature capture SQL Plan Baselines for all repeatable queries automatically by setting the database parameter optimizer_capture_sql_plan_baselines, which is FALSE by default, to TRUE. Fortunately, this is a dynamic parameter:

SQL> alter system set optimizer_capture_sql_plan_baselines = true;

After this statement is executed, the execution plans of all repeatable statements are stored as SQL Plan Baselines in the SMB. The SQL Plan Baselines are visible in the view DBA_SQL_PLAN_BASELINES. You can also see them in Enterprise Manager. To examine the baselined plans, bring up EM and click the "Server" tab as shown in the figure below. From this page, click SQL Plan Control in the Query Optimizer section, which brings up the main SPM page. Then click the SQL Plan Baseline tab, which brings up a screen similar to the one shown below.

This is the main SQL Plan Baseline screen. At the top left corner you will see the configuration parameters. It shows Capture SQL Plan Baselines as TRUE, which is what you enabled with the ALTER SYSTEM command. Below that is Use SQL Plan Baselines, set to TRUE (the default), which indicates that a SQL Plan Baseline is to be used for a query if one is available. Whenever a new plan is generated for a query, the old plan is retained in the history in the SMB. However, this also means that the SMB can become crowded with plan histories.
A parameter controls how many weeks plans are retained, shown in the text box next to Plan Retention (Weeks). In this screen it is set to 53 weeks: if a SQL Plan Baseline has not been used for 53 weeks, it is purged automatically. The middle part of the screen has a search box where you can search for SQL statements. Enter a search string and press Go, and you will see the SQL statements and associated plans, as shown in the figure above.

Each baselined plan has status information associated with it. Let's see what the statuses mean:

Enabled - A baselined plan has to be enabled to be considered.
Accepted - A baselined plan is considered to be an acceptable plan for a query.
Fixed - If a plan is marked FIXED, the optimizer considers only fixed plans in deciding the best plan. So if five plans are baselined for a query and three are marked fixed, the optimizer considers only those three in choosing the best plan.
Auto-Purge - Whether the plan should be purged automatically.

The same information, and more, is also available in the view DBA_SQL_PLAN_BASELINES. If you click a plan name, it shows the plan details: the explain plan of the query, along with other relevant attributes such as whether the plan is accepted, enabled, fixed, and so on. Another important attribute is "Origin", which here shows AUTO-CAPTURE, meaning the plan was captured automatically by the system because optimizer_capture_sql_plan_baselines was set to TRUE.

Click Return to get back to the list of plans shown in the previous figure. Now select a plan whose status is not accepted and click Evolve to see if the plan should be examined for a potentially better plan. The following screen comes up. The important control on this screen is the Verify Performance radio button: if you want to examine the plan and compare its performance to that of the existing SQL Plan Baseline for the query, you should select it.
Click OK. This shows the report of the comparison:

-------------------------------------------------------------------------------
                        Evolve SQL Plan Baseline Report
-------------------------------------------------------------------------------

Inputs:
-------
  PLAN_LIST  = SYS_SQL_PLAN_b5429522ee05ab0e
               SYS_SQL_PLAN_b5429522e53beeec
  TIME_LIMIT = DBMS_SPM.AUTO_LIMIT
  VERIFY     = YES
  COMMIT     = YES

Plan: SYS_SQL_PLAN_b5429522e53beeec
-----------------------------------
  It is already an accepted plan.

Plan: SYS_SQL_PLAN_b5429522ee05ab0e
-----------------------------------
  Plan was verified: Time used 3.9 seconds.
  Failed performance criterion: Compound improvement ratio <= 1.4.

                      Baseline Plan      Test Plan     Improv. Ratio
                      -------------      ---------     -------------
  Execution Status:        COMPLETE       COMPLETE
  Rows Processed:                 1              1
  Elapsed Time(ms):            3396            440              7.72
  CPU Time(ms):                1990            408              4.88
  Buffer Gets:                 7048           5140              1.37
  Disk Reads:                  4732             53             89.28
  Direct Writes:                  0              0
  Fetches:                     4732             25            189.28
  Executions:                     1              1

This is a pretty good comparison report that shows how the plans compare.
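The same evolve operation can also be run from the command line through the DBMS_SPM package. A minimal sketch follows; the sql_handle and plan_name values are the ones from this article's examples and will differ on your system:

```sql
set serveroutput on

declare
   l_report clob;
begin
   -- Verify the unaccepted plan against the accepted baseline and,
   -- if it performs better, mark it ACCEPTED (commit => 'YES').
   -- The returned CLOB is the same kind of comparison report EM shows.
   l_report := dbms_spm.evolve_sql_plan_baseline(
                  sql_handle => 'SYS_SQL_e0b19f65b5429522',
                  plan_name  => 'SYS_SQL_PLAN_b5429522ee05ab0e',
                  verify     => 'YES',
                  commit     => 'YES');
   dbms_output.put_line(l_report);
end;
/
```

Setting verify => 'NO' would accept the plan without the performance comparison, which is rarely what you want.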
If a specific plan is shown to perform better, the optimizer will use it. If the new plan does not show an appreciable performance improvement, it should not be accepted and used. SQL Plan Management lets you see firsthand how the plans compare and use only the ones that are truly better. You can change the accepted status of a plan manually by executing the DBMS_SPM package:

declare
   ctr binary_integer;
begin
   ctr := dbms_spm.alter_sql_plan_baseline (
      sql_handle      => 'SYS_SQL_e0b19f65b5429522',
      plan_name       => 'SYS_SQL_PLAN_b5429522ee05ab0e',
      attribute_name  => 'ACCEPTED',
      attribute_value => 'NO'
   );
end;
/

You can also disable a SQL Plan Baseline so that it is not used by the optimizer, and later re-enable it so that it is used again. To disable, use this:

declare
   ctr binary_integer;
begin
   ctr := dbms_spm.alter_sql_plan_baseline (
      sql_handle      => 'SYS_SQL_e0b19f65b5429522',
      plan_name       => 'SYS_SQL_PLAN_b5429522ee05ab0e',
      attribute_name  => 'ENABLED',
      attribute_value => 'NO'
   );
end;
/

When a SQL statement's plan is fixed by a baseline, the explain plan shows it clearly: at the end of the plan you will see a line confirming that the plan came from a baseline.

Differences vs. Stored Outlines

If you are familiar with Stored Outlines, you must be wondering how SQL Plan Management is different. They seem to do the same thing: force a specific execution plan for a query.
But they have one subtle difference: with SQL Plan Management, a baseline can be evaluated for a better plan, and the better plan activated in place of the original one. Outlines, on the other hand, are fixed and can't be overridden unless you disable them or replace them. In addition, plan baselines have a history, so you can see how the plans evolved over a period of time.

A related question may be: what happens if you have a stored outline on a query and the baseline finds a better plan? That would be a conflict, wouldn't it? Well, not really. When a query is parsed with an outline, the execution plan forced by the outline is captured as the SQL Plan Baseline for that query. If the optimizer finds a different execution plan for that statement, it is also captured and stored in the SMB, but it is not an accepted plan. You would have to execute the evolve process to prove the new execution plan is better than the existing SQL Plan Baseline (the old stored outline) before it will be used.

Differences vs. SQL Profiles

Profiles are not "plans" but rather metadata attached to a statement and based on the data, so a query's plan can still change with the predicate when a profile is in use. With SQL Plan Baselines, however, the plan stays the same regardless of the value in the predicate.

Use Cases

So what are example scenarios where you can use this feature? The best example is during upgrades or other parameter changes. One way to baseline a set of queries is to use a SQL Tuning Set (STS) and load the statements from the STS into SPM. You can generate an STS in Oracle Database 10g, export it, import it into Oracle Database 11g, and load its execution plans as SQL Plan Baselines through the DBMS_SPM package. Later, when the optimizer finds better plans, it will add them to the SMB, allowing you to compare.
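For the upgrade scenario, the flow might look like the sketch below. MY_STS_10G and SPM_STG are hypothetical names; the DBMS_SPM calls are the documented ones, but verify the signatures against the reference for your release:

```sql
set serveroutput on

-- On the 11g database, after the SQL Tuning Set has been transported
-- from the 10g source:
declare
   l_plans pls_integer;
begin
   -- Load every plan in the tuning set as an ACCEPTED SQL Plan Baseline
   l_plans := dbms_spm.load_plans_from_sqlset(
                 sqlset_name  => 'MY_STS_10G',
                 sqlset_owner => 'SYSTEM');
   dbms_output.put_line(l_plans || ' plans loaded');
end;
/

-- Baselines themselves can also be moved between 11g databases via a
-- staging table: create it, pack on the source, Data Pump it across,
-- then unpack on the target.
begin
   dbms_spm.create_stgtab_baseline(table_name  => 'SPM_STG',
                                   table_owner => 'SYSTEM');
end;
/
-- On the source:  dbms_spm.pack_stgtab_baseline(table_name => 'SPM_STG', table_owner => 'SYSTEM')
-- On the target:  dbms_spm.unpack_stgtab_baseline(table_name => 'SPM_STG', table_owner => 'SYSTEM')
```

Plans loaded this way are accepted immediately, so the optimizer can use them from the first execution after the upgrade.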
Conversion of Outlines to SQL Plan Management (Release 2 Only)

In versions prior to Oracle Database 11g, when the good ol' Cost Based Optimizer (CBO) produced a bad plan, what did you do? Dig into the problem to unearth the reason, of course. When all else failed, you had a trusted weapon: stored outlines, which force a specific plan for a specific SQL statement. The moment the SQL statement is re-executed, the fixed plan takes effect, eliminating the risk of a bad plan. These outlines are great because they ensure the plan doesn't change, but they are also bad for the same reason: they don't change even when the CBO could have computed a better plan based on a changed execution environment.

In 11g, SQL Plan Management baselines solve that problem because they provide plan stability without rigidity: the CBO is not prevented from computing a new plan, but the new plan is not used until the DBA confirms it. Now that you see how useful baselines are, you may be tempted to change all your outlines to baselines. Fortunately, in Oracle Database 11g Release 2, you can do that in one step. Before I show you how, let me start with the creation of outlines. Let's create an outline on SQL in the SH schema. First we grant the schema the necessary privilege.
SQL> conn / as sysdba
SQL> grant create any outline to sh;

Now connect as SH and check the execution plan for a simple query:

SQL> set lines 132
SQL> set autot traceonly explain
SQL> set echo on
SQL> select sum(amount_sold)
  2  from sales
  3  where channel_id = 1
  4  /

Execution Plan
----------------------------------------------------------
Plan hash value: 3519235612

----------------------------------------------------------------------------------------------
| Id  | Operation            | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |       |     1 |     8 |   489   (2)| 00:00:06 |       |       |
|   1 |  SORT AGGREGATE      |       |     1 |     8 |            |          |       |       |
|   2 |   PARTITION RANGE ALL|       |   196K|  1538K|   489   (2)| 00:00:06 |     1 |    28 |
|*  3 |    TABLE ACCESS FULL | SALES |   196K|  1538K|   489   (2)| 00:00:06 |     1 |    28 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("CHANNEL_ID"=1)

The SQL used a full table scan. There is more than one way to create an outline for the index scan; let me show you one, capturing outlines in a category named MYOUTLINE1:

SQL> alter session set create_stored_outlines = myoutline1
  2  /

Session altered.

SQL> alter session set optimizer_index_cost_adj = 1
  2  /

Session altered.
SQL> select sum(AMOUNT_SOLD)
  2  from sales
  3  where CHANNEL_ID = 1
  4  /

Execution Plan
----------------------------------------------------------
Plan hash value: 2232880448

-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                   |     1 |     8 |    12  (17)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE                     |                   |     1 |     8 |            |          |       |       |
|   2 |   PARTITION RANGE ALL               |                   |   196K|  1538K|    12  (17)| 00:00:01 |     1 |    28 |
|   3 |    TABLE ACCESS BY LOCAL INDEX ROWID| SALES             |   196K|  1538K|    12  (17)| 00:00:01 |     1 |    28 |
|   4 |     BITMAP CONVERSION TO ROWIDS     |                   |       |       |            |          |       |       |
|*  5 |      BITMAP INDEX SINGLE VALUE      | SALES_CHANNEL_BIX |       |       |            |          |     1 |    28 |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("CHANNEL_ID"=1)

At this point the outline has been created for the index scan. Now let's test the use of outlines. First, let's see the effect without the outline:

SQL> conn sh/sh
Connected.
SQL> set autot traceonly explain
SQL> select sum(AMOUNT_SOLD)
  2  from sales
  3  where CHANNEL_ID = 1
  4  /

Execution Plan
----------------------------------------------------------
Plan hash value: 3519235612

----------------------------------------------------------------------------------------------
| Id  | Operation            | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |       |     1 |     8 |   489   (2)| 00:00:06 |       |       |
|   1 |  SORT AGGREGATE      |       |     1 |     8 |            |          |       |       |
|   2 |   PARTITION RANGE ALL|       |   196K|  1538K|   489   (2)| 00:00:06 |     1 |    28 |
|*  3 |    TABLE ACCESS FULL | SALES |   196K|  1538K|   489   (2)| 00:00:06 |     1 |    28 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("CHANNEL_ID"=1)

The query produced a full table scan, as expected. Now let's see the plan once the outline is in effect:

SQL> alter session set use_stored_outlines = MYOUTLINE1
  2  /

Session altered.
SQL> select sum(AMOUNT_SOLD)
  2  from sales
  3  where CHANNEL_ID = 1
  4  /

Execution Plan
----------------------------------------------------------
Plan hash value: 2232880448

-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                   |     1 |     8 |    12  (17)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE                      |                   |     1 |     8 |            |          |       |       |
|   2 |   PARTITION RANGE ALL                |                   |   196K|  1538K|    12  (17)| 00:00:01 |     1 |    28 |
|   3 |    TABLE ACCESS BY LOCAL INDEX ROWID | SALES             |   196K|  1538K|    12  (17)| 00:00:01 |     1 |    28 |
|   4 |     BITMAP CONVERSION TO ROWIDS      |                   |       |       |            |          |       |       |
|*  5 |      BITMAP INDEX SINGLE VALUE       | SALES_CHANNEL_BIX |       |       |            |          |     1 |    28 |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("CHANNEL_ID"=1)

Note
-----
   - outline "SYS_OUTLINE_10062716393818901" used for this statement
The last line, under Note, clearly says that the outline was used, and gives its name. The outline caused the index scan to occur.

Now that we know the outline is in place, let's convert it to a baseline, which is the objective of this section. The process is ridiculously trivial. The package DBMS_SPM has a new function called MIGRATE_STORED_OUTLINE. It returns a report of the changes performed.

SQL> conn / as sysdba
Connected.
SQL> DECLARE
  2     l_rep clob;
  3  BEGIN
  4     l_rep := DBMS_SPM.MIGRATE_STORED_OUTLINE( attribute_name => 'all' );
  5  END;
  6  /

PL/SQL procedure successfully completed.
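Passing 'all' migrates every stored outline in the database. If you only want to migrate a specific category or a single outline, MIGRATE_STORED_OUTLINE accepts those attributes as well. A sketch, assuming the MYOUTLINE1 category from this example; check the exact attribute names against the DBMS_SPM documentation for your release:

```sql
-- Migrate only the outlines in one category (MYOUTLINE1 here).
-- The function returns a CLOB report of what was migrated.
SET SERVEROUTPUT ON
DECLARE
   l_rep CLOB;
BEGIN
   l_rep := DBMS_SPM.MIGRATE_STORED_OUTLINE(
               attribute_name  => 'CATEGORY',
               attribute_value => 'MYOUTLINE1');
   DBMS_OUTPUT.PUT_LINE(l_rep);
END;
/
```

Use `attribute_name => 'OUTLINE_NAME'` with the outline's name to migrate a single outline instead.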
If you check for the baselines now:

SQL> select plan_name, sql_text, fixed from dba_sql_plan_baselines
  2  /

PLAN_NAME                      SQL_TEXT                                                                         FIX
------------------------------ -------------------------------------------------------------------------------- ---
...
SYS_OUTLINE_10062716393818901  select sum(AMOUNT_SOLD)                                                          NO
                               from sales
                               where CHANNEL_ID = 1

Checking some other columns in this view (shown below), you may also notice that the baseline is enabled and accepted, so it will be used by the CBO unless the parameter optimizer_use_sql_plan_baselines has been altered from its default of TRUE to FALSE. The ORIGIN column shows how the baseline came into being. Most baselines are created by auto-capture, but this one was migrated from an outline, and the value of the ORIGIN column shows that clearly. The plan name is the same as the outline name, and MODULE is set to the category of the outline.

SQL> l
  1  SELECT SQL_HANDLE, PLAN_NAME, ORIGIN, ENABLED, ACCEPTED, FIXED, MODULE
  2* FROM DBA_SQL_PLAN_BASELINES
SQL> /
SQL_HANDLE                     PLAN_NAME                      ORIGIN         ENA ACC FIX
------------------------------ ------------------------------ -------------- --- --- ---
MODULE
------------------------------------------------
...
SYS_SQL_ec28978ecd0909c3       SYS_OUTLINE_10062716393818901  STORED-OUTLINE YES YES NO
MYOUTLINE1

Now that this baseline is in place, let's examine its effect on the query.

SQL> conn sh/sh
Connected.
SQL> set autot traceonly explain
SQL> select sum(AMOUNT_SOLD)
  2  from sales
  3  where CHANNEL_ID = 1
  4  /

Execution Plan
----------------------------------------------------------
Plan hash value: 2232880448

-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                   |     1 |     8 |    12  (17)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE                      |                   |     1 |     8 |            |          |       |       |
|   2 |   PARTITION RANGE ALL                |                   |   196K|  1538K|    12  (17)| 00:00:01 |     1 |    28 |
|   3 |    TABLE ACCESS BY LOCAL INDEX ROWID | SALES             |   196K|  1538K|    12  (17)| 00:00:01 |     1 |    28 |
|   4 |     BITMAP CONVERSION TO ROWIDS      |                   |       |       |            |          |       |       |
|*  5 |      BITMAP INDEX SINGLE VALUE       | SALES_CHANNEL_BIX |       |       |            |          |     1 |    28 |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("CHANNEL_ID"=1)

Note
-----
   - SQL plan baseline "SYS_OUTLINE_10062716393818901" used for this statement
Note the use of the baseline, shown in the last line under "Note". The execution plan also uses the index scan; exactly what we intended. We accomplished our objective with a very trivial set of commands.

What about the outline we just migrated from? If you check for the outlines:

SQL> SELECT name, category, used FROM user_outlines;

NAME                           CATEGORY                       USED
------------------------------ ------------------------------ ------
SYS_OUTLINE_05092314510581419  SCOTT_OUTLINES                 USED

The outline is still there.
The migration is not strictly a migration; it is more like a copy, so the old outline is left behind. You may want to drop it, along with all such migrated outlines. One simple command does that: calling the new function DROP_MIGRATED_STORED_OUTLINE in the package DBMS_SPM. It returns the number of outlines dropped.

SQL> declare
  2     l_plans number;
  3  begin
  4     l_plans := dbms_spm.drop_migrated_stored_outline;
  5     dbms_output.put_line('Migrated stored outlines dropped: ' || l_plans);
  6* end;
SQL> /
Migrated stored outlines dropped: 9

PL/SQL procedure successfully completed.

Recall that the baseline you got earlier by migrating the outline is not fixed. This is desirable because in the future there may be better plans, and you don't want to restrict the CBO to this plan alone. However, if you want precisely that, so that the CBO uses this plan and nothing else, you can "fix" the plan.

SQL> declare
  2     l_plans number;
  3  begin
  4     l_plans := dbms_spm.alter_sql_plan_baseline(
  5        sql_handle=>'SYS_SQL_ec28978ecd0909c3',
  6        attribute_name=>'FIXED',
  7        attribute_value=>'YES');
  8     dbms_output.put_line('plans altered: ' || l_plans);
  9  end;
 10  /

PL/SQL procedure successfully completed.

SQL> select plan_name, sql_text, fixed from dba_sql_plan_baselines
  2  /

PLAN_NAME                      SQL_TEXT                                                                         FIX
------------------------------ -------------------------------------------------------------------------------- ---
...
SYS_OUTLINE_10062716393818901  select sum(AMOUNT_SOLD)                                                          YES
                               from sales
                               where CHANNEL_ID = 1

9 rows selected.
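Should you change your mind later, DBMS_SPM can reverse both steps. A sketch, reusing the SQL handle from this example (substitute your own handle):

```sql
-- Un-fix the baseline so the CBO may evolve better plans again ...
DECLARE
   l_plans PLS_INTEGER;
BEGIN
   l_plans := DBMS_SPM.ALTER_SQL_PLAN_BASELINE(
                 sql_handle      => 'SYS_SQL_ec28978ecd0909c3',
                 attribute_name  => 'FIXED',
                 attribute_value => 'NO');

   -- ... or remove the baseline altogether; the function
   -- returns the number of plans dropped.
   l_plans := DBMS_SPM.DROP_SQL_PLAN_BASELINE(
                 sql_handle => 'SYS_SQL_ec28978ecd0909c3');
END;
/
```

Dropping by sql_handle alone removes every plan under that handle; pass plan_name as well to drop a single plan.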
Outlines gave you the ability to avoid unpleasant surprises resulting from changes in plans. Baselines give you that same ability, plus the potential to compute new, better plans. Instead of reinventing the wheel, you can extend the life of your investment in outlines by converting them to baselines.

Conclusion

Adaptive Cursors and SQL Plan Management are just two examples of how the database is now very intelligent about the kinds of requests it gets and how to react to them. Both give you the best of both worlds: with Adaptive Cursors you can use bind variables and yet not risk a sub-optimal plan, and with SQL Plan Management the execution plan is not cast in stone but can evolve over time while retaining stability in the short term.

Back Series TOC
