Wednesday, May 17, 2023

Oracle Technology Prerequisites Rules for Oracle E-Business Suite Applications

This section provides an overview of Oracle's licensing prerequisites for the underlying database and other technologies when licensing Oracle E-Business Suite Applications, as they relate to the level of modifications performed to the application programs. Additional prerequisites may apply to specific application programs; please contact your Oracle sales representative for more information.

Please note that there are two tables: one applicable to orders after February 28, 2007 (see pages 3 and 4), and one applicable to orders up to and including February 28, 2007 (see pages 5 and 6).

Oracle distinguishes between three levels of modifications to its Oracle E-Business Suite Applications, which have increasing licensing requirements for the underlying database and other technologies. These modification levels are:

No Modifications: The customer is deploying the application without modifications. In this case, the customer receives a restricted-use license for Oracle Database Enterprise Edition, Oracle Internet Application Server Enterprise Edition (which includes WebLogic Server Basic), and a limited set of other products. This restricted-use license may only be used to run with the licensed application(s); it cannot be used for any other purposes.

Modifications implemented as Java programs and/or modifications to Forms, Reports and/or Workbooks: These modifications include any creation or modification of reports, forms, and/or workbooks. In addition, modifications implemented as Java programs, including Java that produces an HTML interface or Java business logic, fall under this category. In these two scenarios, the customer receives a restricted-use license for Oracle Database Enterprise Edition and a limited set of other products. This restricted-use license may only be used to run with the licensed application(s); it cannot be used for any other purposes. Additionally, the customer is required to purchase licenses for a limited set of products, including Oracle Internet Application Server Enterprise Edition, for the total number of end users¹ or processors for which the modifications are deployed.

Modifications to the database: Examples of these modifications include adding and/or modifying tables, columns, stored procedures and triggers that are part of the Oracle E-Business Suite application schema. In this situation, the customer is required to purchase licenses for Oracle Database Enterprise Edition, as well as Oracle Internet Application Server Enterprise Edition, for the total number of end users¹ or processors for which the modifications are deployed. The customer receives a restricted-use license for a limited set of products. Additionally, a limited set of other products is required to be licensed, as defined in the following table.



https://www.oracle.com/us/corporate/pricing/application-licensing-table-070571.pdf

Thursday, May 4, 2023

Oracle Database In-Memory Option (DBIM) Basics and Interaction with Data Warehousing Features (Doc ID 1903683.1)

 In this Document

Purpose
Questions and Answers
  
 Basics
 1.  What patches are required for Database In-Memory?
 2.  Where can I find information about how the IM column store works?
 3.  How do I enable/disable the IM column store, and what is the minimum size? 
 4.  Will objects age out of the IM column store, and can I increase the size of the IM column store dynamically?
 5.  What objects can I put in the IM column store?  
 6.  When will an object benefit from being in the IM column store?
 7.  When will an object not benefit from being in the IM column store?
 8.  How do I populate the IM column store?
 9.  How should I adjust SGA_TARGET and PGA_TARGET when using the IM column store?
 10. How can I estimate how much memory I need to allocate for each object I want to put in the IM column store?
 11. Is the IM column store transactionally consistent with the buffer cache?  
 12. Is IM column store compression the same as Advanced Compression?
 13. How is DML handled?
 12.2 New Features
 18.1 New Features
 Licensing
 1. I do not have a license for the DBIM option.  How do I demonstrate it is not in use in my database in case of audit?
 Monitoring
 1. Is there an Advisor for the IM column store?
 2. How can I monitor the contents of the IM column store?
 Advanced Compression
 1. What happens if data on disk is compressed with either Basic or Advanced Compression?
 Data Pump (non-TTS)
 1. Will objects be imported with the INMEMORY attribute?
 Information Lifecycle Management - Automatic Data Optimization (ADO)
 Materialized Views
 1. Can I put my materialized view logs in the IM column store?  
 Multitenant
 1. Should I set the INMEMORY_SIZE parameter at the PDB level, as well as at the CDB level?
 Optimizer and Plan Operations
 1. My table is in the IM column store, so why do I see primary key (PK) access instead of TABLE ACCESS IN MEMORY FULL?
 Parallel Execution
 1. Will parallel execution work with the IM column store?
 Partitioning
 1. If only some of my partitions of a partitioned table are in the IM column store, will the IM column store be used? 
 2. If I exchange partitions with a partitioned table that is INMEMORY, will the table that was exchanged into the partition be in the IM column store?
 3. If you want to be able to define at the partition level what columns will or won't be in the column store, how can this be done?  For example, column1 in partition 1 might be INMEMORY, but column1 might not be INMEMORY for partition 2.
 Query Rewrite
 1. If I have my primary (source) tables in the IM column store, will my query still rewrite using my materialized views?
 RAC
 1. How is the IM column store distributed in a RAC environment?
 Transportable Tablespaces (TTS)
 1.  Is there anything special that needs to be done to use TTS when a tablespace or its objects have the physical attribute of INMEMORY?
 Wrong Results
 1. If I have wrong results, how can I narrow down the problem to confirm or eliminate the IM column store as a factor?
  Automatic In-Memory, External Table Support and Greater Performance in Oracle Database 18c
 In-Memory Dynamic Scans
References

APPLIES TO:

Oracle Database Cloud Service - Version N/A and later
Oracle Database - Enterprise Edition - Version 12.1.0.2 and later
Gen 1 Exadata Cloud at Customer (Oracle Exadata Database Cloud Machine) - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Information in this document applies to any platform.

PURPOSE

This note references some basic information from the Oracle® Database Administrator's Guide 12c Release 1 (12.1) about the Oracle Database In-Memory (IM) column store and adds some information regarding its interaction with various data warehousing features.

QUESTIONS AND ANSWERS

NOTE: In the images and/or the document content below, the user information and data used represents fictitious data from the Oracle sample schema(s) or Public Documentation delivered with an Oracle database product. Any similarity to actual persons, living or dead, is purely coincidental and not intended in any manner.

 

Basics

1.  What patches are required for Database In-Memory?

For Database In-Memory, you should always apply the latest proactive bundle patch (ProactiveBP). If you believe you have hit an unexpected behavior, Development will require you to apply the latest ProactiveBP before Support can file a new bug. For a list of the latest ProactiveBPs, see Document 1937782.1 12.1.0.2 Database Proactive Bundle Patches / Bundle Patches for Engineered Systems and DB In-Memory - List of Fixes in each Bundle. These ProactiveBPs include the latest PSU.

You cannot apply a ProactiveBP on top of an ORACLE_HOME that has a regular PSU applied. If you have Database In-Memory, you should apply only ProactiveBPs. Should you have an ORACLE_HOME that has a PSU already applied, you will need to completely remove all PSUs, and apply the latest ProactiveBP; apply only ProactiveBPs from then on.  For more information about ProactiveBPs, see Document 1962125.1 Oracle Database - Overview of Database Patch Delivery Methods.

2.  Where can I find information about how the IM column store works?

You can find more information about the use of the In-Memory (IM) column store in several places in the Oracle documentation set, especially

Oracle® Database Administrator's Guide 12c Release 1 (12.1), Using the In-Memory Column Store

Oracle® Database Data Warehousing Guide 12c Release 1 (12.1), About the Oracle In-Memory Column Store

Oracle® Database In-Memory - An Oracle Technical Brief October 2014 

3.  How do I enable/disable the IM column store, and what is the minimum size? 

The COMPATIBLE parameter must be set to at least 12.1.0 to enable the feature, and the RDBMS version must be at least 12.1.0.2.0.

These parameters cause the IM column store to be enabled or disabled.

  • INMEMORY_SIZE -- set this parameter to a non-zero value to identify the fixed size of the IM column store pool.   In 12.1, this pool is not dynamic and will not resize itself; you must alter the value of this parameter and bounce the database to change the size of the pool.   The memory specified by this parameter will be subtracted from the SGA, and is not managed by Automatic Memory Management (AMM).  The minimum value allowed is 100M.   A value of 0 disables the IM column store.  In 12.2, this can be dynamically increased (see the next question for details).

  • INMEMORY_QUERY -- set to ENABLE by default, when INMEMORY_SIZE is specified.  To temporarily disable all in-memory queries for the entire database, set this to DISABLE.
    The SQL hints /*+ INMEMORY */ and /*+ NO_INMEMORY */ override the value of the INMEMORY_QUERY parameter and will enable or disable the IM column store for a hinted SQL statement.

See the INMEMORY_* parameters in the RDBMS V12.1 Oracle® Database Reference for more configuration parameters.
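For illustration (the 1G pool size and the sales table are placeholders, not from this note), enabling and disabling the feature might look like this:

```sql
-- Enable: reserve a fixed pool; in 12.1 a restart is required for any size change
ALTER SYSTEM SET inmemory_size = 1G SCOPE=SPFILE;
-- (restart the database for the new pool size to take effect)

-- Temporarily disable all in-memory queries without removing the pool
ALTER SYSTEM SET inmemory_query = DISABLE;

-- Per-statement overrides via hints
SELECT /*+ INMEMORY */    COUNT(*) FROM sales;
SELECT /*+ NO_INMEMORY */ COUNT(*) FROM sales;
```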

4.  Will objects age out of the IM column store, and can I increase the size of the IM column store dynamically?

No to both questions for 12.1.  The IM column store is not a cache; if you are out of memory in the IM column store, nothing more will be populated until you manually remove an object from the IM column store to make space.  This is a static pool that subtracts its size directly from the SGA.  You must change the value of INMEMORY_SIZE and bounce the database to change the size of the IM column store.

Starting in 12.2, objects still do not age out of the IM column store, but you can increase the size of the IM column store dynamically as long as the following are met:

  • The increase is at least 128MB (you will get an ORA-2097 if it is not)
  • SPFILE is in use
  • There is enough room in the SGA (you may need to increase this first)
    • Non-RAC: SGA_TARGET = SGA_TARGET + <new increase>
    • RAC: SGA_TARGET = (SGA_TARGET + <new increase>) * 1.1
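A sketch of the 12.2 dynamic increase, with illustrative sizes (a single instance already running with SGA_TARGET = 10G and INMEMORY_SIZE = 1G):

```sql
-- Make room in the SGA first if needed, then grow the pool online;
-- the increase must be at least 128MB and an SPFILE must be in use
ALTER SYSTEM SET sga_target    = 11G SCOPE=BOTH;
ALTER SYSTEM SET inmemory_size = 2G  SCOPE=BOTH;
```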

5.  What objects can I put in the IM column store?  

If INMEMORY is enabled at the tablespace level, then all tables and materialized views in the tablespace are enabled for the IM column store by default. Alternatively, you can populate all of an object's columns in the IM column store, or only a subset of them. Similarly, for a partitioned table or materialized view, you can populate all of the partitions in the IM column store or a subset of them. You can enable the IM column store at any of the following levels.

  • Column
  • Table
  • Materialized view
  • Tablespace
  • Partition 
  • Out of line LOBs (in RDBMS 18.1+)
  • External table (in RDBMS 18.1+)

Note the following objects are NOT put in the IM column store:

  • Objects <= 64KB, because they would waste a considerable amount of space inside the IM column store (memory is allocated in 1MB chunks)
  • Index-organized tables (IOTs)
  • Indexes
  • Cluster tables
  • Long data types (deprecated since 8i)
  • Objects owned by the SYS user and stored in the SYSTEM or SYSAUX tablespace
  • Out of line LOBs (< 18.1)
  • External tables (< 18.1)

In 12.1.0.2, the IM column store cannot be used on an Active Data Guard (ADG) standby instance, but it can be used in a logical standby instance and in an instance maintained using Oracle GoldenGate.
Starting in 12.2.0.1, the IM column store can be used on ADG, but only if running on either Exadata, SuperCluster, or in the Oracle Cloud.  The DISTRIBUTE FOR SERVICE value determines where the object will be distributed.
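The levels listed above can be sketched with the following statements (all object and column names are hypothetical):

```sql
ALTER TABLESPACE users DEFAULT INMEMORY;               -- tablespace level
ALTER TABLE sales INMEMORY;                            -- table level
ALTER TABLE sales INMEMORY NO INMEMORY (prod_desc);    -- column level: exclude one column
ALTER TABLE sales MODIFY PARTITION sales_q1 INMEMORY;  -- partition level
ALTER MATERIALIZED VIEW sales_mv INMEMORY;             -- materialized view level
```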

6.  When will an object benefit from being in the IM column store?

Storing a database object in the IM column store can improve performance significantly for the following types of operations performed on the database object:

  • A query that scans a large number of rows and applies filters that use operators such as =, <, >, and IN
  • A query that selects a small number of columns from a table or materialized view with a large number of columns, such as a query that selects five columns from a table with 100 columns
  • A query that joins a small table to a large table
  • A query that aggregates data

7.  When will an object not benefit from being in the IM column store?

  • Queries with complex predicates
  • Queries that select a large number of columns
  • Queries that return a large number of rows
  • Queries with multiple large table joins

8.  How do I populate the IM column store?

The population of the IM column store occurs automatically, depending on the value of the INMEMORY subclause PRIORITY.

  • INMEMORY PRIORITY CRITICAL -- populated immediately after the database is opened
  • INMEMORY PRIORITY HIGH, INMEMORY PRIORITY MEDIUM, or INMEMORY PRIORITY LOW -- populated after all higher-priority objects have been populated, if there is still room in the IM column store
  • INMEMORY PRIORITY NONE, or simply INMEMORY -- populated upon first scan of the object, if there is still room in the IM column store (this is the default)
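For example (table names are hypothetical), the PRIORITY subclause is set per object:

```sql
ALTER TABLE orders    INMEMORY PRIORITY CRITICAL;  -- populate as soon as the database opens
ALTER TABLE customers INMEMORY PRIORITY HIGH;      -- populate after CRITICAL objects, as space allows
ALTER TABLE sales     INMEMORY;                    -- default (PRIORITY NONE): populate on first scan
```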

 

9.  How should I adjust SGA_TARGET and PGA_TARGET when using the IM column store?

Additional space is required for the shared_pool when using the IM column store. Oracle Database In-Memory allocates additional locks from the shared pool and has additional overhead for the IM column store in a RAC environment. Without adjustments you receive an ORA-4031 error during the population of the In-Memory column store if the shared_pool cannot be extended further.  The following adjustments are recommended:*

Type of Database            SGA_TARGET                            PGA_AGGREGATE_TARGET
Single-instance Databases   SGA_TARGET + INMEMORY_SIZE            PGA_AGGREGATE_TARGET
RAC Databases               SGA_TARGET + (INMEMORY_SIZE * 1.1)    PGA_AGGREGATE_TARGET

*  MEMORY_TARGET = SGA_TARGET + PGA_AGGREGATE_TARGET
** These are minimum guidelines.  If you are using parallel execution, then to reduce the possibility of spilling to TEMP on large joins and aggregations you should also make sure PGA_AGGREGATE_TARGET is at least as large as (PARALLEL_MAX_SERVERS * 2G).
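As a worked example of the guidelines above (all sizes are illustrative): with a current SGA_TARGET of 20G and a planned INMEMORY_SIZE of 10G:

```sql
-- Single instance: 20G + 10G = 30G
ALTER SYSTEM SET sga_target = 30G SCOPE=SPFILE;

-- RAC: 20G + (10G * 1.1) = 31G on each instance
ALTER SYSTEM SET sga_target = 31G SCOPE=SPFILE SID='*';
```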

10. How can I estimate how much memory I need to allocate for each object I want to put in the IM column store?

You can use dbms_compression.get_compression_ratio to estimate the memory requirements for putting an object in the IM column store.  In the example below, we are using a compression type of COMP_INMEMORY_QUERY_LOW, which is the compression type set by default when you use the keyword INMEMORY and do not specify a compression type.  You can find the other compression types here.  You can check the actual size of the object once it's in the IM column store by selecting inmemory_size from v$im_segments.

set serveroutput on
DECLARE
  l_blkcnt_cmp      binary_integer;
  l_blkcnt_uncmp    binary_integer;
  l_row_cmp         binary_integer;
  l_row_uncmp       binary_integer;
  l_cmp_ratio       number;
  l_comptype_str    varchar2(100);
BEGIN
   dbms_compression.get_compression_ratio(
--  input parameters
    scratchtbsname => upper('&ScratchTBS'),                     -- scratch tablespace
    ownname        => upper('&ownername'),                      -- owner of the table
    objname        => upper('&TableName'),                      -- table name
    subobjname     => NULL,                                     -- partition name
    comptype       => DBMS_COMPRESSION.COMP_INMEMORY_QUERY_LOW, -- compression algorithm
-- output parameters
    blkcnt_cmp   => l_blkcnt_cmp,                               -- number of compressed blocks
    blkcnt_uncmp => l_blkcnt_uncmp,                             -- number of uncompressed blocks
    row_cmp      => l_row_cmp,                                  -- number of rows in a compressed block
    row_uncmp    => l_row_uncmp,                                -- number of rows in an uncompressed block
    cmp_ratio    => l_cmp_ratio,                                -- compression ratio, which is the number of uncompressed blocks divided by the number of compressed blocks
    comptype_str => l_comptype_str                              -- compression type
);                
dbms_output.put_line('.');
dbms_output.put_line('OUTPUT: ');
dbms_output.put_line('T1 '||l_comptype_str||' ratio: '||to_char(l_cmp_ratio,'99.999'));
end;
/
--Example:
--  In this example, we see that the compression ratio is actually better than was estimated.

-- create a test table
create table t1 as select * from dba_objects;  -- CTAS performs a direct load; no APPEND hint is needed


-- see what the on-disk size is
SQL> select sum(bytes), sum(blocks) from user_segments where segment_name = 'T1';

SUM(BYTES) SUM(BLOCKS)
---------- -----------
  13631488        1664


-- calculate the compression ratio
set serveroutput on
DECLARE
  l_blkcnt_cmp      binary_integer;
  l_blkcnt_uncmp    binary_integer;
  l_row_cmp         binary_integer;
  l_row_uncmp       binary_integer;
  l_cmp_ratio       number;
  l_comptype_str    varchar2(100);
BEGIN
   dbms_compression.get_compression_ratio(
--  input parameters
    scratchtbsname => upper('users'),                           -- scratch tablespace
    ownname        => upper('<USER>'),                          -- owner of the table
    objname        => upper('t1'),                              -- table name
    subobjname     => NULL,                                     -- partition name
    comptype       => DBMS_COMPRESSION.COMP_INMEMORY_QUERY_LOW, -- compression algorithm
-- output parameters
    blkcnt_cmp   => l_blkcnt_cmp,                               -- number of compressed blocks
    blkcnt_uncmp => l_blkcnt_uncmp,                             -- number of uncompressed blocks
    row_cmp      => l_row_cmp,                                  -- number of rows in a compressed block
    row_uncmp    => l_row_uncmp,                                -- number of rows in an uncompressed block
    cmp_ratio    => l_cmp_ratio,                                -- compression ratio
    comptype_str => l_comptype_str                              -- compression type
);
dbms_output.put_line('.');
dbms_output.put_line('OUTPUT: ');
dbms_output.put_line('T1 '||l_comptype_str||' ratio: '||to_char(l_cmp_ratio,'99.999'));
end;
/
.
OUTPUT:
T1 "In-memory Memcompress Query Low" ratio:   2.800

PL/SQL procedure successfully completed.


-- manually calculate the inmemory size by using the compression ratio
   -- 13631488 on-disk bytes / 2.8 compression ratio = 4868388.57 estimated bytes in IM column store


-- put the table in the IM column store
alter table T1 inmemory;
select * from t1 where 1=0;

Table altered.


-- check the actual size in the IM column store
SQL> select inmemory_size from v$im_segments where segment_name = 'T1';

INMEMORY_SIZE
-------------
      4325376


11. Is the IM column store transactionally consistent with the buffer cache?  

Yes.  The database keeps the columnar data transactionally consistent with the buffer cache.  When fetching data, Oracle Database can read either the IM column store or the database buffer cache, or both within the same query. The database transparently sends OLTP queries (such as primary key look-ups) to the buffer cache, and analytic and reporting queries to the IM column store.   Read consistency in the IM column store is managed via System Change Numbers (SCNs) just as it is without Database In-Memory enabled.

12. Is IM column store compression the same as Advanced Compression?

The IM column store has its own compression algorithms, which are completely different from those of Advanced Compression.  Advanced Compression focuses on saving space; the IM column store uses compression to improve query performance as well as save space.  The default, MEMCOMPRESS FOR QUERY LOW, gives the best performance of all the compression methods.  See IM Column Store Compression Methods for a description of all options.  Note: Changing the compression clause of columns with an ALTER TABLE statement results in a repopulation of any existing data in the IM column store.  See the "Advanced Compression" section below for information about the IM column store's interaction with table compression.
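For illustration (table names are hypothetical), the compression method is chosen per object in the INMEMORY clause:

```sql
ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY LOW;             -- default: best scan performance
ALTER TABLE sales_history INMEMORY MEMCOMPRESS FOR CAPACITY HIGH; -- smaller footprint, slower scans
```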

13. How is DML handled?

An In-Memory Column Unit (IMCU) is a chunk of rows stored in columnar form, and the number of rows may vary per IMCU.  For conventional DML, when a database block is changed, the entries in the column store are marked stale as of a given SCN.  The rowid of the data is placed in a transaction journal.  If a query scans the IM column store and the SCN is older than the query scan, the old data in the IMCU will be seen; if the SCN is newer than the query scan, the rowid in the transaction journal will be used to get the newest data from the buffer cache.  Periodically, the In-Memory Coordinator (IMCO) will ask worker processes to repopulate the stale IMCUs by building a new IMCU online.  As long as there is space in the IM column store, both versions of the IMCU will exist to prevent ORA-1555 (called double buffering).  For direct loads, if the object is marked with priority, then the extent will be brought into IMCUs at that time; if on-demand priority, the worker processes will be tasked with bringing the new data into the IMCUs when they run.  A truncate of an object marks all its related IMCUs as invalid, and the worker processes will do asynchronous garbage collection over time.

12.2 New Features

Document 2242973.1 12.2 New Feature: Database In-Memory (IM) Column Store Expressions

Document 2242985.1 12.2 New Feature: Database In-Memory (IM) FastStart

Document 2242995.1 12.2 New Feature: Database In-Memory (IM) Column Store Join Groups

Document 2211831.1 Automatic Data Optimization (ADO) Supports Database In-Memory Column Store in 12.2

Dynamic Resizing of INMEMORY_SIZE (see question 4, "Will objects age out of the IM column store, and can I increase the size of the IM column store dynamically?" above)

Database In-Memory Guide: Deploying an IM Column Store with Oracle Active Data Guard

18.1 New Features

Database SQL Tuning Guide: In-Memory Aggregation

Licensing

1. I do not have a license for the DBIM option.  How do I demonstrate it is not in use in my database in case of audit?

You can demonstrate the option is not in use by issuing the following SQL statement and seeing that the parameter has a value of 0.  You may also want to check DBA_FEATURE_USAGE_STATISTICS for historical use.

show parameter inmemory_size
select name, first_usage_date, last_usage_date from dba_feature_usage_statistics where name like 'In-%';

Monitoring

1. Is there an Advisor for the IM column store?

The In-Memory Advisor, which is licensed as part of the Database Tuning Pack, analyzes your workload and gives advice on how to configure the IM column store.  See Document 1965343.1 Oracle Database In-Memory Advisor for information about this new Advisor.

2. How can I monitor the contents of the IM column store?

Dynamic views V$IM_SEGMENTS and V$IM_USER_SEGMENTS let you see which objects are currently populated in the IM column store.  These views not only show which objects are populated in the IM column store, they also indicate how the objects are distributed across a RAC cluster and whether the entire object has been populated (BYTES_NOT_POPULATED).

It is also possible to use this view to determine the compression ratio achieved for each object populated in the IM column store, assuming the objects were not compressed on disk.

SELECT v.owner, v.segment_name,
v.bytes orig_size,
v.inmemory_size in_mem_size,
v.bytes / v.inmemory_size comp_ratio
FROM v$im_segments v;

Another view, V$IM_COLUMN_LEVEL, contains details on the columns populated into the column store, as not all columns in a table need to be populated into the column store.
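For example, to see which columns of a (hypothetical) SALES table are populated, and with what compression:

```sql
SELECT owner, table_name, column_name, inmemory_compression
FROM   v$im_column_level
WHERE  table_name = 'SALES';
```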

You may also be interested in In-Memory Central, which is part of Enterprise Manager Cloud Control 12c.  (See <Document 1986938.1> Enterprise Manager Cloud Control 12c: In-Memory Central Shows Oracle Database In-Memory Column (DBIM) Store Information.)

Advanced Compression

1. What happens if data on disk is compressed with either Basic or Advanced Compression?

If data on disk is compressed with either Basic or Advanced Compression, it is read into memory uncompressed, pivoted 90 degrees, then re-compressed per the INMEMORY compression format that you chose (or defaulted).

Data Pump (non-TTS)

1. Will objects be imported with the INMEMORY attribute?

You can import database objects that are enabled for the IM column store using the TRANSFORM=INMEMORY:y option of the impdp command. With this option, Data Pump keeps the IM column store clause for all objects that have one. When the TRANSFORM=INMEMORY:n option is specified, Data Pump drops the IM column store clause from all objects that have one.

You can also use the TRANSFORM=INMEMORY_CLAUSE:string option to override the IM column store clause for a database object in the dump file during import. For example, you can use this option to change the IM column store compression for a database object being imported.

Example (note the \ characters are necessary to "escape" the double quotes so they are not stripped by the OS):

transform=inmemory_clause:\"INMEMORY MEMCOMPRESS FOR DML PRIORITY CRITICAL\"

Information Lifecycle Management - Automatic Data Optimization (ADO)

Starting in Oracle Database 12c Release 2 (12.2), ADO manages the In-Memory (IM) column store as a new data tier. You can create policies to evict objects from the IM column store when they degrade performance, and populate objects when they improve performance. ADO manages the IM column store using Heat Map statistics.   See <Document 2211831.1> "Automatic Data Optimization (ADO) Supports Database In-Memory Column Store in 12.2" for further information.

Materialized Views

1. Can I put my materialized view logs in the IM column store?  

You should put your materialized view log in the IM column store ONLY if (1) you need the refreshes to be very fast AND (2) the volume of DML isn't too high.  If the underlying table has a lot of DML going on, having the materialized view log in the IM column store may actually slow the DML down, as we will have to maintain the log both in the IM column store and on disk.

-- this would put the materialized view log for <USER>.emp in the IM column store
alter table <USER>.mlog$_emp inmemory;

Multitenant

1. Should I set the INMEMORY_SIZE parameter at the PDB level, as well as at the CDB level?

Yes.  We recommend setting the cumulative INMEMORY_SIZE in the CDB, and then setting an INMEMORY_SIZE for each PDB that represents its portion of the total INMEMORY_SIZE in the CDB.  You can oversubscribe the INMEMORY_SIZE in a PDB, so that the sum of the INMEMORY_SIZE parameters across all the PDBs is larger than the total INMEMORY_SIZE in the CDB; if you do, there is no error, but each PDB may not get the portion of the total that you want it to have.  For example, you could have INMEMORY_SIZE = 1G in the CDB, and if you had 4 PDBs, each PDB could set INMEMORY_SIZE = 250M if you wanted it divided evenly among them.
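A sketch of that even split (the PDB name and sizes are illustrative):

```sql
-- In the root (CDB$ROOT): the total pool for the whole CDB
ALTER SYSTEM SET inmemory_size = 1G SCOPE=SPFILE;

-- In each PDB: its portion of the total
ALTER SESSION SET CONTAINER = pdb1;
ALTER SYSTEM SET inmemory_size = 250M;
```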

Optimizer and Plan Operations

1. My table is in the IM column store, so why do I see primary key (PK) access instead of TABLE ACCESS IN MEMORY FULL?

A primary-key lookup will always be costed cheaper than a full table scan.  If your query uses a PK for access, a TABLE ACCESS IN MEMORY FULL may not be costed as the cheaper plan, so the optimizer keeps the PK access.

Parallel Execution

1. Will parallel execution work with the IM column store?

Yes.  Remember, the IM column store is just a different way to store the data.  Parallel execution works just as it would with rows coming from the buffer cache or disk.

Partitioning

1. If only some of my partitions of a partitioned table are in the IM column store, will the IM column store be used? 

If a full table scan (FTS) is the only scan method, the execution plan will show a single TABLE ACCESS IN MEMORY FULL operation, which uses as much data from the IM column store as possible.  If alternative access methods exist, data can come from other sources; for example, an FTS might access what's in the IM column store while an index accesses data in the buffer cache or on disk.  Mixed access methods will still show as a TABLE ACCESS IN MEMORY FULL operation in the plan.

2. If I exchange partitions with a partitioned table that is INMEMORY, will the table that was exchanged into the partition be in the IM column store?

The INMEMORY clause is a physical-segment-attribute clause.  As with any other physical-segment attribute, the INMEMORY attribute has to be set on the exchange table prior to the exchange in order for it to be in effect once it is exchanged into the partition.  You can either alter the exchange table with the INMEMORY clause before exchange, or alter the table partition after the exchange.
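Both approaches might look like this (table and partition names are hypothetical):

```sql
-- Option 1: set the attribute on the exchange table before the exchange
ALTER TABLE sales_stage INMEMORY;
ALTER TABLE sales EXCHANGE PARTITION sales_q1 WITH TABLE sales_stage;

-- Option 2: set it on the partition after the exchange
ALTER TABLE sales MODIFY PARTITION sales_q1 INMEMORY;
```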

3. If you want to be able to define at the partition level what columns will or won't be in the column store, how can this be done?  For example, column1 in partition 1 might be INMEMORY, but column1 might not be INMEMORY for partition 2.

The inmemory_column_clause is a table-level clause and is not applicable to specific partitions; the view V$IM_COLUMN_LEVEL also specifies only the table name.  Allowing this at the partition level would mean partitions couldn't be pruned at compile time, and would carry some other adverse side-effects; there are no plans to implement such a feature.   You can add or remove columns from the IM column store at the table level by using syntax such as this:

Example:
ALTER TABLE <table_name> INMEMORY NO INMEMORY(<column_name>);  -- puts the table INMEMORY for all partitions, but excludes the named column from the IM column store for all partitions

Query Rewrite

1. If I have my primary (source) tables in the IM column store, will my query still rewrite using my materialized views?

It depends on the cost.  If the materialized view is not in the IM column store, and the optimizer calculates that it is faster to access the primary tables in the IM column store than the materialized view outside it, then the query will not rewrite.  If most of your queries against a table should rewrite, you can put the materialized views in the IM column store and leave the primary table(s) as NO INMEMORY.
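A sketch of that arrangement (object names are hypothetical):

```sql
ALTER MATERIALIZED VIEW sales_mv INMEMORY;  -- keep the MV in the IM column store
ALTER TABLE sales NO INMEMORY;              -- leave the source table out, favoring rewrite
```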

RAC

1. How is the IM column store distributed in a RAC environment?

If you are going to use the In-Memory (IM) column store in a RAC environment, you will want to read the Oracle Database In-Memory blog, written by experts in this area, to thoroughly understand how the IM column store behaves, and what parameters affect it.

Oracle Database In-Memory on RAC - Part 1 (revised)

Oracle Database In-Memory on RAC - Part 2

Oracle Database In-Memory on RAC - Part 3

Oracle Database In-Memory on RAC - Part 4

 For example, Part 1 of this blog says to use AutoDOP to take full advantage of the IM column store on RAC, and explains how the IM column store interacts with parallelism.  It also explains that setting PARALLEL_INSTANCE_GROUP to limit nodes on which parallelism runs will also limit the distribution of IMCUs in the IM column store to those nodes.

Each node in an Oracle RAC environment has its own IM column store. Oracle recommends that you equally size the IM column stores on each Oracle RAC node. For any Oracle RAC node that does not require an IM column store, set the INMEMORY_SIZE parameter to 0.

In RAC, the IMCUs are *distributed* among the IM column stores on ALL nodes as long as PARALLEL_INSTANCE_GROUP is not set.  That means that data that is in the IM column store on one node will not also be in the IM column store on another node (see exception for DUPLICATE parameter below*).  If PARALLEL_INSTANCE_GROUP is set, then the IMCUs and parallelism are restricted to the IM column stores on the nodes pointed to by the service name defined in this parameter.  You can control the type of distribution of the IMCUs across nodes with the DISTRIBUTION parameter.

We recommend you apply a patch for unpublished Bug 22013607 - "COST OF SERIAL IM FULL TABLE SCAN DOESN'T ACCOUNT FOR NECESSARY IO ON RAC," which addresses some cases where queries ran serially instead of in parallel, and thus could not access data in other nodes' IM column stores.

*DUPLICATE: It is possible to have completely different objects populated on every node, or to have larger objects distributed across all of the IM column stores in the cluster. It is also possible to have the same objects appear in the IM column store on every node (restricted to engineered systems with InfiniBand (IB), since IMCUs are not shipped over the interconnect). The distribution of objects across the IM column stores in a cluster is controlled by two additional sub-clauses to the INMEMORY attribute: DISTRIBUTE and DUPLICATE (restricted).
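To illustrate the two sub-clauses, here are hedged examples of the syntax (the table name is hypothetical; DUPLICATE is honored only on engineered systems, as noted above):

```sql
-- Spread the table's IMCUs across the IM column stores of all nodes,
-- letting the database choose the distribution method (RAC default).
ALTER TABLE sales INMEMORY DISTRIBUTE AUTO;

-- Distribute partition-wise, so each partition's IMCUs stay on one node.
ALTER TABLE sales INMEMORY DISTRIBUTE BY PARTITION;

-- Duplicate every IMCU in the IM column store on all nodes
-- (engineered systems with InfiniBand only).
ALTER TABLE sales INMEMORY DUPLICATE ALL;
```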

Also see the section Overview of In-Memory Column Store with Oracle RAC in the Oracle RAC documentation for more information about parameters and distribution methods.

Caution: Normally, you will not have PARALLEL_INSTANCE_GROUP set unless you are trying to restrict both parallelism and the IM column store to a subset of nodes. Setting PARALLEL_INSTANCE_GROUP to an invalid value will prevent the IM column store from loading anything. See note "RAC: How to control parallel execution across instances in 11g? (Doc ID 1207190.1)," which applies to 11g and later, or Part 2 of the blog mentioned above.
Note: In 12.2, we recommend using the DISTRIBUTE ... FOR SERVICE ... clause instead of the PARALLEL_INSTANCE_GROUP parameter. See Configuring an In-Memory Service for a Subset of Nodes: Example in the Oracle documentation for a good example.
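As a minimal sketch of the 12.2 approach (the table name and the service name "imsvc" are hypothetical; the service must already be defined to run on the desired subset of nodes):

```sql
-- Populate this table's IMCUs only in the IM column stores of the
-- nodes on which the service "imsvc" is active.
ALTER TABLE sales INMEMORY DISTRIBUTE FOR SERVICE imsvc;
```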

 

Transportable Tablespaces (TTS)

1.  Is there anything special that needs to be done to use TTS when a tablespace or its objects have the physical attribute of INMEMORY?

No, there is nothing special that needs to be done if you want the objects imported to the target exactly as they were exported. When the TTS import is done, the objects will be imported without error and will retain the physical attribute of INMEMORY, but the IM column store on the target will not be populated unless INMEMORY_SIZE is set to at least 100M (the minimum supported value).
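For example, to enable population on the target after the import, you could size the IM column store like this (1G is just an illustrative value; 100M is the minimum):

```sql
-- Size the IM column store on the target database; an instance
-- restart is required for the new size to take effect.
ALTER SYSTEM SET INMEMORY_SIZE = 1G SCOPE = SPFILE;
```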

Wrong Results

1. If I have wrong results, how can I narrow down the problem to confirm or eliminate the IM column store as a factor?

You can turn off use of the IM column store for a session by running the following. If the wrong results persist, the IM column store is not the cause.

alter session set INMEMORY_QUERY = disable;

Automatic In-Memory, External Table Support and Greater Performance in Oracle Database 18c

Visit the link below for details:

https://blogs.oracle.com/in-memory/oracle-database-18c-dbim

In-Memory Dynamic Scans

Visit the link below for more details:

https://blogs.oracle.com/in-memory/using-imds-part-2

REFERENCES

Oracle Database In-Memory on RAC - Part 1 (revised)
Oracle® Database In-Memory - An Oracle White Paper October 2014
Best Practices for DBIM - Oracle Open World 2015

NOTE:1962125.1 - Oracle Database - Overview of Database Patch Delivery Methods - 12.1.0.2 and older
NOTE:1937782.1 - 12.1.0.2 Database Proactive Bundle Patches / Bundle Patches for Engineered Systems and DB In-Memory - List of Fixes in each Bundle
Oracle Database In-Memory on RAC - Part 4
Oracle Database In-Memory on RAC - Part 2
Oracle Database In-Memory on RAC - Part 3
