
Wednesday, August 21, 2024

M5 Cross Endian Platform Migration using Full Transportable Data Pump Export/Import and RMAN Incremental Backups (Doc ID 2999157.1)

 


Prerequisites

High level migration workflow

Detailed migration workflow

Migration process explained

Appendix

Cross platform database migration is the process of moving databases to a new platform, including Exadata Database Machine, Exadata Cloud@Customer, Exadata Database Service, etc. This note provides a simple, reliable, and fast migration solution with minimal downtime.

The information below will guide you in performing a cross platform (big endian to little endian, vice versa, or same platform when the Data Guard option is not available) database migration.

Note:
1. This procedure only supports Oracle Database 19.18 or higher on source and destination.
2. Tablespaces SYSTEM, SYSAUX, and UNDO are excluded from the migration procedure.
3. RMAN backup encryption is not supported with this procedure.
4. This procedure supports using multi-section backups (SECTION SIZE) to perform faster backups of bigfile tablespaces.
5. This procedure supports migration from non-container database to pluggable database.
6. This procedure supports migration of pluggable databases across platforms.
7. This procedure supports migration using a physical standby as the source database.
8. This procedure supports migration of TDE encrypted database.

 

If your environment cannot meet the prerequisites of the supported version in this note, the alternative is to use V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1).

Prerequisites

  • Review MOS note Data Pump Recommended Proactive Patches For 19.10 and Above (Doc ID 2819284.1) and apply all the required fixes for the 19.18 or higher database on both source and destination.
    • Data Pump recommended fixes are required to address known issues and improve performance. This is a mandatory prerequisite.
  • Evaluate whether the below RMAN 19c database fixes apply to your migration.
    • Ensure the fix for bug 35564253 is applied on your destination database.
    • If using a physical standby database as the migration source, the fix for bug 35675516 is required on your destination database.
    • If Transparent Data Encryption (TDE) is used in the source database, the fix for bug 30817822 is required on your destination database. This fix is already included in release update 19.20.0.0 and above.
  • This procedure works with Zero Data Loss Recovery Appliance and any local storage, such as NFS.
    • When using Zero Data Loss Recovery Appliance, ensure you have the latest libra.so (ZDLRA: Where to download new sbt library (libra.so module) (Doc ID 2219812.1)).
    • When using Zero Data Loss Recovery Appliance for cross endian migration, the backups taken will not be virtualized (i.e., they are stored in RMAN format). They will be replicated to a downstream ZDLRA if configured (See Appendix Section 6).
  • The db_create_file_dest parameter must be set on the destination database.
  • Capture all RMAN and Data Pump logs for troubleshooting the migration procedure.
  • When using NFS as a backup and restore target, ensure that you have write privileges from the source database and read privileges from the destination database.
  • When using TDE, the keys should be exported from the source and imported on the destination prior to starting the migration process (See Appendix Section 5)
  • When using the Zero Data Loss Recovery Appliance as a backup and restore target for source database backups, uncomment the variable BKP_DEST_TYPE=SBT_TAPE in the properties file and comment out the variable BKP_DEST_TYPE=DISK; see the excerpt below. Also, update the variable BKP_DEST_PARM with the Zero Data Loss Recovery Appliance SBT_TAPE configuration and parameters. Only one variable for the target backup destination is accepted at a given time. Ensure that your protected database is enrolled accordingly and with a proper Recovery Window Goal; more information can be found under Configuring Recovery Appliance for Protected Database Access.
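
A hypothetical excerpt of dbmig_driver.properties with the ZDLRA target enabled and the DISK target commented out (the scan name, port, service name, and wallet location are placeholders):

#export BKP_DEST_TYPE=DISK
#export BKP_DEST_PARM=/u01/app/oracle/m5/rman
export BKP_DEST_TYPE=SBT_TAPE
export BKP_DEST_PARM="'%d_%U' PARMS \"SBT_LIBRARY=<oracle_home>/lib/libra.so,SBT_PARMS=(RA_WALLET='location=file:<oracle_home>/dbs/zdlra credential_alias=<zdlra-scan>:<port>/<service>')\""
export CAT_CRED=<zdlra-scan>:<port>/<service>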

High level migration workflow

 

  • Perform a level 0 backup on the source
  • Restore the L0 backup taken previously on the destination
  • Perform incremental backups on source and apply them on the destination. Repeat this procedure until the final incremental is taken, which requires only a minimal read-only downtime window.
  • Downtime Begins
    • Source Data Tablespaces converted to READ ONLY
    • Final Incremental backup on source
    • Metadata Tablespace Export on source
    • Apply Incremental on destination.
    • Metadata tablespace Import on destination.
  • Downtime Ends

Detailed migration workflow

 (Diagram: detailed migration flow)

 

 

Note:
1. All steps in this procedure are run as the oracle user that is a member of the OSDBA group. OS authentication is used to connect to both the source and destination databases.
2. This procedure assumes the destination database has been created on destination platform with specific requirements ("compatible" parameter, character set, database options on both source and destination database)
3. General Limitations on Transporting Data are documented here:
a. https://docs.oracle.com/en/database/oracle/oracle-database/19/admin/transporting-data.html#GUID-28800719-6CB9-4A71-95DD-4B61AA603173.
b. Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where Applicable (Note 1454872.1)
c. Primary Note for Transportable Tablespaces (TTS) -- Common Questions and Issues (Note 1166564.1)
4. Cross platform is only possible with Enterprise Edition. This procedure cannot be used with Standard Edition.
5. The source and destination databases COMPATIBLE parameter must be the same, minimum 19.0.0.0.
6. RMAN configuration on the source system must not have BACKUP TYPE TO COPY. The source must have BACKUP TYPE TO BACKUPSET.
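
Although not part of the driver scripts in this note, a standard way to pre-check the transporting-data limitations above is the DBMS_TTS self-containment check, run on the source database (the tablespace names are examples):

$ sqlplus / as sysdba
SQL> EXEC DBMS_TTS.TRANSPORT_SET_CHECK('JSXTTS_1,JSXTTS_2', TRUE);
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;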

 

Migration process explained

 

The migration relies on a driver script that reads a properties file and performs tablespace backup iterations on the source database using a 'from SCN' based on the previous backup. The backup iterations are L0 (full backup), L1 (incremental backup), and L1F (final incremental backup). See Appendix Section 2.
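
In practice, the three iteration types map to invocations of the driver script; the L0 and L1 runs can be repeated, while the L1F run happens once at cutover:

$ sh dbmig_driver_m5.sh L0     # initial full backup
$ sh dbmig_driver_m5.sh L1     # incremental backup; repeat as often as needed
$ sh dbmig_driver_m5.sh L1F    # final incremental, taken with tablespaces read only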

For ease of use, the environment variables and other important variables required by the driver script are consolidated in a properties file (see Appendix Section 1), and the list of tablespaces to be migrated has its own file (see Appendix Section 2).

For more agility and simplicity, you can set up SSH equivalence between the source and destination servers if your security policies allow. SSH equivalence allows the required scripts to be copied between the source and destination automatically.

Note: If SSH equivalence cannot be established, the RMAN command file generated under the log directory on the source database must be copied manually to the destination and executed after each backup is taken on the source database. If NFS is not used, or the source and destination do not share the directory where the backup pieces are generated, the backup pieces must be made available to the destination database, and the RMAN restore cmd file must be modified to include the new path to which the backup pieces are copied.

During the final backup, the Data Pump export file must also be copied manually from the source database to the destination database DATA_PUMP_DIR directory. If the destination is a RAC database, ensure the export file is copied to a location shared by all nodes.
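
As a hypothetical example of such a manual transfer (the destination host name and paths are placeholders; the file names follow the patterns shown later in this note):

$ scp cmd/restore_L1F_UP19_240206134155.cmd oracle@dsthost:/u01/app/oracle/m5/cmd/
$ scp /u01/app/admin/UP19/dpdump/exp_UP19_240206134155.dmp oracle@dsthost:/u01/app/oracle/m5/data_pump_dir/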

 

As previously mentioned, when using the Zero Data Loss Recovery Appliance as a backup and restore target for source database backups, the backups will not be part of the block pool, i.e., they will not be virtualized. For this reason, the backups should be stalled from being ingested. See Appendix Section 4.

 

During the migration process, the user can issue as many incremental backups as required. The idea is to increase the frequency of level 1 backups and thereby reduce their size; smaller backups are backed up, converted, and restored more quickly. When performing the final incremental level 1, the tablespaces to be migrated are put in read-only mode and the backup is taken. On the destination environment, the restore of the last backup should be performed and the impdp.sh script executed afterwards to plug in the tablespace metadata.

 

The migration can be performed from a physical standby database if one exists. This removes the load from the primary database. The migration steps are similar, except that at the last step the standby database is converted to a snapshot standby, the metadata export is performed, and the database is converted back to a physical standby. This automation uses the Data Guard broker to perform these operations.
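
When migrating from a standby (BKP_FROM_STDBY=1), the broker operations performed by the automation at the final step are equivalent to the following (the connect string and database name are placeholders):

$ dgmgrl sys@<primary-connect-string>
DGMGRL> convert database '<standby_db_unique_name>' to snapshot standby;
(the metadata export runs while the database is a snapshot standby)
DGMGRL> convert database '<standby_db_unique_name>' to physical standby;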

 

For additional guidance on migrating pluggable databases refer to Appendix Section 6.

 

The final incremental backup (L1F step) is the last step before the import. During this task, the user is asked to provide a privileged user password on the source database. This is needed to put the tablespaces being transported into read-only mode and to perform the export. On the destination, impdp.sh requires the user to provide the system password during the import of the transportable tablespaces.

 

Step-by-Step Migration

 

1. Preparation

  1. Create a working directory on an NFS share that is shared between source and destination. The path to the NFS share should be identical on both source and destination.
    1. If an NFS share is not possible, use any local storage. Create the directory on the source host and copy it to the destination host. This needs to be repeated multiple times throughout the process. The path on the source and destination host should be identical; if not, various changes are required to the scripts, which is beyond the scope of this procedure.
  2. Download DBMIG.zip (md5sum 24b5acb9e7cbe600de7547faf1e86962) to your working directory.
  3. Unzip DBMIG.zip. The file contains:
      1. dbmig_driver_m5.sh: Migration driver script.
      2. impdp.sh: Import driver script.
      3. log directory: Created at runtime. Stores the RMAN backup log, migration logs, and additional logs in addition to:
        1. rman_mig_bkp.log: The centralized migration logfile.
        2. rman_mig_bkp.lck: Lock file created at the beginning of every backup to prevent concurrent driver execution. If a driver execution aborts, the lock is not removed; the problem should be investigated and resolved before removing the lock.
      4. cmd directory: Created at runtime. Stores the RMAN backup and restore scripts in addition to:
        1. dbmig_driver.properties: Contains the environment variables required for the migration; see Appendix section 1 below for additional information. This file is user-modifiable.
        2. dbmig_ts_list.txt: Contains the list of tablespaces to be migrated; names are comma-separated without quotes; see Appendix section 2. This file is user-modifiable.
  4. Modify the variables in dbmig_driver.properties (in the cmd directory) to reflect the environment you are using; refer to Appendix section 1 for the description of the variables.
      1. If backing up to disk (BKP_DEST_TYPE=DISK), it is recommended to store the RMAN backup on the shared NFS. Set BKP_DEST_PARM to a subdirectory in the working directory.
      2. If possible, for ease of use, store the Data Pump dump file in a subdirectory in the working directory. Change DIRECTORY_PATH of the DATA_PUMP_DIR directory object.
      3. If backing up to ZDLRA (BKP_DEST_TYPE=SBT_TAPE) and you don't use a shared NFS, you can establish SSH equivalence between the source and destination servers. The script can then automatically transfer the RMAN restore scripts and Data Pump dump file. Set DEST_SERVER to the network name of the destination database. Otherwise, leave it blank.
  5. Update dbmig_ts_list.txt with the tablespace names you are migrating. See Appendix section 2 for details.
  6. Create a new, empty destination database (non-CDB or PDB). The destination database must be on the same or higher release. The destination may be on a higher patch level than the source.

2. Level 0 Backup

  1. On source, execute an initial level 0 (L0) backup of the source database:
    1. $ sh dbmig_driver_m5.sh L0
  2. If not using a shared NFS, copy the working directory to the destination host.
  3. On destination, set the environment to the destination database.
  4. The driver script creates a restore script. It is stored in the cmd directory:
    ls -ltra cmd/restore_L0_*  
  5. Restore the level 0 (L0) backup on the destination database using the latest restore script. Ensure the path to the libra.so and wallet is valid in the restore script.
    1. For non-CDB using backups on disk:
      $ rman target / cmdfile=<restore_cmdfile_name>           
    2. For non-CDB using ZDLRA:
      $ rman target / catalog /@scaningest:<port>/<service_name> cmdfile=<restore_cmdfile_name>
    3. For PDB, connect directly into the PDB:
      $ rman target sys@<pdb-connect-string> cmdfile=<restore_cmdfile_name>
  6. Check the RMAN log file:
    $ egrep "WARN-|ORA-" log/restore_*.log
  7. In case of warnings or errors in the log file, contact Oracle Support.

 

3. Level 1 Backup

  1. On source, execute a level 1 (L1) incremental backup of the source database:
    1. $ sh dbmig_driver_m5.sh L1
  2. If not using a shared NFS, copy the working directory to the destination host.
  3. On destination, set the environment to the destination database.
  4. The driver script creates a restore script. It is stored in the cmd directory:
    ls -ltra cmd/restore_L1_*  
  5. Restore the level 1 (L1) backup on the destination database using the latest restore script. Ensure the path to the libra.so and wallet is valid in the restore script.
    1. For non-CDB using backups on disk:
      $ rman target / cmdfile=<restore_cmdfile_name>
    2. For non-CDB using ZDLRA:
      $ rman target / catalog /@scaningest:<port>/<service_name> cmdfile=<restore_cmdfile_name>
    3. For PDB, connect directly into the PDB:
      $ rman target sys@<pdb-connect-string> cmdfile=<restore_cmdfile_name>
  6. Check the RMAN log file:
    $ egrep "WARN-|ORA-" log/restore_*.log
  7. In case of warnings or errors in the log file, contact Oracle Support.
  8. Repeat steps 1-6 at regular intervals until the final level 1 incremental backup.

 

4. Final Level 1 Backup

  1. Downtime starts now.
  2. On source, execute the final level 1 (L1) incremental backup of the source database:
    1. $ sh dbmig_driver_m5.sh L1F
    2. The script prompts for the system password. This is the password of the user specified in SYSTEM_USR in the properties file.
    3. Then, it sets the specified tablespaces in read-only mode.
    4. Then, it starts the final level 1 incremental backup.
    5. Finally, it starts Data Pump to perform a full transportable export.
  3. Review the Data Pump export log file.
  4. If not using a shared NFS, copy the working directory to the destination host.
  5. If desired, you can shut down the source database at this point. It is no longer needed.
  6. On destination, set the environment to the destination database.
  7. The driver script creates a restore script. It is stored in the cmd directory:
    ls -ltra cmd/restore_L1F_*  
  8. Restore the final level 1 (L1) backup on the destination database using the final restore script.
    1. For non-CDB using backups on disk:
      $ rman target / cmdfile=<restore_cmdfile_name>
    2. For non-CDB using ZDLRA:
      $ rman target / catalog /@scaningest:<port>/<service_name> cmdfile=<restore_cmdfile_name>
    3. For PDB, connect directly into the PDB:
      $ rman target sys@<pdb-connect-string> cmdfile=<restore_cmdfile_name>
  9. Check the RMAN log file:
    $ egrep "WARN-|ORA-" log/restore_*.log
  10. In case of warnings or errors in the log file, contact Oracle Support.

 

5. Data Pump Import

  1. On destination, edit the import driver script (impdp.sh). Use information about the destination database to fill in the variables
    1. ORACLE_HOME: The path to the Oracle home of the destination database.
    2. ORACLE_SID: SID of the destination database.
    3. ORACLE_CONNECT_STRING: A connect string directly into the destination non-CDB database or destination PDB.
    4. DATA_PUMP_PARALLEL: Data Pump parallel level for the import. Use if the destination database is Oracle Database 21c or later.
    5. DATA_PUMP_TRACE: Data Pump trace level. Set to 0 to disable trace.
  2. The import driver script takes four parameters:
    1. expdp_dumpfile: Name of the Data Pump dump file, e.g., exp_UP19_240206134155.dmp
    2. rman_last_restore_log: Relative path to the last RMAN restore log, e.g., log/restore_L1F_UP19_240206134155.log
    3. run-mode: One of the below:
      1. test: Generates the Data Pump parameter file. Does not start Data Pump.
      2. test-readonly: Generates the Data Pump parameter file and adds TRANSPORTABLE=KEEP_READ_ONLY to the Data Pump parameter file. Review the documentation for the usage of this parameter. Does not start Data Pump.
      3. run: Generates the Data Pump parameter file. Starts Data Pump to perform the import.
      4. run-readonly: Generates the Data Pump parameter file and adds TRANSPORTABLE=KEEP_READ_ONLY to the Data Pump parameter file. Review the documentation for the usage of this parameter. Starts Data Pump to perform the import.
    4. DP_ENC_PROMPT: One of the below:
      1. Y: Will prompt the user for an encryption password in case one was used for the export
      2. N: Will not prompt the user for an encryption password.
  3. Start the import driver script in test mode to verify the correctness of the generated Data Pump parameter file:
    1. $ sh impdp.sh <expdp_dumpfile> <rman_last_restore_log> test
  4. The driver script generates the parameter file in the current directory. Examine it.
    1. $ vi imp_<oracle_sid>_<timestamp>_xtts.par
  5. Do not change the generated Data Pump parameter file. If you need to change the parameter file, change the code in the driver script, so it generates the desired parameter file.
  6. Copy the Data Pump dump file to the directory path specified by the DATA_PUMP_DIR directory object in the destination database. Use the below query to determine the directory path. If destination is a PDB, be sure to connect to the PDB to execute the query:
    1. SQL> select directory_path from dba_directories where directory_name='DATA_PUMP_DIR';
  7. Start the import driver script in run mode to perform the Data Pump import:
    1. $ sh impdp.sh <expdp_dumpfile> <rman_last_restore_log> run
    2. The script starts by creating a guaranteed restore point. If the destination is a PDB, the script creates the restore point in the CDB (CDB-level). The restore point is only useful for testing in combination with TRANSPORTABLE=KEEP_READ_ONLY; unless you use that option, re-use of the data files is not possible.
    3. Then, it starts Data Pump to perform a full transportable import. Data Pump prompts for the password of the user executing the import.
  8. Always review the Data Pump log file. Even if the Data Pump job is reported as successful, review the log file for any critical errors.
  9. When you have determined to go live, remove the previously created restore point, as shown below.
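
The restore point name follows the before_imp_<timestamp> pattern used by impdp.sh; the timestamp below is an example:

$ sqlplus / as sysdba
SQL> select name from v$restore_point where guarantee_flashback_database = 'YES';
SQL> drop restore point BEFORE_IMP_240206134155;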

 

Note:
If a service request needs to be opened with Oracle Support for an issue related to this procedure, the following should be uploaded:

1. The whole DBMIG directory, including the sub-directories, from the source database
2. The working directory from the destination; this includes the RMAN restore files, trace files, log files, and import Data Pump parameter file(s) and log(s)
3. From both databases, the alert log files, opatch lsinventory output, and the *dm* and *dw* trace files generated in the database dump directory

 

Appendix Section 1

dbmig_driver.properties file explained

 

Parameter: Description (Default value; Modifiable)

  • ORACLE_HOME: The Oracle home of the source big endian database. (User defined; Yes)
  • ORACLE_SID: The SID of the source big endian database. (User defined; Yes)
  • SRC_SCAN: Source database Easy Connect string. (User defined; Yes)
  • MIG_PDB: Set to 1 when migrating a pluggable database, otherwise 0 for non-CDB. (User defined; Yes)
  • PDB_NAME: Pluggable database from which the tablespaces are being migrated. (User defined; Yes)
  • BKP_FROM_STDBY: Set to 1 when migrating using a STANDBY database as source, otherwise 0 when migrating from the PRIMARY. (User defined; Yes)
  • SOURCE_DPDMP: Data Pump file location on source. (User defined; Yes)
  • SYSTEM_USR: Privileged user that changes the tablespaces to read only and performs the Data Pump export. SYSTEM is recommended. (Inherited in the script; No)
  • DP_TRACE: Data Pump trace level. Use 0 to disable trace. Use 3FF0300 for full transportable tracing; see MOS Doc ID 286496.1 for details. (0; as directed by Support)
  • DP_PARALLEL: Data Pump parallel setting. Accepted values: 1 to 999. Example: 16. (1; Yes)
  • DP_ENC_PROMPT: Specifies whether Oracle Data Pump prompts you for the encryption password. (N; Yes)
  • BKP_DEST_TYPE: Location of backups. Accepted values: DISK, SBT_TAPE. Choose DISK to back up to local storage; choose SBT_TAPE to use ZDLRA. (User defined; Yes)
  • BKP_DEST_PARM: If BKP_DEST_TYPE=DISK, the backup location. Example: /u01/app/oracle/m5/rman. If BKP_DEST_TYPE=SBT_TAPE, the channel configuration. Example: "'%d_%U' PARMS \"SBT_LIBRARY=<oracle_home>/lib/libra.so,SBT_PARMS=(RA_WALLET='location=file:<oracle_home>/dbs/zdlra credential_alias=<zdlra-connect-string>')\"". (User defined; Yes)
  • CAT_CRED: If you use ZDLRA, the catalog connect string. Example: <scan-name>:<port>/<service>. (User defined; Yes)
  • SECTION_SIZE: Section size used in RMAN backups. (64G; Yes)
  • CHN: Number of RMAN channels allocated. (User defined; Yes)
  • DEST_SERVER: Network name of the destination server. Leave blank if you manually transfer backups and dump files. (User defined; Yes)
  • DEST_USER: Destination OS username. If SSH equivalence is established between the source and destination environments, passwordless SCP is used to copy files from source to destination. Example: oracle. (User defined; Yes)
  • DEST_WORKDIR: Destination working directory storing the RMAN restore scripts, logs, traces, import script, and parameter files. (User defined; Yes)
  • DEST_DPDMP: The directory path used by DATA_PUMP_DIR on the destination. Example: /u01/app/oracle/m5/data_pump_dir. (User defined; Yes)
  • WORKDIR: The working directory containing the migration script and the cmd and log directories on source. (User defined; Yes)
  • LOG_DIR: The log directory under WORKDIR holding the backup log files and others on source. (Inherited in the script; No)
  • CMD_DIR: The cmd directory under WORKDIR holding the backup and restore scripts on source. (Inherited in the script; No)
  • DT, CMD_MKDIR, CMD_TOUCH, CMD_CAT, CMD_RM, CMD_AWK, CMD_SCP: Additional variables that set the date format and resolve OS commands. (Inherited in the script; No)
  • TS: The migrated tablespace names in list format. (User defined; Yes)
  • TSF: The file containing the migrated tablespace names. (Inherited in the script; No)
  • SCN: File that keeps track of the SCN when the backup was run. (Inherited in the script; No)

 

dbmig_driver.properties sample file

 

############################################################
#Source database properties
#my_M5_prop_version=2
# -
# - ORACLE_HOME Path to Oracle Home
# - ORACLE_SID SID of the source database
# - SRC_SCAN Connect string to source database via SCAN.
# If no SCAN, specify source database network name.
# Enclose in single quotes
# Example: '@myhost-scan/db1'
# Example: '@localhost/pdb1'
# - MIG_PDB Accepted values: 0, 1
# Choose 0 if source is non-CDB
# Choose 1 if source is a PDB
# - PDB_NAME If source is a PDB, specify PDB name.
# Else leave blank
# Example: PDB1
# - BKP_FROM_STDBY Accepted values: 0, 1
# Choose 0 to back up from primary database,
# or if Data Guard is not in use.
# Choose 1 to back up from standby database.
############################################################
export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
export PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_SID=soldb1918
export SRC_SCAN='@scan1/soldb1918'
export MIG_PDB=0
export PDB_NAME=
export BKP_FROM_STDBY=0
############################################################
#Source Data Pump settings
# - SOURCE_DPDMP Directory path of the directory DATA_PUMP_DIR
# Example: /u01/app/oracle/m5/data_pump_dir
# - SOURCE_DPDIR Data Pump Directory, typically DATA_PUMP_DIR
# - SYSTEM_USR Username for Data Pump export.
# Do not use SYS AS SYSDBA
# Example: SYSTEM
# - DP_TRACE Data Pump trace level.
# Use 0 to disable trace.
# Use 3FF0300 to full transportable tracing
# See MOS Doc ID 286496.1 for details.
# - DP_PARALLEL Data Pump parallel setting.
# Accepted values: 1 to 999
# Example: 16
############################################################
export SOURCE_DPDMP=/u01/app/admin/soldb1918/dpdump
export SOURCE_DPDIR=DATA_PUMP_DIR
export SYSTEM_USR=SYSTEM
export DP_TRACE=0
export DP_PARALLEL=1
export DP_ENC_PROMPT=N
############################################################
#Source RMAN settings
# - BKP_DEST_TYPE Accepted values: DISK, SBT_TAPE
# Choose DISK to backup up to local storage
# Choose SBT_TAPE to use ZDLRA
# - BKP_DEST_PARM If BKP_DEST_TYPE=DISK, enter location for backup:
# Example: /u01/app/oracle/m5/rman
# If BKP_DEST_TYPE=SBT_TAPE, enter channel configuration:
# Example: "'%d_%U' PARMS \"SBT_LIBRARY=<oracle_home>/lib/libra.so,SBT_PARMS=(RA_WALLET='location=file:<oracle_home>/dbs/zdlra credential_alias=<zdlra-connect-string>')\""
# - CAT_CRED If you use RMAN catalog or ZDLRA, specify connect string to catalog database
# Example: <scan-name>:<port>/<service>
# - SECTION_SIZE Section size used in RMAN backups
# - CHN Number of RMAN channels allocated
############################################################
export BKP_DEST_TYPE=DISK
export BKP_DEST_PARM=/SHAREDY/DBMIGBKP/SOLDB1918
export CAT_CRED=
export SECTION_SIZE=64G
export CHN=8
############################################################
#Destination host settings
#If specified, the script transfers the RMAN backups and
#Data Pump dump file to the destination via over SSH.
#SSH equivalence is required.
# - DEST_SERVER Network name of the destination server.
# Leave blank if you manually transfer
# backups and dump files
# - DEST_USER User for SSH connection
# Example: oracle
# - DEST_WORKDIR The script working directory on destination
# Example: /u01/app/oracle/m5
# - DEST_DPDMP The directory path used by DATA_PUMP_DIR
# in destination database
# Example: /u01/app/oracle/m5/data_pump_dir
############################################################
export DEST_SERVER=srvadm01
export DEST_USER=oracle
export DEST_WORKDIR=/SHAREDY/STAGE/CROSSPLATMIG/PRIMARY
export DEST_DPDMP=/u01/app/oracle/product/19.0.0.0/dbhome_13/rdbms/log

############################################################
#Advanced settings
#Normally, you don't need to edit this section
############################################################
export WORKDIR=$PWD
export LOG_DIR=${WORKDIR}/log
export CMD_DIR=${WORKDIR}/cmd
export PATH=$PATH:$ORACLE_HOME/bin
export DT=`date +%y%m%d%H%M%S`
export CMD_MKDIR=`which mkdir`
export CMD_TOUCH=`which touch`
export CMD_CAT=`which cat`
export CMD_RM=`which rm`
export CMD_AWK=`which awk`
export CMD_SCP=`which scp`
export CMD_CUT=`which cut`
export CMD_PLATFORM=`uname`
if [[ "$CMD_PLATFORM" = "Linux" ]]; then
  export CMD_GREP="/usr/bin/grep"
else
  if [[ "$CMD_PLATFORM" = "AIX" ]]; then
    export CMD_GREP="/usr/bin/grep"
  else
    if [[ "$CMD_PLATFORM" = "HPUX" ]]; then
      export CMD_GREP="/usr/bin/grep"
    else
      export CMD_GREP=`which ggrep`
    fi
  fi
fi
export my_M5_prop_version=2

 

Appendix Section 2

Structure of the dbmig_ts_list.txt file

This file contains the names of the tablespaces to be migrated; the names should be separated by commas only. See the sample below.

$ cat dbmig_ts_list.txt

 

JSXTTS_1,JSXTTS_2,JSXTTS_3,JSXTTS_4,JSXTTS_5,JSXTTS_6,JSXTTS_7,JSXTTS_8,JSXTTS_9,JSXTTS_10

 

 

Appendix Section 3

The file impdp.sh should be copied to the destination working directory.

Export the ORACLE_HOME, ORACLE_SID and ORACLE_PDB_SID (if required) to reflect the database you are migrating to, and enter the password for the user performing the import when prompted. The script requires four parameters: the export dump file name (displayed during the last L1 backup on the source), the last RMAN restore log, the run mode, and the encryption password prompt flag (Y or N). The run mode allows a test run to check the import parameter file for correctness before executing a real run with the run parameter.

 

impdp.sh runs as follows:

$ sh impdp.sh <exp.dmp> <log/res_L1F.log> [test|test-readonly|run-readonly|run] <encryption_pwd_prompt[Y|N]>

 

Run mode explained.

 

test

Runs the script and produces an imp_${ORACLE_SID}_xtts.par file without the parameter TRANSPORTABLE=KEEP_READ_ONLY. Check the imp_${ORACLE_SID}_xtts.par file for correctness.

$ sh impdp.sh <exp.dmp> <res.log> test

test-readonly

Runs the script and produces an imp_${ORACLE_SID}_xtts.par file with the parameter TRANSPORTABLE=KEEP_READ_ONLY. Check the imp_${ORACLE_SID}_xtts.par file for correctness.

run

Runs the script, produces an imp_${ORACLE_SID}_xtts.par file without the parameter TRANSPORTABLE=KEEP_READ_ONLY, generates a guaranteed restore point, and performs the import.

run-readonly

Runs the script, produces an imp_${ORACLE_SID}_xtts.par file with the parameter TRANSPORTABLE=KEEP_READ_ONLY, generates a guaranteed restore point, and performs the import.

 

 

$ cat impdp.sh

#!/bin/bash

if [ $# -ne 4 ]; then
        echo "Please call this script using the syntax $0 "
        echo "Example: # sh impdp.sh <expdp_dumpfile> <rman_last_restore_log> [run|test|run-readonly|test-readonly] <encryption_pwd_prompt[Y|N]>"
        exit 1
fi

#Full path to Oracle Home
export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_13
export PATH=$PATH:$ORACLE_HOME/bin
#SID of the destination database
export ORACLE_SID=entain1
#Connect string to destination database. If PDB, connect directly into PDB
export ORACLE_CONNECT_STRING=entain
#Data Pump directory
export DATA_PUMP_DIR=DATA_PUMP_DIR
#Data Pump parallel setting
export DATA_PUMP_PARALLEL=1
#Data Pump trace level. 0 to disable. 3FF0300 for transportable tablespace trace
export DATA_PUMP_TRACE=0

############################################################
#Normally, do not edit below this line
############################################################

export PATH=$PATH:$ORACLE_HOME/bin
export DT=`date +%y%m%d%H%M%S`
export EXPDP_DMP=${1}
export RES_LOG=${2}
export RUN_MODE=${3}
export DP_ENC_PROMPT=${4}

#Build the Data Pump parameter file
echo "userid='system@${ORACLE_CONNECT_STRING}'" > imp_${ORACLE_SID}_${DT}_xtts.par
echo "dumpfile=${EXPDP_DMP}" >> imp_${ORACLE_SID}_${DT}_xtts.par
echo "directory=${DATA_PUMP_DIR}" >> imp_${ORACLE_SID}_${DT}_xtts.par
echo "LOGTIME=ALL" >> imp_${ORACLE_SID}_${DT}_xtts.par
echo "TRACE=${DATA_PUMP_TRACE}" >> imp_${ORACLE_SID}_${DT}_xtts.par
echo "PARALLEL=${DATA_PUMP_PARALLEL}" >> imp_${ORACLE_SID}_${DT}_xtts.par
echo "LOGFILE=imp_${ORACLE_SID}_${DT}_xtts.log" >> imp_${ORACLE_SID}_${DT}_xtts.par
echo "METRICS=YES" >> imp_${ORACLE_SID}_${DT}_xtts.par

case ${DP_ENC_PROMPT} in
     "Y")
        echo "ENCRYPTION_PWD_PROMPT=YES" >> imp_${ORACLE_SID}_${DT}_xtts.par
        ;;
     "N")
        echo "ENCRYPTION_PWD_PROMPT=NO" >> imp_${ORACLE_SID}_${DT}_xtts.par
        ;;
      *)
        ;;
esac

echo "TRANSPORT_DATAFILES=" >> imp_${ORACLE_SID}_${DT}_xtts.par

#Collect the foreign datafile names from the last RMAN restore log
input=$(grep "restoring foreign" ${RES_LOG} | awk '{print $9}' | sort | uniq)

IFS=$'\n' read -rd '' -a input_list <<< "$input"
output_list=()
for item in "${input_list[@]}"; do
    output_list+=("'$item',")
done

#Strip the trailing comma from the last datafile entry
last_index=$(( ${#output_list[@]} - 1 ))
output_list["$last_index"]="${output_list[last_index]%,}"

for item in "${output_list[@]}"; do
    echo "$item" >> imp_${ORACLE_SID}_${DT}_xtts.par
done

#Create a guaranteed restore point before the import
function gen_grp {
${ORACLE_HOME}/bin/sqlplus -s /nolog << EOF
                connect / as sysdba
                set lines 333
                col time for a48
                col restore_point_time for a24
                col name for a24
                spool ${PWD}/dbmig_grp.lst
                create restore point before_imp_${DT} guarantee flashback database;
                select NAME, SCN, TIME, GUARANTEE_FLASHBACK_DATABASE, STORAGE_SIZE FROM V\$RESTORE_POINT WHERE GUARANTEE_FLASHBACK_DATABASE='YES';
                spool off;
                exit
EOF
}

case ${RUN_MODE} in
              "run")
                echo "Running in Run mode"
                gen_grp
                impdp parfile=imp_${ORACLE_SID}_${DT}_xtts.par
              ;;
              "run-readonly")
                echo "Running in Read Only, for the final run you must flashback the database to the restore point created now"
                gen_grp
                awk 'NR==3 {print "TRANSPORTABLE=KEEP_READ_ONLY"} {print}' imp_${ORACLE_SID}_${DT}_xtts.par > tmp_file && mv tmp_file imp_${ORACLE_SID}_${DT}_xtts.par
                impdp parfile=imp_${ORACLE_SID}_${DT}_xtts.par
              ;;
              "test")
                echo "Running in test mode, check par file for correctness"
              ;;
              "test-readonly")
                echo "Running in test mode with read only, check par file for correctness"
                awk 'NR==3 {print "TRANSPORTABLE=KEEP_READ_ONLY"} {print}' imp_${ORACLE_SID}_${DT}_xtts.par > tmp_file && mv tmp_file imp_${ORACLE_SID}_${DT}_xtts.par
              ;;
              *)
                echo "Unknown run mode, options are: run, test, run-readonly, test-readonly"
esac

 

Appendix Section 4

 

When using the Zero Data Loss Recovery Appliance as a backup and restore target, the backups from the source databases should be stalled (i.e., blocked) from being indexed.

 

To do so, identify your database DB_KEY by querying the RA_DATABASE view using the DB_UNIQUE_NAME in the Zero Data Loss Recovery Appliance database.
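
For example (the DB_UNIQUE_NAME value is a placeholder):

SQL> SELECT db_key FROM ra_database WHERE db_unique_name = '<DB_UNIQUE_NAME>';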

 

As user RASYS in the Zero Data Loss Recovery Appliance database create a stall_when view like the one below:

 

SQL> create or replace view RAI_STALL_WHEN
     AS
     SELECT task_id
       FROM task
      WHERE task_type = 140
        AND db_key = <DB_KEY>;

 

As user RASYS on the Zero Data Loss Recovery Appliance database enable the stall_when parameter:

 

 SQL> EXEC DBMS_RA_INT.config(p_name=>'_stall_when', p_value=>'ON', comments=>'Enabled for Cross Endian Migration');

 

When the migration is finished, the stall_when flag should be removed and the view dropped. This will trigger the INDEX_BACKUP task on the ZDLRA and virtualize the backups.

 

SQL> DROP view RAI_STALL_WHEN;

SQL> EXEC DBMS_RA_INT.config(p_name=>'_stall_when', p_value=>'OFF', comments=>'Disabled for Cross Endian Migration');

 

 

Note: Prior to creating the RAI_STALL_WHEN view, verify if the _stall_when flag has already been set for any other reason. If the flag was already set, do not proceed with the migration and contact Oracle Support.

 

Note: If the compliance mode is implemented on the ZDLRA, the use of dbms_ra_int is considered a "break glass".

 

Note: If you are using block change tracking, then because backups to the ZDLRA are not being virtualized, backup performance can be impacted if the number of backups (L0 + L1s) is higher than the default value (8) of "_bct_bitmaps_per_file".
Reference: RMAN incremental level 1 backups are not using block change tracking file (Doc ID 1192652.1).

 

Appendix Section 5

 

When using TDE, the encryption keys should be exported from the source database keystore and imported on the destination database keystore ahead of the migration.

 

  1. On the source big endian system, export the TDE encryption keys as a user who has been granted the ADMINISTER KEY MANAGEMENT or SYSKM privilege. Protect the exported file as it contains sensitive information.

$ sqlplus / as sysdba
SQL> ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS FORCE WITH SECRET "new_password_to_encrypt_export_file" TO 'exportkeys.exp' IDENTIFIED BY "existing_keystore_password";

Keystore altered

 o Check the KEY_ID from the V$ENCRYPTION_KEYS view:


$ sqlplus / as sysdba
SQL> SELECT KEY_ID FROM V$ENCRYPTION_KEYS;

KEY_ID
------------------------------------------------------------------------------
AbWpl9Eq2U8Ev22+H6Ufy14AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AVZ8DckBpk8xv+9IzzfKvoMAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

 o Copy the encrypted export file 'exportkeys.exp' to the Linux destination:

$ scp exportkeys.exp user@server:/path

 2. On the Linux destination system, import the TDE master encryption key from the file generated in step 1:

$ sqlplus / as sysdba
SQL> ADMINISTER KEY MANAGEMENT IMPORT KEYS WITH SECRET "Password_used_in_step1_export" FROM 'exportkeys.exp' IDENTIFIED BY "keystore_password_used_on_destination_db" WITH BACKUP;

Keystore altered

 o Check the KEY_ID from the V$ENCRYPTION_KEYS view:

$ sqlplus / as sysdba
SQL> SELECT KEY_ID FROM V$ENCRYPTION_KEYS;

KEY_ID
------------------------------------------------------------------------------
AXj7ZZ4Lq08Jv4pzTBn8/DoAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AVZ8DckBpk8xv+9IzzfKvoMAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AbWpl9Eq2U8Ev22+H6Ufy14AAAAAAAAAAAAAAAAAAAAAAAAAAAAA

 o To activate the TDE keys previously imported from the source:

$ sqlplus / as sysdba
SQL> ADMINISTER KEY MANAGEMENT USE KEY <KEY_ID> USING TAG 'Activate Key on target' FORCE KEYSTORE IDENTIFIED BY "keystore_password_used_on_destination_db" WITH BACKUP;

Keystore altered

 

Appendix Section 6

This note supports migrating a pluggable database from a big endian source container database to a little endian destination container database.

 

In dbmig_driver.properties, set MIG_PDB to 1 and PDB_NAME to the name of the pluggable database to be migrated, as in the excerpt below. The file dbmig_ts_list.txt should contain the list of tablespaces to be migrated. During the migration process, the automation creates a file called dbmig_ts_list.txt.pdb; this file contains the name of the pluggable database appended to the names of the tablespaces to be migrated, and its content is used when building the RMAN backup scripts.
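
A hypothetical excerpt of dbmig_driver.properties for a PDB migration (the PDB name is an example):

export MIG_PDB=1
export PDB_NAME=PDB1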

 

The restore operations for the L0 and L1 backups are the same for every migration. For a pluggable database migration, to ensure the tablespaces being transported are plugged into the correct destination pluggable database, the connection string in impdp.sh should be adapted accordingly.

 

During the restore operation, RMAN creates the datafiles in an ASM sub-directory on the destination which is different from the directory used by the pluggable database. If you want to move the files into the correct destination of the pluggable database, mount the pluggable database and perform an RMAN 'BACKUP AS COPY' specifying the data diskgroup in the FORMAT clause. Once the backup is complete, you can perform an RMAN 'SWITCH DATAFILE TO COPY' command to update the controlfile with the new, correct pluggable database directory.
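
A minimal sketch of that relocation, assuming a PDB named PDB1, a data diskgroup +DATA, and illustrative datafile numbers:

$ rman target /
RMAN> alter pluggable database PDB1 close;
RMAN> backup as copy pluggable database PDB1 format '+DATA';
RMAN> switch datafile 12 to copy;
RMAN> switch datafile 13 to copy;
RMAN> alter pluggable database PDB1 open;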

Friday, April 12, 2024

Oracle E-Business Suite Release 12.2 System Schema Migration

In This Document

Section 1: Overview of the EBS System Schema

The Release 12.2 database architecture has been modernized by adoption of the Oracle E-Business Suite System Schema, EBS_SYSTEM. Prior to the introduction of the EBS_SYSTEM schema, Oracle E-Business Suite installed application objects in the Oracle Database SYS and SYSTEM schemas. Migration to the EBS System Schema obviates the need for any EBS-owned objects to reside in the SYS or SYSTEM schemas.

Key characteristics of the EBS System Schema include:

  • Creation of the EBS_SYSTEM schema and associated grant management is performed as follows:
    1. Creation of the EBS_SYSTEM schema is performed by SYS running the adgrants.sql script (supplying the APPS account as the parameter) before applying the AD-TXK Delta 13 (or later) RUPs.
    2. Grants required by the APPS account are run automatically by the AD-TXK Delta 13 (or later) RUP installation process.
  • All EBS database objects that currently reside in the SYS or SYSTEM schemas are migrated to appropriate Oracle E-Business Suite schemas. Depending upon the EBS object type and function, the object is migrated to EBS_SYSTEM, APPS, or APPS_NE.

  • All Oracle E-Business Suite administration actions (such as running adop, adadmin and other utilities) are now performed by EBS_SYSTEM.

  • Access to the Oracle database SYS and SYSTEM and the Oracle database server operating system is no longer required for Oracle E-Business Suite system administrative functions.

  • If any grants need to be fixed after the AD-TXK Delta 13 (or later) RUPs are applied, refer to My Oracle Support Knowledge Document 2758993.1, Managing Database Privileges in Oracle E-Business Suite Release 12.2 (Running adgrants.sql).

Key benefits of migrating to the EBS System Schema include support for the following features:

  • Public Oracle Database APIs
  • Least Privileges Model for database object access
  • Separation of Duties for administrators
  • Database service names for application tier database connections
  • Oracle Database Unified Auditing
  • Easier interoperability across Oracle Database releases

Diagram 1 - The modernized Oracle E-Business Suite database and its key features.


Each of these will be discussed in the subsections below. Be aware that numbering of the subsections is for ease of reference only, and is not intended to reflect any set order of importance.

1.1 Public Oracle Database APIs

As part of the Oracle E-Business Suite System Schema Migration, all Oracle E-Business Suite code is updated to map to public Oracle database dictionary objects and APIs. Utilizing public Oracle database APIs provides further capability to lock down EBS runtime accounts.

1.2 Least Privileges Model for Database Object Access

With the migration to the EBS_SYSTEM schema and usage of public Oracle Database APIs, runtime accounts may be constrained even further. As part of this feature, unnecessary privileges are revoked from Oracle E-Business Suite application accounts.

1.3 Separation of Duties for Administrators

Migration to the EBS_SYSTEM schema makes it possible to separate the role of the Oracle E-Business Suite system administrators from database administrators. All Oracle E-Business Suite administration actions (such as running adop, adadmin, and other utilities) will now prompt for the EBS_SYSTEM password instead of the SYSTEM password. Highly privileged operations that were previously run by the SYS or SYSTEM accounts are now run by EBS_SYSTEM.

Access to the Oracle database SYS and SYSTEM and the Oracle database server operating system is no longer required for Oracle E-Business Suite system administration functions. Database patching may be performed by the Oracle database administrator, and Oracle E-Business Suite patching may be performed by the Oracle E-Business Suite system administrator or applications database administrator (DBA).

The passwords for EBS_SYSTEM and SYSTEM must match until after the Completion Patch is successfully applied. Once the Completion Patch has been successfully applied, the password for EBS_SYSTEM should be changed to be different from the SYSTEM schema password.

1.4 Database Service Names for Application Tier Database Connections

As part of modernizing the Oracle E-Business Suite, connections from the Oracle E-Business Suite application tier to the Oracle E-Business Suite database are now performed using database service names.

1.5 Support for Oracle Database Unified Auditing

Once all requirements are met, Oracle E-Business Suite customers are able to utilize Unified Auditing, the latest method for auditing an Oracle Database. With Unified Auditing, audit data is combined into a single audit trail. A new schema, AUDSYS, is used for storing the Unified Audit Trail. Separation of duties is achieved with multiple database roles for configuring auditing and viewing the audit data.

For additional information, refer to My Oracle Support Knowledge Document 2777404.1, Enabling Unified Auditing in Oracle E-Business Suite Release 12.2 with Oracle Database 19c or 12c.

1.6 Streamline Database Directory Objects

Following migration to EBS System Schema (EBS_SYSTEM), the APPS schema will no longer have the ability to create database directory objects. Database directory objects are now created by the EBS_SYSTEM user.

The following four new standard database directory objects are created with the privileges shown:

Object Name: Privileges
  • EBS_LOG: Read/Write
  • EBS_TEMP: Read/Write
  • EBS_INBOUND: Read
  • EBS_OUTBOUND: Write

By default, the database directory objects are mapped to a temporary directory in the $ORACLE_HOME on the database tier file system. If a large number of files are written to this directory, your $ORACLE_HOME database tier file system may reach capacity; it is therefore recommended that you instead map the database directory objects to a secure location on a separate mount point from your $ORACLE_HOME database tier file system. For more information, refer to the Oracle Database documentation for your specific database release.
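
For instance, a directory object could be remapped by connecting as EBS_SYSTEM and recreating it (the path below is a placeholder):

sqlplus EBS_SYSTEM/<EBS_SYSTEM PASSWORD>
SQL> create or replace directory EBS_LOG as '/u05/oracle/ebs_log';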

1.7 Interoperability Across Oracle Database Releases

Uptake of new database releases by Oracle E-Business Suite is made easier because EBS references only public database views and APIs.

Section 2: Requirements for Using the EBS System Schema

2.1 Prerequisites

To use the EBS System Schema, you need to be on Oracle E-Business Suite Release 12.2.3 or later.

2.2 Mandatory Patches and Steps

You must have the following release update packs (RUPs) and other patches available, ready to apply in Section 3, in the order shown here.

  1. AD and TXK Delta 13 (or later) — Support for EBS System Schema Migration was introduced in the Oracle Applications DBA (AD) and Oracle E-Business Suite Technology Stack (TXK) Delta 13 release update packs. Earlier releases of AD and TXK included database objects owned by AD or TXK that resided in the Oracle Database SYS or SYSTEM schemas, and also included references to private Oracle database SYS or SYSTEM objects.

    With AD and TXK Delta 13 (or later), EBS database objects are migrated from SYS and SYSTEM into the EBS_SYSTEM, APPS, or APPS_NE schemas. All relevant object references are updated to point to the migrated objects in the APPS or APPS_NE schemas.

    AD and TXK Delta 13 (or later) also update all code that previously referenced private Oracle Database SYS and SYSTEM objects, to reference Oracle Database approved public objects or public APIs instead.

  2. EBS Release 12.2.11 (or later), or EBS Releases 12.2.3 to 12.2.10 plus EBS System Schema Migration Consolidated Patch — Oracle E-Business Suite Releases 12.2.3 to 12.2.10 include EBS-owned database objects that reside in the database SYS or SYSTEM schemas, or reference database SYS or SYSTEM objects. The EBS System Schema Migration Consolidated Patch (also referred to as the Consolidated Patch) migrates these objects to the appropriate EBS schemas. Note that EBS Release 12.2.11 (and later) include the Consolidated Patch.

    Note: Oracle E-Business Suite Release 12.2.11 (and later) do not install EBS objects in the SYS or SYSTEM schemas, and therefore do not require the EBS System Schema Migration Consolidated Patch.

  3. Before proceeding with application of the final patch in Step 4, you must update any customizations that have been identified with mandatory compliance requirements. For guidance, refer to My Oracle Support Knowledge Document 1577661.1, Developing and Deploying Customizations in Oracle E-Business Suite Release 12.2.

  4. EBS System Schema Migration Completion Patch — Although it is possible to apply AD-TXK Delta 13 and the EBS System Schema Migration Consolidated Patch without applying the EBS System Schema Migration Completion Patch immediately afterwards, the migration is not finished until the Completion Patch has been applied. This patch drops AD, TXK, and EBS product code in the database SYS or SYSTEM schemas, and removes any final restricted references to database SYS or SYSTEM objects.

    Prior to applying the Completion Patch, you must apply the following prerequisite patches:
    • AD-TXK Delta 13 (or later)
    • Either the EBS Release 12.2.11 (or later) RUP or the EBS System Schema Migration Consolidated Patch

    The Completion Patch will fail to apply if you do not apply these patches first.

    Diagram 2 - The four major requirements for EBS System Schema Migration.

    Note: The principles shown in the diagram also apply to AD and TXK RUPs later than Delta 13.



    Note: Until the Oracle E-Business Suite System Schema Migration Completion Patch (also referred to as the Completion Patch) has been applied, the EBS_SYSTEM and Oracle Database SYSTEM schema passwords must be identical. This requirement is for backward compatibility, until all EBS code has been migrated from the SYS or SYSTEM schemas. Diagram 3 depicts when the EBS_SYSTEM and SYSTEM passwords must be identical.

    Diagram 3 - The need for the EBS_SYSTEM and SYSTEM passwords to match.

    Note: The principles shown in the diagram also apply to AD and TXK RUPs later than Delta 13.


    Note: After the Completion Patch has been applied, the EBS_SYSTEM and Oracle Database SYSTEM passwords can be different, and we recommend that they are. Diagram 4 depicts when it is recommended to update the EBS_SYSTEM and SYSTEM passwords to unique values.

    Diagram 4 - The need to update the EBS_SYSTEM and SYSTEM passwords to have unique (different) values.

    Note: The principles shown in the diagram also apply to AD and TXK RUPs later than Delta 13.


Section 3: Migrating to the EBS System Schema

Migration consists of several stages, as summarized below.

By performing the applicable steps in this section, you will:

  1. Apply the latest AD-TXK RUP.
  2. Either:
    • Apply the EBS Release 12.2.11 (or later) RUP.
      or:
    • If remaining on any EBS release from 12.2.3 to 12.2.10 inclusive, apply the EBS System Schema Migration Consolidated Patch.
  3. Update any customizations reported with mandatory compliance requirements for the EBS System Schema migration.
  4. Apply the EBS System Schema Migration Completion Patch.

3.1 Upgrade Oracle E-Business Suite to Release 12.2.3 or Later

If applicable, upgrade your Oracle E-Business Suite environment to at least Release 12.2.3. We highly recommend that you upgrade to the latest available EBS release RUP. For more information, refer to My Oracle Support Knowledge Document 1320300.1, Oracle E-Business Suite Release Notes, Release 12.2.


3.2 Apply Latest AD-TXK RUPs

Apply the latest release update packs for AD and TXK for Release 12.2 by following the instructions in My Oracle Support Knowledge Document 1617461.1, Applying the Latest AD and TXK Release Update Packs to Oracle E-Business Suite Release 12.2. AD and TXK introduced support for EBS System Schema Migration starting with R12.AD.C.Delta.13 and R12.TXK.C.Delta.13.

Note: Oracle strongly recommends that you always apply the latest version of the AD and TXK RUPs at the earliest opportunity.

3.3 Apply Either EBS Release 12.2.11 (or later), or Consolidated Patch for Releases 12.2.3 to 12.2.10

If you are on an Oracle E-Business Suite release from 12.2.3 to 12.2.10 inclusive, we recommend upgrading to Release 12.2.11 by applying the EBS 12.2.11 (or later) RUP. If you are not currently planning to upgrade to Release 12.2.11 (or later), you must apply the EBS System Schema Migration Consolidated Patch, Patch 31817501. For instructions, refer to My Oracle Support Knowledge Document 2774309.1, Applying the Oracle E-Business Suite Consolidated Patch for EBS System Schema Migration.

3.4 Remove Customization Dependencies on Database SYS or SYSTEM Objects

3.4.1 Mandatory Steps

Run the Oracle E-Business Suite System Schema Migration Compliance Checker report, ADSYSCC.sql, and remove or update any of your customizations that would give errors on running the EBS System Schema Migration Completion Patch. For detailed instructions, refer to My Oracle Support Knowledge Document 1577661.1, Developing and Deploying Customizations in Oracle E-Business Suite Release 12.2.

3.4.2 Optional Steps

Run the Oracle E-Business Suite System Schema Migration Compliance Checker report, ADSYSCC.sql, and remove or update any of your customizations that are identified with mandatory compliance requirements and would result in invalid database objects for your customizations.

3.5 Apply EBS System Schema Migration Completion Patch

Before the Oracle E-Business Suite System Schema Migration Completion Patch ("Completion Patch") can be applied, all EBS code must have been migrated from the SYS or SYSTEM schemas to the APPS or APPS_NE schemas. To finish the migration process, the EBS System Schema Migration Completion Patch will drop all EBS code that resides in the SYS or SYSTEM accounts.

Consequently, before you apply the Completion Patch, any custom code that points to EBS-defined objects in Oracle Database system accounts must be updated to point to the appropriate migrated EBS-defined objects in EBS-defined schemas. Failure to do so will result in invalid database objects for your custom code.

Apply the Oracle E-Business Suite System Schema Migration Completion Patch (Patch 32573930) by following the steps below.

Note: If you have previously applied the Completion Patch as per the steps in this section, do not re-perform the steps.
  1. Source the run edition applications environment:

    UNIX:
    $ . <INSTALL_BASE>/EBSapps.env run
    Windows:
    C:\> <INSTALL_BASE>\EBSapps.cmd run
  2. Before proceeding with application of the Completion Patch, perform a full adop cleanup by running the following command:
    adop phase=cleanup cleanup_mode=full
  3. Run the EBS System Schema Migration Compliance Checker, ADSYSCC.sql. When prompted, enter the APPS schema password.
    sqlplus <APPS schema name>@<TWO_TASK> @$AD_TOP/sql/ADSYSCC.sql
    This utility reports all violations of the EBS System Schema Migration Standards. Any EBS violations are fixed by both the EBS Release 12.2.11 (or later) RUP and the EBS System Schema Migration Consolidated Patch. If the report for your environment includes EBS violations after you have applied all mandatory patches (including the Completion Patch), you should log a service request with Oracle Support.

    You must fix any mandatory compliance violations listed in this report that are part of your custom code. For detailed instructions, refer to My Oracle Support Knowledge Document 1577661.1, Developing and Deploying Customizations in Oracle E-Business Suite Release 12.2.

    Note: If you proceed with applying the Completion Patch when there are reported violations with custom code, you will have invalid database objects until your custom code is fixed.

    If after applying the completion patch there are reported violations with EBS code, log a service request with Oracle Support to obtain assistance.

  4. (Conditional) If you have integrated EBS with Oracle Database Vault, disable all existing Oracle Database Vault realms by navigating to the $FND_TOP/patch/115/sql directory and running the fnddbvstatus.sql script as follows:
    SQL> @fnddbvstatus.sql DISABLE
    For more information, refer to Appendix A, Management Scripts, in the following My Oracle Support knowledge documents:
    • Document 2131435.1, Integrating Oracle E-Business Suite Release 12.2 with Oracle Database Vault 12c
    • Document 2617770.1, Integrating Oracle E-Business Suite Release 12.2 with Oracle Database Vault 19c

  5. Apply the Completion Patch by running the following adop command, as shown in this UNIX example:
    adop phase=apply patches=32573930 apply_mode=hotpatch options=forceapply,nocopyportion,nogenerateportion
    Note: You may encounter errors if you attempt to apply the Completion Patch and have specified different passwords for the EBS_SYSTEM and SYSTEM schemas. To apply the Completion Patch successfully, the SYSTEM and EBS_SYSTEM passwords must be the same.

  6. Run the adsysmigclnup.sql cleanup script on the database tier by performing substeps a to e below. This will drop AD objects from the SYS schema and revoke the associated privileges.

    a. Create <ORACLE_HOME>/appsutil/admin on the database server.

    b. Copy <AD_TOP>/patch/115/sql/adsysmigclnup.sql to the database tier <ORACLE_HOME>/appsutil/admin directory using the appropriate command for your system:

      UNIX:
      cp $AD_TOP/patch/115/sql/adsysmigclnup.sql $ORACLE_HOME/appsutil/admin
      Windows:
      C:\> copy %AD_TOP%\patch\115\sql\adsysmigclnup.sql %ORACLE_HOME%\appsutil\admin

    c. Set the environment to point to the $ORACLE_HOME on the database server.

      Note: If the database is multitenant, source using the following steps as applicable for your environment.

      UNIX:
      $ source $ORACLE_HOME/<CDB_NAME>_<NODE_NAME>.env
      $ export ORACLE_PDB_SID=<EBS_PDB_Name>
      Windows:
      C:\> %ORACLE_HOME%\<EBS_PDB_Name>_<NODE_NAME>.cmd

    d. Use SQL*Plus to run the cleanup script as follows.

      UNIX:
      sqlplus /nolog
      SQL> @$ORACLE_HOME/appsutil/admin/adsysmigclnup.sql <APPS schema name>
      Windows:
      C:\> sqlplus /nolog
      SQL> @%ORACLE_HOME%\appsutil\admin\adsysmigclnup.sql <APPS schema name>

      Note: If you find new invalid objects after performing the above steps, compile the invalid objects by running $ORACLE_HOME/rdbms/admin/utlrp.sql as the SYS user.

      Note: If you have Oracle Database Vault enabled, you may see an error message like the following in the report produced after the cleanup script runs:

      revoke READ on SYS.DUAL from "DVSYS" ERROR ORA-01031: insufficient privileges
      Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

      Such errors can be safely ignored.

    e. (Conditional) If you are using a multitenant database on UNIX and set ORACLE_PDB_SID in substep c, you must unset it now:
      unset ORACLE_PDB_SID
  7. After you have applied the Completion Patch, you should change the EBS_SYSTEM and SYSTEM passwords by performing substeps a to c below. We recommend that you set the two passwords to unique values (i.e. different from each other).

    a. On the application tier, source the environment using the appropriate command for your system:

      UNIX:
      $ . <INSTALL_BASE>/EBSapps.env run
      Windows:
      C:\> <INSTALL_BASE>\EBSapps.cmd run

      Still on the application tier, change the EBS_SYSTEM schema password to be different from that of the SYSTEM schema password, and provide the new password only to DBAs who specifically require access to the SYSTEM account:
      sqlplus EBS_SYSTEM/<EBS_SYSTEM PASSWORD>
      SQL> alter user EBS_SYSTEM identified by <NEW PASSWORD>;

    b. On the database tier, set the environment using the appropriate command for your system:

      UNIX:
      . $ORACLE_HOME/<CDB SID>_<HOST>.env
      Windows:
      C:\> %ORACLE_HOME%\<CDB_SID>_<HOST>.cmd

    c. Still on the database tier, change the SYSTEM schema password, and provide the new password only to DBAs who specifically require access to the SYSTEM account:
      sqlplus SYSTEM/<SYSTEM PASSWORD>
      SQL> alter user SYSTEM identified by <NEW PASSWORD>;

      Note: If your EBS instance is integrated with Oracle Database Vault, connect as the <DV_ACCTMGR> user instead of the SYSTEM user.


  8. Validate adop by performing substeps a and b below.

    a. Source the run edition applications environment:

      UNIX:
      $ . <INSTALL_BASE>/EBSapps.env run
      Windows:
      C:\> <INSTALL_BASE>\EBSapps.cmd run

    b. Run the command:
      adop -validate
  9. (Conditional) Integrate Oracle E-Business Suite with Oracle Database Vault using EBS_SYSTEM.

    If you have previously integrated EBS with Oracle Database Vault, follow subsection (a) below. If you have not previously performed such an integration but would like to do so now, follow subsection (b).

    a. Existing Integration with Oracle Database Vault

      1. Enable Oracle Database Vault by navigating to the $FND_TOP/patch/115/sql directory and running the following command:
        SQL> @fnddbvstatus.sql ENABLE
      2. Download and apply Patch 33267691:R12.AD.C.
      3. Follow Section 3: Integrate Oracle Database Vault with Oracle E-Business Suite Release 12.2 in My Oracle Support Knowledge Document 2727580.1, Integrating Oracle E-Business Suite Release 12.2 with Oracle Database Vault using EBS_SYSTEM.

    b. New Integration with Oracle Database Vault

      Follow the instructions in My Oracle Support Knowledge Document 2727580.1, Integrating Oracle E-Business Suite Release 12.2 with Oracle Database Vault using EBS_SYSTEM.

 Oracle E-Business Suite Release 12.2 System Schema Migration (Doc ID 2755875.1)
