
Wednesday, August 21, 2024

M5 Cross Endian Platform Migration using Full Transportable Data Pump Export/Import and RMAN Incremental Backups (Doc ID 2999157.1)

 


Prerequisites

High level migration workflow

Detailed migration workflow

Migration process explained

Appendix

Cross platform database migration is the process of moving databases to a new platform, including Exadata Database Machine, Exadata Cloud@Customer, Exadata Database Service, etc. This note provides a simple, reliable, and fast migration solution with minimal downtime.

The information below will guide you in performing a cross platform (Big Endian to small Endian, vice versa, or same platform when Data Guard option is not available) database migration.

Note:
1. This procedure only supports Oracle Database 19.18 or higher on source and destination.
2. Tablespaces SYSTEM, SYSAUX, and UNDO are excluded from the migration procedure.
3. RMAN backup encryption is not supported with this procedure.
4. This procedure supports using multi-section backups (SECTION SIZE) to perform faster backups of bigfile tablespaces.
5. This procedure supports migration from non-container database to pluggable database.
6. This procedure supports migration of pluggable databases across platforms.
7. This procedure supports migration using a physical standby as the source database.
8. This procedure supports migration of TDE encrypted database.

 

If your environment cannot meet the prerequisites of the supported version in this note, the other alternative is to use V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)

Prerequisites

  • Review MOS note Data Pump Recommended Proactive Patches For 19.10 and Above (Doc ID 2819284.1) and apply all the required fixes for the 19.18 or higher database on both source and destination.
    • Data Pump recommended fixes are required to address known issues and improve performance. This is a mandatory prerequisite.
  • Evaluate whether the below RMAN 19c database fixes apply to your migration.
    • Ensure to have fix for bug 35564253 applied on your destination database.
    • If using physical standby database as a migration source, fix for bug 35675516 is required for your destination database.
    • If Transparent Data Encryption (TDE) is used in the source database, fix for bug 30817822 is required for your destination database. This fix is already included in release update 19.20.0.0 and above.
  • This procedure works with Zero Data Loss Recovery Appliance and any local storage like NFS.
    • When using Zero Data Loss Recovery Appliance ensure to have the latest libra.so (ZDLRA: Where to download new sbt library (libra.so module) (Doc ID 2219812.1))
    • When using Zero Data Loss Recovery Appliance for Cross Endian migration, the backups taken will not be virtualized (i.e., stored in RMAN format). They will be replicated to a downstream ZDLRA if configured (See Appendix Section 6). 
  • The db_create_file_dest parameter must be set on the destination database.
  • Capture all RMAN and Data Pump logs for troubleshooting the migration procedure.
  • When using NFS as a backup and restore target, ensure that you have write privileges from the source database and read privileges from the destination database.
  • When using TDE, the keys should be exported from the source and imported on the destination prior to starting the migration process (See Appendix Section 5)
  • When using the Zero Data Loss Recovery Appliance as a backup and restore target for source database backups, uncomment the variable BKP_DEST_TYPE=SBT_TAPE in the properties file and comment out the variable BKP_DEST_TYPE=DISK. Also, update the variable BKP_DEST_PARM with the Zero Data Loss Recovery Appliance SBT_TAPE configuration and parameters; only one target backup destination variable is accepted at a time (see the snippet after this list). Ensure that your protected database is enrolled accordingly and with a proper Recovery Window Goal; more information can be found under Configuring Recovery Appliance for Protected Database Access.
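
For illustration, a minimal sketch of the properties-file change described in the last bullet; the library path, wallet location, and credential alias are placeholders taken from the BKP_DEST_PARM example in Appendix Section 1 and must be adapted to your Recovery Appliance configuration:

#export BKP_DEST_TYPE=DISK
export BKP_DEST_TYPE=SBT_TAPE
export BKP_DEST_PARM="'%d_%U' PARMS \"SBT_LIBRARY=<oracle_home>/lib/libra.so,SBT_PARMS=(RA_WALLET='location=file:<oracle_home>/dbs/zdlra credential_alias=<zdlra-connect-string>')\""
export CAT_CRED=<scan-name>:<port>/<service>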

High level migration process

 

  • Perform a level 0 backup on the source
  • Restore the L0 backup taken previously on the destination
  • Perform incremental backups on source and apply them on the destination. Repeat this step until the final incremental backup is taken, which requires only a minimal read-only downtime window.
  • Downtime Begins
    • Source Data Tablespaces converted to READ ONLY
    • Final Incremental backup on source
    • Metadata Tablespace Export on source
    • Apply Incremental on destination.
    • Metadata tablespace Import on destination.
  • Downtime Ends
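
The sequence below is a condensed sketch of how this workflow maps to the driver scripts documented later in this note; file names and timestamps are placeholders, and the exact invocations are covered in the step-by-step section.

# On the source host
$ sh dbmig_driver_m5.sh L0        # level 0 backup
$ sh dbmig_driver_m5.sh L1        # incremental backup, repeat as often as needed
$ sh dbmig_driver_m5.sh L1F       # final incremental: tablespaces read only + metadata export

# On the destination host, after each backup
$ rman target / cmdfile=cmd/restore_L0_<db>_<timestamp>        # then restore_L1_*, finally restore_L1F_*
$ sh impdp.sh <expdp_dumpfile> <rman_last_restore_log> run N   # plug in the tablespace metadata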

Detailed migration workflow

 [Flow diagram]

 

 

Note:
1. All steps in this procedure are run as the oracle user that is a member of the OSDBA group. OS authentication is used to connect to both the source and destination databases.
2. This procedure assumes the destination database has been created on destination platform with specific requirements ("compatible" parameter, character set, database options on both source and destination database)
3. General Limitations on Transporting Data are documented here:
a. https://docs.oracle.com/en/database/oracle/oracle-database/19/admin/transporting-data.html#GUID-28800719-6CB9-4A71-95DD-4B61AA603173.
b. Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where Applicable (Note 1454872.1)
c. Primary Note for Transportable Tablespaces (TTS) -- Common Questions and Issues (Note 1166564.1)
4. Cross platform is only possible with Enterprise Edition. This procedure cannot be used with Standard Edition.
5. The source and destination databases COMPATIBLE parameter must be the same, minimum 19.0.0.0.
6. RMAN configuration on the source system must not have BACKUP TYPE TO COPY. The source must have BACKUP TYPE TO BACKUPSET.

 

Migration process explained

 

The migration relies on a driver script that reads a properties file to perform tablespace backup iterations on the source database using a 'from SCN' based on the previous backup. The backup iterations are L0 (full backup), L1 (incremental backup), and L1F (final incremental backup). See Appendix Section 2.

For ease of use, environment variables and other important variables required by the driver script are consolidated in a properties file (see Appendix Section 1); the list of tablespaces to be migrated has its own file (see Appendix Section 2).

For more agility and simplicity, you can set up SSH equivalence between the source and destination servers if your security policies allow. SSH equivalence allows the required scripts to be copied automatically between the source and destination.
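
If your policies allow it, SSH equivalence can be set up with standard OpenSSH tooling; a minimal sketch, run as the oracle user on the source host (the destination name is a placeholder):

$ ssh-keygen -t rsa                            # accept the defaults
$ ssh-copy-id oracle@<destination-server>      # copy the public key to the destination
$ ssh oracle@<destination-server> hostname     # verify passwordless login works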

Note: If SSH equivalence cannot be established, the RMAN command file generated under the log directory on the source database must be copied manually to the destination and executed after each backup is taken on the source. If NFS is not used, or the source and destination do not share the directory where the backup pieces are generated, the backup pieces must be made available to the destination database, and the RMAN restore command file must be modified to include the new path the backup pieces were copied to.
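
A hedged example of the manual transfer described above; directory and file names are placeholders:

$ scp cmd/restore_L1_<db>_<timestamp> oracle@<destination-server>:<dest_workdir>/cmd/
$ scp <backup_piece_dir>/* oracle@<destination-server>:<new_backup_path>/
# On the destination, edit the copied restore command file so that the backup piece
# locations point to <new_backup_path> before running it with RMAN.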

During the final backup, the Data Pump export file must also be manually copied from the source database to the destination database DATA_PUMP_DIR directory. If the destination is a RAC database, ensure the export file is copied to a location shared by all nodes.
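
For example, the DATA_PUMP_DIR path can be determined on the destination and the dump file copied into it; the file name below is a placeholder following the naming shown later in this note:

SQL> select directory_path from dba_directories where directory_name='DATA_PUMP_DIR';
$ scp <source_dpdmp>/exp_<SID>_<timestamp>.dmp oracle@<destination-server>:<dest_dpdmp>/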

 

As previously mentioned, when using the Zero Data Loss Recovery Appliance as a backup and restore target for source database backups, the backups will not be part of the block pool, i.e., they won't be virtualized. For this reason, the backups should be stalled from being ingested. See Appendix Section 4.

 

During the migration process, the user can take as many incremental backups as required. The idea is to increase the frequency of level 1 backups and thereby reduce their size; smaller backups are backed up, converted, and restored more quickly. When performing the final incremental level 1, the tablespaces to be migrated are put in read-only mode and the backup is taken. On the destination environment, the last backup is restored and the impdp.sh script is executed afterwards to plug in the tablespace metadata.

 

The migration can be performed from a physical standby database if one exists. This removes the load from the primary database. The migration steps are similar, except that at the last step the standby database is converted to a snapshot standby, the metadata export is performed, and the database is converted back to a physical standby. This automation uses the Data Guard broker to perform these operations.
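
Conceptually, the standby conversion performed by the automation corresponds to the following Data Guard broker commands; this is a sketch only, the database name is a placeholder, and the driver script performs these steps for you:

DGMGRL> CONVERT DATABASE '<standby_db_unique_name>' TO SNAPSHOT STANDBY;
(the metadata export runs while the database is a snapshot standby)
DGMGRL> CONVERT DATABASE '<standby_db_unique_name>' TO PHYSICAL STANDBY;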

 

For additional guidance on migrating pluggable databases refer to Appendix Section 6.

 

The final incremental backup (L1F step) is the last step before the import. During this task the user will be asked to provide a privileged user password on the source database. This is needed to put the tablespaces being transported into read only mode, and to perform the export. On the destination, the impdp.sh will require the user to provide the system password during the import of the transportable tablespaces.

 

Step-by-Step Migration

 

1. Preparation

  1. Create a working directory on an NFS share that is shared between source and destination. The path to the NFS share should be identical on both source and destination.
    1. If an NFS share is not possible, use any local storage. Create the directory on the source host and copy it to the destination host. This needs to be repeated multiple times throughout the process. The path on the source and destination hosts should be identical; if not, various changes to the scripts are required, which is beyond the scope of this procedure.
  2. Download DBMIG.zip (md5sum 24b5acb9e7cbe600de7547faf1e86962) to your working directory.
  3. Unzip DBMIG.zip. The file contains:
      1. dbmig_driver_m5.sh: Migration driver script.
      2. impdp.sh: Import driver script.
      3. log directory: Created at runtime. Stores the RMAN backup log, migration logs, and additional logs in addition to:
        1. rman_mig_bkp.log: The centralized migration logfile.
        2. rman_mig_bkp.lck: Lock file created at the beginning of every backup to prevent concurrent driver execution. If a driver execution aborts, the lock is not removed; the problem should be investigated and resolved before removing the lock.
      4. cmd directory: Created at runtime. Stores the RMAN backup and restore scripts in addition to:
        1. dbmig_driver.properties: Contains the environment variables required for the migration; see Appendix section 1 below for additional information. This file is user-modifiable.
        2. dbmig_ts_list.txt: Contains the list of tablespaces to be migrated; names are comma-separated without quotes; see Appendix section 2. This file is user-modifiable.
  4. Modify the variables in dbmig_driver.properties (in the cmd directory) to reflect the environment you are using; refer to Appendix Section 1 for the description of the variables (a command sketch follows this list).
      1. If backing up to disk (BKP_DEST_TYPE=DISK), it is recommended to store the RMAN backup on the shared NFS. Set BKP_DEST_PARM to a subdirectory in the working directory.
      2. If possible, for ease of use, store the Data Pump dump file in a subdirectory in the working directory. Change DIRECTORY_PATH of the DATA_PUMP_DIR directory object.
      3. If backing up to ZDLRA (BKP_DEST_TYPE=SBT_TAPE) and you don't use a shared NFS, you can establish SSH equivalence between the source and destination servers. The script can then automatically transfer the RMAN restore scripts and the Data Pump dump file. Set DEST_SERVER to the network name of the destination database. Otherwise, leave it blank.
  5. Update dbmig_ts_list.txt with the tablespace names you are migrating. See Appendix section 2 for details.
  6. Create a new, empty destination database (non-CDB or PDB). The destination database must be on the same or higher release. The destination may be on a higher patch level than the source.
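
A short command sketch of the preparation steps above, assuming a shared NFS working directory and the file layout described in step 3; the path is a placeholder:

$ mkdir -p /nfs/m5 && cd /nfs/m5        # shared working directory visible to both hosts
$ unzip DBMIG.zip
$ vi cmd/dbmig_driver.properties        # adjust the variables (Appendix Section 1)
$ vi cmd/dbmig_ts_list.txt              # comma-separated tablespace names (Appendix Section 2)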

2. Level 0 Backup

  1. On source, execute an initial level 0 (L0) backup of the source database:
    1. $ sh dbmig_driver_m5.sh L0
  2. If not using a shared NFS, copy the working directory to the destination host.
  3. On destination, set the environment to the destination database.
  4. The driver script creates a restore script. It is stored in the cmd directory:
    ls -ltra cmd/restore_L0_*  
  5. Restore the level 0 (L0) backup on the destination database using the latest restore script. Ensure the path to the libra.so and wallet is valid in the restore script.
    1. For non-CDB using backups on disk:
      $ rman target / cmdfile=<restore_cmdfile_name>           
    2. For non-CDB using ZDLRA:
      $ rman target / catalog /@scaningest:<port>/<service_name> cmdfile=<restore_cmdfile_name>
    3. For PDB connect directly into PDB
      $ rman target sys@<pdb-connect-string> cmdfile=<restore_cmdfile_name>
  6. Check the RMAN log file:
    $ egrep "WARN-|ORA-" log/restore_*.log
  7. In case of warnings or errors in the log file, contact Oracle Support.

 

3. Level 1 Backup

  1. On source, execute a level 1 (L1) incremental backup of the source database:
    1. $ sh dbmig_driver_m5.sh L1
  2. If not using a shared NFS, copy the working directory to the destination host.
  3. On destination, set the environment to the destination database.
  4. The driver script creates a restore script. It is stored in the cmd directory:
    ls -ltra cmd/restore_L1_*  
  5. Restore the level 1 (L1) backup on the destination database using the latest restore script. Ensure the path to the libra.so and wallet is valid in the restore script.
    1. For non-CDB using backups on disk:
      $ rman target / cmdfile=<restore_cmdfile_name>
    2. For non-CDB using ZDLRA:
      $ rman target / catalog /@scaningest:<port>/<service_name> cmdfile=<restore_cmdfile_name>
    3. For PDB connect directly into PDB
      $ rman target sys@<pdb-connect-string> cmdfile=<restore_cmdfile_name>
  6. Check the RMAN log file:
    $ egrep "WARN-|ORA-" log/restore_*.log
  7. In case of warnings or errors in the log file, contact Oracle Support.
  8. Repeat steps 1-6 at regular intervals until the final level 1 incremental backup.

 

4. Final Level 1 Backup

  1. Downtime starts now.
  2. On source, execute the final level 1 (L1) incremental backup of the source database:
    1. $ sh dbmig_driver_m5.sh L1F
    2. The script prompts for the system password. This is the password of the user specified in SYSTEM_USR in the properties file.
    3. Then, it sets the specified tablespaces in read-only mode.
    4. Then, it starts the final level 1 incremental backup.
    5. Finally, it starts Data Pump to perform a full transportable export.
  3. Review the Data Pump export log file.
  4. If not using a shared NFS, copy the working directory to the destination host.
  5. If desired, you can shut down the source database at this point. It is no longer needed.
  6. On destination, set the environment to the destination database.
  7. The driver script creates a restore script. It is stored in the cmd directory:
    ls -ltra cmd/restore_L1F_*  
  8. Restore the final level 1 (L1) backup on the destination database using the final restore script.
    1. For non-CDB using backups on disk:
      $ rman target / cmdfile=<restore_cmdfile_name>
    2. For non-CDB using ZDLRA:
      $ rman target / catalog /@scaningest:<port>/<service_name> cmdfile=<restore_cmdfile_name>
    3. For PDB connect directly into PDB
      $ rman target sys@<pdb-connect-string> cmdfile=<restore_cmdfile_name>
  9. Check the RMAN log file:
    $ egrep "WARN-|ORA-" log/restore_*.log
  10. In case of warnings or errors in the log file, contact Oracle Support.

 

5. Data Pump Import

  1. On destination, edit the import driver script (impdp.sh). Use information about the destination database to fill in the variables
    1. ORACLE_HOME: The path to the Oracle home of the destination database.
    2. ORACLE_SID: SID of the destination database.
    3. ORACLE_CONNECT_STRING: A connect string directly into the destination non-CDB database or destination PDB.
    4. DATA_PUMP_PARALLEL: Data Pump parallel level for the import. Use if the destination database is Oracle Database 21c or later.
    5. DATA_PUMP_TRACE: Data Pump trace level. Set to 0 to disable trace.
  2. The import driver script takes four parameters:
    1. expdp_dumpfile: Name of the Data Pump dump file, e.g., exp_UP19_240206134155.dmp
    2. rman_last_restore_log: Relative path to the last RMAN restore log, e.g., log/restore_L1F_UP19_240206134155.log
    3. run-mode: One of the below:
      1. test: Generates the Data Pump parameter file. Does not start Data Pump.
      2. test-readonly: Generates the Data Pump parameter file and adds TRANSPORTABLE=KEEP_READ_ONLY to the Data Pump parameter file. Review the documentation for the usage of this parameter. Does not start Data Pump.
      3. run: Generates the Data Pump parameter file. Starts Data Pump to perform the import.
      4. run-readonly: Generates the Data Pump parameter file and adds TRANSPORTABLE=KEEP_READ_ONLY to the Data Pump parameter file. Review the documentation for the usage of this parameter. Starts Data Pump to perform the import.
    4. DP_ENC_PROMPT: One of the below:
      1. Y: Will prompt the user for an encryption password in case one was used for the export
      2. N: Will not prompt the user for an encryption password.
  3. Start the import driver script in test mode to verify the correctness of the generated Data Pump parameter file:
    1. $ sh impdp.sh <expdp_dumpfile> <rman_last_restore_log> test <encryption_pwd_prompt[Y|N]>
  4. The driver script generates the parameter file in the current directory. Examine it.
    1. $ vi imp_<oracle_sid>_<timestamp>_xtts.par
  5. Do not change the generated Data Pump parameter file. If you need to change the parameter file, change the code in the driver script, so it generates the desired parameter file.
  6. Copy the Data Pump dump file to the directory path specified by the DATA_PUMP_DIR directory object in the destination database. Use the below query to determine the directory path. If destination is a PDB, be sure to connect to the PDB to execute the query:
    1. SQL> select directory_path from dba_directories where directory_name='DATA_PUMP_DIR';
  7. Start the import driver script in run mode to perform the Data Pump import:
    1. $ sh impdp.sh <expdp_dumpfile> <rman_last_restore_log> run <encryption_pwd_prompt[Y|N]>
    2. The script starts by creating a guaranteed restore point. If the destination is a PDB, the script creates the restore point at the CDB level. The restore point is only useful for testing in combination with TRANSPORTABLE=KEEP_READ_ONLY; unless you use this option, re-use of the data files is not possible.
    3. Then, it starts Data Pump to perform a full transportable import. Data Pump prompts for the password of the user executing the import.
  8. Always review the Data Pump log file. Even if the Data Pump job is reported as successful, you should still review the log file for any critical errors.
  9. When you decide to go live, remove the previously created restore point (see the example below).
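
For reference, a sketch of removing the guaranteed restore point created by impdp.sh; the restore point name contains a timestamp, so check the actual name first, and if the destination is a PDB connect to the CDB root:

SQL> select name from v$restore_point where guarantee_flashback_database='YES';
SQL> drop restore point BEFORE_IMP_<timestamp>;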

 

Note:
If a service request needs to be opened with Oracle Support for an issue related with this procedure, the following should be uploaded:

1. The whole DBMIG directory, including the sub-directories, from the source database
2. The working directory from the destination; this includes the RMAN restore files, trace files, log files, and the import Data Pump parameter file(s) and log(s)
3. From both databases, the alert log files, opatch lsinventory output, and the *dm* and *dw* trace files generated in the database dump directory.

 

Appendix Section 1

dbmig_driver.properties file explained

 

Parameter | Description | Default value | Modifiable
----------|-------------|---------------|-----------
ORACLE_HOME | The Oracle home of the source big endian database | User defined | Yes
ORACLE_SID | The SID of the source big endian database | User defined | Yes
SRC_SCAN | Source database Easy Connect string | User defined | Yes
MIG_PDB | Set to 1 when migrating a pluggable database, otherwise 0 for non-CDB | User defined | Yes
PDB_NAME | Pluggable database the tablespaces are being migrated from | User defined | Yes
BKP_FROM_STDBY | Set to 1 when migrating using a standby database as source, otherwise 0 when migrating from the primary | User defined | Yes
SOURCE_DPDMP | Data Pump file location on source | User defined | Yes
SYSTEM_USR | Privileged user that sets the tablespaces to read only and performs the Data Pump export. SYSTEM is recommended | Inherited in the script | No
DP_TRACE | Data Pump trace level. Use 0 to disable trace. Use 3FF0300 for full transportable tracing; see MOS Doc ID 286496.1 for details | 0 | Directed by Support
DP_PARALLEL | Data Pump parallel setting. Accepted values: 1 to 999. Example: 16 | 1 | Yes
DP_ENC_PROMPT | Specifies whether Oracle Data Pump prompts you for the encryption password | N | Yes
BKP_DEST_TYPE | Location of backups. Accepted values: DISK, SBT_TAPE. Choose DISK to back up to local storage; choose SBT_TAPE to use ZDLRA | User defined | Yes
BKP_DEST_PARM | If BKP_DEST_TYPE=DISK, enter the backup location. Example: /u01/app/oracle/m5/rman. If BKP_DEST_TYPE=SBT_TAPE, enter the channel configuration. Example: "'%d_%U' PARMS \"SBT_LIBRARY=<oracle_home>/lib/libra.so,SBT_PARMS=(RA_WALLET='location=file:<oracle_home>/dbs/zdlra credential_alias=<zdlra-connect-string>')\"" | User defined | Yes
CAT_CRED | If you use ZDLRA, specify the catalog connect string. Example: <scan-name>:<port>/<service> | User defined | Yes
SECTION_SIZE | Section size used in RMAN backups | 64G | Yes
CHN | Number of RMAN channels allocated | User defined | Yes
DEST_SERVER | Network name of the destination server. Leave blank if you manually transfer backups and dump files | User defined | Yes
DEST_USER | Destination OS username. If SSH equivalence is established between the source and destination environments, passwordless SCP is used to copy files from source to destination. Example: oracle | User defined | Yes
DEST_WORKDIR | Destination working directory; stores the RMAN restore scripts, logs, traces, import script, and parameter files | User defined | Yes
DEST_DPDMP | The directory path used by DATA_PUMP_DIR on the destination. Example: /u01/app/oracle/m5/data_pump_dir | User defined | Yes
WORKDIR | The working directory containing the migration script and the cmd and log directories on source | User defined | Yes
LOG_DIR | The log directory under WORKDIR holding the backup log files and others on source | Inherited in the script | No
CMD_DIR | The cmd directory under WORKDIR holding the backup and restore scripts on source | Inherited in the script | No
DT, CMD_MKDIR, CMD_TOUCH, CMD_CAT, CMD_RM, CMD_AWK, CMD_SCP | Additional variables to set the date format and load OS commands | Inherited in the script | No
TS | The migrated tablespace names in a list format | User defined | Yes
TSF | The migrated tablespaces filename | | No
SCN | File to keep track of the SCN when the backup was run | Inherited in the script | No

 

dbmig_driver.properties sample file

 

############################################################
#Source database properties
#my_M5_prop_version=2
# -
# - ORACLE_HOME Path to Oracle Home
# - ORACLE_SID SID of the source database
# - SRC_SCAN Connect string to source database via SCAN.
# If no SCAN, specify source database network name.
# Enclose in single quotes
# Example: '@myhost-scan/db1'
# Example: '@localhost/pdb1'
# - MIG_PDB Accepted values: 0, 1
# Choose 0 if source is non-CDB
# Choose 1 if source is a PDB
# - PDB_NAME If source is a PDB, specify PDB name.
# Else leave blank
# Example: PDB1
# - BKP_FROM_STDBY Accepted values: 0, 1
# Choose 0 to back up from primary database,
# or if Data Guard is not in use.
# Choose 1 to back up from standby database.
############################################################
export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
export PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_SID=soldb1918
export SRC_SCAN='@scan1/soldb1918'
export MIG_PDB=0
export PDB_NAME=
export BKP_FROM_STDBY=0
############################################################
#Source Data Pump settings
# - SOURCE_DPDMP Directory path of the directory DATA_PUMP_DIR
# Example: /u01/app/oracle/m5/data_pump_dir
# - SOURCE_DPIR Data Pump Directory, typically DATA_PUMP_DIR
# - SYSTEM_USR Username for Data Pump export.
# Do not use SYS AS SYSDBA
# Example: SYSTEM
# - DP_TRACE Data Pump trace level.
# Use 0 to disable trace.
# Use 3FF0300 to full transportable tracing
# See MOS Doc ID 286496.1 for details.
# - DP_PARALLEL Data Pump parallel setting.
# Accepted values: 1 to 999
# Example: 16
############################################################
export SOURCE_DPDMP=/u01/app/admin/soldb1918/dpdump
export SOURCE_DPDIR=DATA_PUMP_DIR
export SYSTEM_USR=SYSTEM
export DP_TRACE=0
export DP_PARALLEL=1
export DP_ENC_PROMPT=N
############################################################
#Source RMAN settings
# - BKP_DEST_TYPE Accepted values: DISK, SBT_TAPE
# Choose DISK to backup up to local storage
# Choose SBT_TAPE to use ZDLRA
# - BKP_DEST_PARM If BKP_DEST_TYPE=DISK, enter location for backup:
# Example: /u01/app/oracle/m5/rman
# If BKP_DEST_TYPE=SBT_TAPE, enter channel configuration:
# Example: "'%d_%U' PARMS \"SBT_LIBRARY=<oracle_home>/lib/libra.so,SBT_PARMS=(RA_WALLET='location=file:<oracle_home>/dbs/zdlra credential_alias=<zdlra-connect-string>')\""
# - CAT_CRED If you use RMAN catalog or ZDLRA, specify connect string to catalog database
# Example: <scan-name>:<port>/<service>
# - SECTION_SIZE Section size used in RMAN backups
# - CHN Number of RMAN channels allocated
############################################################
export BKP_DEST_TYPE=DISK
export BKP_DEST_PARM=/SHAREDY/DBMIGBKP/SOLDB1918
export CAT_CRED=
export SECTION_SIZE=64G
export CHN=8
############################################################
#Destination host settings
#If specified, the script transfers the RMAN backups and
#Data Pump dump file to the destination via over SSH.
#SSH equivalence is required.
# - DEST_SERVER Network name of the destination server.
# Leave blank if you manually transfer
# backups and dump files
# - DEST_USER User for SSH connection
# Example: oracle
# - DEST_WORKDIR The script working directory on destination
# Example: /u01/app/oracle/m5
# - DEST_DPDMP The directory path used by DATA_PUMP_DIR
# in destination database
# Example: /u01/app/oracle/m5/data_pump_dir
############################################################
export DEST_SERVER=srvadm01
export DEST_USER=oracle
export DEST_WORKDIR=/SHAREDY/STAGE/CROSSPLATMIG/PRIMARY
export DEST_DPDMP=/u01/app/oracle/product/19.0.0.0/dbhome_13/rdbms/log

############################################################
#Advanced settings
#Normally, you don't need to edit this section
############################################################
export WORKDIR=$PWD
export LOG_DIR=${WORKDIR}/log
export CMD_DIR=${WORKDIR}/cmd
export PATH=$PATH:$ORACLE_HOME/bin
export DT=`date +%y%m%d%H%M%S`
export CMD_MKDIR=`which mkdir`
export CMD_TOUCH=`which touch`
export CMD_CAT=`which cat`
export CMD_RM=`which rm`
export CMD_AWK=`which awk`
export CMD_SCP=`which scp`
export CMD_CUT=`which cut`
export CMD_PLATFORM=`uname`
if [[ "$CMD_PLATFORM" = "Linux" ]]; then
  export CMD_GREP="/usr/bin/grep"
else
  if [[ "$CMD_PLATFORM" = "AIX" ]]; then
    export CMD_GREP="/usr/bin/grep"
  else
    if [[ "$CMD_PLATFORM" = "HPUX" ]]; then
      export CMD_GREP="/usr/bin/grep"
    else
      export CMD_GREP=`which ggrep`
    fi
  fi
fi
export my_M5_prop_version=2

 

Appendix Section 2

Structure of the dbmig_ts_list.txt file

This file contains the names of the tablespaces to be migrated; the names should be separated by commas only. See the sample below.

$ cat dbmig_ts_list.txt

 

JSXTTS_1,JSXTTS_2,JSXTTS_3,JSXTTS_4,JSXTTS_5,JSXTTS_6,JSXTTS_7,JSXTTS_8,JSXTTS_9,JSXTTS_10

 

 

Appendix Section 3

The file impdp.sh should be copied to the destination working directory.

Export ORACLE_HOME, ORACLE_SID, and ORACLE_PDB_SID (if required) to reflect the database you are migrating to, and enter the password for the user performing the import when prompted. The script requires four parameters: the export dump file name (displayed during the last L1 backup on the source), the last RMAN restore log, the run mode, and the encryption password prompt flag. The run mode allows you to perform a test run to check the import parameter file for correctness before executing a real run with the run parameter.

 

impdp.sh runs as follows:

$ sh impdp.sh <exp.dmp> <log/res_L1F.log> [test|test-readonly|run-readonly|run] <encryption_pwd_prompt[Y|N]>

 

Run mode explained.

 

test

Runs the script and produces an imp_${ORACLE_SID}_xtts.par file without the parameter TRANSPORTABLE=KEEP_READ_ONLY, check the imp_${ORACLE_SID}_xtts.par file for correctness

$sh impdp.sh <exp.dmp> <res.log> test

test-readonly

Runs the script and produces an imp_${ORACLE_SID}_xtts.par file with the parameter TRANSPORTABLE=KEEP_READ_ONLY, check the imp_${ORACLE_SID}_xtts.par file for correctness

run

Runs the script, produces an imp_${ORACLE_SID}_xtts.par file without the parameter TRANSPORTABLE=KEEP_READ_ONLY, generates a guaranteed restore point, and performs the import.

run-readonly

Runs the script, produces an imp_${ORACLE_SID}_xtts.par file with the parameter TRANSPORTABLE=KEEP_READ_ONLY, generates a guaranteed restore point and performs the import.

 

 

$ cat impdp.sh

#!/bin/bash

 

if [ $# -ne 4 ]; then

        echo "Please call this script using the syntax $0 "

        echo "Example: # sh impdp.sh <expdp_dumpfile> <rman_last_restore_log> [run|test|run-readonly|test-readonly] <encryption_pwd_prompt[Y|N]>"

        exit 1

fi

 

#Full path to Oracle Home

export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_13

export PATH=$PATH:$ORACLE_HOME/bin

#SID of the destination database

export ORACLE_SID=entain1

#Connect string to destination database. If PDB, connect directly into PDB

export ORACLE_CONNECT_STRING=entain

#Data Pump directory

export DATA_PUMP_DIR=DATA_PUMP_DIR

#Data Pump parallel setting

export DATA_PUMP_PARALLEL=1

#Data Pump trace level. 0 to disable. 3FF0300 for transportable tablespace trace

export DATA_PUMP_TRACE=0

 

############################################################

#Normally, do not edit below this line

############################################################

 

export PATH=$PATH:$ORACLE_HOME/bin

export DT=`date +%y%m%d%H%M%S`

export EXPDP_DMP=${1}

export RES_LOG=${2}

export RUN_MODE=${3}

export DP_ENC_PROMPT=${4}

 

echo "userid='system@${ORACLE_CONNECT_STRING}'" > imp_${ORACLE_SID}_${DT}_xtts.par

echo "dumpfile=${EXPDP_DMP}" >> imp_${ORACLE_SID}_${DT}_xtts.par

echo "directory=${DATA_PUMP_DIR}" >> imp_${ORACLE_SID}_${DT}_xtts.par

echo "LOGTIME=ALL" >> imp_${ORACLE_SID}_${DT}_xtts.par

echo "TRACE=${DATA_PUMP_TRACE}" >> imp_${ORACLE_SID}_${DT}_xtts.par

echo "PARALLEL=${DATA_PUMP_PARALLEL}" >> imp_${ORACLE_SID}_${DT}_xtts.par

echo "LOGFILE=imp_${ORACLE_SID}_${DT}_xtts.log" >> imp_${ORACLE_SID}_${DT}_xtts.par

echo "METRICS=YES" >> imp_${ORACLE_SID}_${DT}_xtts.par

case ${DP_ENC_PROMPT} in

     "Y")

        echo "ENCRYPTION_PWD_PROMPT=YES" >> imp_${ORACLE_SID}_${DT}_xtts.par

       ;;

     "N")

        echo "ENCRYPTION_PWD_PROMPT=NO" >> imp_${ORACLE_SID}_${DT}_xtts.par

      ;;

      *)

      ;;     

esac

echo "TRANSPORT_DATAFILES=" >> imp_${ORACLE_SID}_${DT}_xtts.par

 

input=$(grep "restoring foreign" ${RES_LOG} | awk '{print $9}' | sort |  uniq)

 

IFS=$'\n' read -rd '' -a input_list <<< "$input"

output_list=()

for item in "${input_list[@]}"; do

    output_list+=("'$item',")

done

 

last_index=$(( ${#output_list[@]} - 1 ))

output_list["$last_index"]="${output_list[last_index]%,}"

 

for item in "${output_list[@]}"; do

    echo "$item" >> imp_${ORACLE_SID}_${DT}_xtts.par

done

 

 

function gen_grp {
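# Create a guaranteed restore point (before_imp_<timestamp>) on the destination
# and spool the list of guaranteed restore points to dbmig_grp.lst for reference.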

${ORACLE_HOME}/bin/sqlplus -s /nolog << EOF

                connect / as sysdba

                set lines 333

                col time for a48

                col restore_point_time for a24

                col name for a24

                spool ${PWD}/dbmig_grp.lst

                create restore point before_imp_${DT} guarantee flashback database;

                select NAME, SCN, TIME, GUARANTEE_FLASHBACK_DATABASE, STORAGE_SIZE FROM V\$RESTORE_POINT WHERE GUARANTEE_FLASHBACK_DATABASE='YES';

                spool off;

                exit

EOF

}

 

case ${RUN_MODE} in

              "run")

                echo "Running in Run mode"

                gen_grp

                impdp parfile=imp_${ORACLE_SID}_${DT}_xtts.par

              ;;

              "run-readonly")                             

                echo "Running in Read Only, for the final run you must flashback the database to the restore point created now"

                gen_grp

                awk 'NR==3 {print "TRANSPORTABLE=KEEP_READ_ONLY"} {print}' imp_${ORACLE_SID}_${DT}_xtts.par > tmp_file && mv tmp_file imp_${ORACLE_SID}_${DT}_xtts.par

                impdp parfile=imp_${ORACLE_SID}_${DT}_xtts.par

              ;;

              "test")                            

                echo "Running in test mode, check par file for correctness"

              ;;

              "test-readonly")                            

                echo "Running in test mode with read only, check par file for correctness"

                awk 'NR==3 {print "TRANSPORTABLE=KEEP_READ_ONLY"} {print}' imp_${ORACLE_SID}_${DT}_xtts.par > tmp_file && mv tmp_file imp_${ORACLE_SID}_${DT}_xtts.par

              ;;                        

              *)

                echo "Unknown run mode, options are: run, test, run-readonly, test-readonly"

esac

 

Appendix Section 4

 

When using the Zero Data Loss Recovery Appliance as a backup and restore target, the backups from the source database should be stalled (i.e., blocked) from being indexed.

 

To do so, identify your database DB_KEY by querying the RA_DATABASE view in the Zero Data Loss Recovery Appliance database using the DB_UNIQUE_NAME.
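
A sketch of the lookup, run while connected to the Recovery Appliance catalog; the DB_UNIQUE_NAME value is a placeholder:

SQL> select db_key from ra_database where db_unique_name = '<DB_UNIQUE_NAME>';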

 

As user RASYS in the Zero Data Loss Recovery Appliance database create a stall_when view like the one below:

 

SQL> create or replace view RAI_STALL_WHEN

     AS

     SELECT task_id

       FROM task

      WHERE task_type = 140

        AND db_key = <DB_KEY>;

 

As user RASYS on the Zero Data Loss Recovery Appliance database enable the stall_when parameter:

 

 SQL> EXEC DBMS_RA_INT.config(p_name=>'_stall_when', p_value=>'ON', comments=>'Enabled for Cross Endian Migration');

 

When the migration is finished, the _stall_when flag should be removed and the view dropped.  This will trigger the INDEX_BACKUP task on the ZDLRA and virtualize the backups.

 

SQL> DROP view RAI_STALL_WHEN;

SQL> EXEC DBMS_RA_INT.config(p_name=>'_stall_when', p_value=>'OFF', comments=>'Disabled for Cross Endian Migration');

 

 

Note: Prior to creating the RAI_STALL_WHEN view, verify if the _stall_when flag has already been set for any other reason. If the flag was already set, do not proceed with the migration and contact Oracle Support.

 

Note: If the compliance mode is implemented on the ZDLRA, the use of dbms_ra_int is considered a "break glass".

 

Note: If you are using block change tracking, and since backups to the ZDLRA are not being virtualized, be aware that backup performance can be impacted if the number of backups (L0 + L1s) is higher than the default value (8) of "_bct_bitmaps_per_file".
Reference: RMAN incremental level 1 backups are not using block change tracking file (Doc ID 1192652.1).

 

Appendix Section 5

 

When using TDE, the encryption keys should be exported from the source database keystore and imported on the destination database keystore ahead of the migration.

 

  1. On the source big endian system, export the TDE encryption keys as a user who has been granted the ADMINISTER KEY MANAGEMENT or SYSKM privilege. Protect the exported file as it contains sensitive information.

$ sqlplus / as sysdba
SQL> ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS FORCE WITH SECRET "new_password_to_encrypt_export_file" TO 'exportkeys.exp' IDENTIFIED BY "existing_keystore_password";

Keystore altered

 o Check the KEY_ID in V$ENCRYPTION_KEYS:


$ sqlplus / as sysdba
SQL> SELECT KEY_ID FROM V$ENCRYPTION_KEYS;

KEY_ID
------------------------------------------------------------------------------
AbWpl9Eq2U8Ev22+H6Ufy14AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AVZ8DckBpk8xv+9IzzfKvoMAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

 o Copy the encrypted export file ‘exportkeys.exp’ to the Linux destination:

$ scp exportkeys.exp user@server:/path

 2. On the Linux destination system, import the TDE master encryption key from the file generated in step 1

$ sqlplus / as sysdba
SQL> ADMINISTER KEY MANAGEMENT IMPORT KEYS WITH SECRET "Password_used_in_step1_export" FROM 'exportkeys.exp' IDENTIFIED BY "keystore_password_used_on_destination_db" WITH BACKUP;

Keystore altered

 o Check the KEY_ID in V$ENCRYPTION_KEYS:

$ sqlplus / as sysdba
SQL> SELECT KEY_ID FROM V$ENCRYPTION_KEYS;

KEY_ID
------------------------------------------------------------------------------
AXj7ZZ4Lq08Jv4pzTBn8/DoAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AVZ8DckBpk8xv+9IzzfKvoMAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AbWpl9Eq2U8Ev22+H6Ufy14AAAAAAAAAAAAAAAAAAAAAAAAAAAAA

 o To activate the previously imported TDE keys from the source:

$ sqlplus / as sysdba
SQL> ADMINISTER KEY MANAGEMENT USE KEY <KEY_ID> USING TAG 'Activate Key on target' FORCE KEYSTORE IDENTIFIED BY "keystore_password_of_destination_db" WITH BACKUP;

Keystore altered

 

Appendix Section 6

This note supports migrating a pluggable database from a big endian source container database to a small endian destination container database.

 

In dbmig_driver.properties, set MIG_PDB to 1 and PDB_NAME to the name of the pluggable database to be migrated.  The file dbmig_ts_list.txt should contain the list of tablespaces to be migrated.  During the migration process the automation creates a file called dbmig_ts_list.txt.pdb; this file contains the name of the pluggable database appended to the names of the tablespaces to be migrated.  The content of the file is used when building the RMAN backup scripts.

 

The restore operations of L0 and L1s are the same for every migration.  For a pluggable database migration, and to ensure the tablespaces being transported are plugged into the correct destination pluggable database, the connection string in impdp.sh should be adapted accordingly.

 

During the restore operation, RMAN creates the datafiles in an ASM sub-directory on the destination which is different from the directory used by the pluggable database. If you want to move the files into the correct location for the pluggable database, close (mount) the pluggable database and perform an RMAN 'BACKUP AS COPY' specifying the data diskgroup in the FORMAT clause. Once the backup is complete, perform an RMAN 'SWITCH DATAFILE TO COPY' command to update the controlfile with the correct pluggable database directory (see the sketch below).
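
A sketch of this datafile relocation, with the PDB name, diskgroup, and datafile numbers as placeholders; run from RMAN connected to the destination CDB root, and verify the affected file numbers first (for example from V$DATAFILE for the PDB):

RMAN> ALTER PLUGGABLE DATABASE <PDB_NAME> CLOSE;
RMAN> BACKUP AS COPY PLUGGABLE DATABASE <PDB_NAME> FORMAT '+DATA';
RMAN> SWITCH DATAFILE 12,13,14 TO COPY;
RMAN> ALTER PLUGGABLE DATABASE <PDB_NAME> OPEN;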

Tuesday, March 19, 2024

19c Database with Oracle E-Business Suite R12 Known Issues & Solutions

 The purpose of this document is to present known issues and solutions for the 19c Database Upgrade with Oracle E-Business Suite (EBS).

Important Information Regarding the Upgrade to Oracle Database 19c

  1. When upgrading your Oracle E-Business Suite to Oracle Database 19c, your database will be converted to a multitenant architecture, which will consist of a Container Database (CDB) with a single Pluggable Database (PDB). Only multitenant architecture databases are supported for Oracle E-Business Suite with Oracle Database 19c.
  2. During the upgrade, you will also perform steps to migrate directories defined for PL/SQL File I/O to database directory objects (a sketch follows this list). This requirement is due to the desupport of the UTL_FILE_DIR database initialization parameter in Oracle Database 19c.
  3. Oracle 19c Database Release Update Information for Oracle E-Business Suite:
    • Oracle Database Release Update 19.3 as well as Release Update 19.5 and later are certified.
    • Oracle recommends that you upgrade to the latest Oracle Database Release Update that is certified with Oracle E-Business Suite. Applying the latest Release Update will ensure that you have the security-related fixes and high-priority non-security fixes. See My Oracle Support Knowledge Document 2285040.1, Release Update Introduction and FAQ, for more details.
    • You can upgrade directly from an earlier Oracle Database version to Oracle Database Release Update 19.3 or Release Update 19.6 or later.
    • You cannot upgrade directly to Oracle Database Release Update 19.5. You must first upgrade from your earlier Oracle Database version to Oracle Database 19.3, and then apply Oracle Database Release Update 19.5.
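
As a generic illustration only (not the EBS-delivered migration utility), a directory previously defined for PL/SQL File I/O maps to a database directory object along these lines; the directory name, path, and grantee are hypothetical:

SQL> CREATE OR REPLACE DIRECTORY EBS_TMP_DIR AS '/usr/tmp';
SQL> GRANT READ, WRITE ON DIRECTORY EBS_TMP_DIR TO APPS;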

 E-Business Suite 19c Database Upgrade Analyzer (Doc ID 2704990.1)

The E-Business Suite 19c Database Upgrade Analyzer is a self-service health-check script that reviews E-Business R12 19c Database Upgrade related data, analyzes current configurations and settings for the environment and provides feedback and recommendations on the progress of your 19c database upgrade and any potential missing or failed upgrade steps. Analyzers may also be used as data collection tools by Support Engineers. Your application data is not altered in any way when you run this analyzer.

----------------------------------------
NOTE: In the images below and/or the attached document, user details / company name / address / email / telephone number represent a fictitious sample (based upon made up data used in the Oracle Demo Vision instance). Any similarity to actual persons, living or dead, is purely coincidental and not intended in any manner.
----------------------------------------
NOTE: Make sure to review Doc ID 2567105.1 which contains commonly asked questions and answers about using Oracle E-Business Suite with the multitenant architecture.

 

QUESTIONS AND ANSWERS

 

Questions

Question: Which listener.ora should be used to start the database listener?

Answer: $ORACLE_HOME/network/admin/listener.ora should be used. Use the $ORACLE_HOME/appsutil/scripts/<pdbname>_<hostname>/adcdblnctl.sh script. Example: adcdblnctl.sh start <CDBNAME>

Question: How to know which listener.ora is used?

Answer: Run the lsnrctl status <CDBNAME> command and look for the value of "Listener Parameter File" in the output. Example:

Listener Parameter File   /u01/oracle/db/tech/19.3/network/admin/listener.ora

 

Problems

 

Database Issues

Issue / Solution
ORA-65064: system ts doesn't exist

An ORA-65064 might be raised when attempting to plug a legacy database into a CDB, where the legacy database was created under a pre-V8 database release and then moved to later releases using the migrate utility.

Cause: When you plug that database into a Multitenant environment you will see an ORA-65064 and the tracefile will have a message that reads:

    kpdbcPlugVerifyFiles: ORA-65064: system ts doesn't exist

Solution:

The Multitenant architecture has a fundamental assumption that any plugged-in database will have a file from the SYSTEM tablespace with a relative file number of 1.

You may be seeing this issue if the database was created under v7 or earlier, was upgraded to later releases using the migrate utility, an ORA-65064 is raised during the plug-in operation, and the tracefile indicates that the SYSTEM tablespace could not be found.

Checking the XML file generated using dbms_pdb.describe() should also show that there is no SYSTEM tablespace with a relative file number (<rfn>..</rfn>) of 1.

WORKAROUND:
Use exp/imp or datapump to rebuild the database first and then perform the
plug in operation.

ORA-7445 [qkacco] when running a Complex Query with Nested Joins

ORA-07445: exception encountered: core dump [qkacco()+91] [SIGSEGV] [ADDR:0x0] [PC:0x289204B] [Address not mapped to object] []

Solution:

ORA-7445 [qkacco] when Running a Complex Query with Nested Joins (Doc ID 2664281.1)


Do not set optimizer_adaptive_plans=false at database level as a workaround. Instead, use a SQL profile, hints, "alter session" commands, etc., to change the setting at session level for the problematic SQL until the database patch is applied. Apply the patch for database Bug 30621255.

ORA-01034 while running txkPostPDBCreationTasks.pl script

Starting Context Generation Tool at Tue May 19 13:24:17 CDT 2020
Using GenerateContext version 120.20.12010000.16

Connecting to the cgdev2 database instance...

Connection parameters values:
Database server hostname ==> databasehostname.xx.yy.com
Database listener port ==> 1521
Database SID ==> PDBNAME
Database schema name ==> apps

Could not Connect to the Database : ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Additional information: 4376
Additional information: 226357863

 The database listener.ora should have the following setting:

USE_SID_AS_SERVICE_<CDBNAME>=ON

Without this setting, GenerateContext.class fails to connect to the database because it uses the JDBC connect string hostname:port:SID notation. This changed in 19c to hostname:port/SID (notice the / instead of :). Setting USE_SID_AS_SERVICE_<CDBNAME>=ON solves this issue.

Invalid APPS database user credentials errors when running txkPostPDBCreationTasks.pl

Post PDB script failing with following errors :

*******FATAL ERROR*******
PROGRAM : (.../appsutil/bin/txkPostPDBCreationTasks.pl)
TIME : Sat May 16 16:47:09 2020
FUNCTION: main::validateAppsSchemaCredentials [ Level 1 ]
ERRORMSG: Invalid APPS database user credentials.
*******FATAL ERROR*******
PROGRAM : (/.../appsutil/bin/txkPostPDBCreationTasks.pl)
TIME : Sat May 16 16:47:09 2020
FUNCTION: main::validateAppsSchemaCredentials [ Level 1 ]
ERRORMSG: Invalid APPS database user credentials.

The database parameter instance_name was set to the PDB name. It should be set to the CDB name.

PDB services are not registering with database listener 

 

Make sure the database parameter local_listener is not set at PDB level. This value should be inherited from the CDB. After the change, run "alter system register;" so that the services register with the listener.

 

High "row cache mutex" waits are seen in AWR reports due to SQLs against v$session/v$process

Example SQLs :

0u11juyazvjf5
SELECT PR.SPID , SE.SID , SE.PROGRAM , SE.MODULE FROM V$SESSION SE ,
V$PROCESS PR WHERE SE.SID = (SELECT SID FROM V$MYSTAT WHERE ROWNUM = 1)
AND SE.PADDR = PR.ADDR

0y6qmpuvm4w9u


select sid from v$session, v$process where audsid=
sys_context('userenv', 'sessionid') and addr=paddr

1wr32ytknh1u9

 

SELECT P.PID, S.PROCESS, P.SERIAL#, P.SPID FROM V$PROCESS P,
V$SESSION S WHERE S.AUDSID = USERENV('SESSIONID') AND S.PADDR = P.ADDR

0y7y17bcappxk

 

SELECT P.PID, P.SERIAL#, P.SPID FROM V$PROCESS P, V$SESSION S WHERE
S.AUDSID = USERENV('SESSIONID') AND S.PADDR = P.ADDR

 The solution is to apply database Patch 31903523. Until then, contention can be reduced as follows:

- Disable EBS auditing: "Sign-On:Audit Level".

- If MWA is used, apply MWA Patch 17288009 to reduce the number of executions of SQL 0y6qmpuvm4w9u

- Turn off SQL tracing

Running ADZDDBCC.sql in a 19c database takes a long time for Section 26.

 Before running the report, set the parameter _px_cdb_view_enabled to false:

alter session set "_px_cdb_view_enabled"=false ;
Concurrent programs fail with ORA-20100: Temporary file creation for FND_FILE failed errors.

This can be reproduced as follows in SQL-Plus :

SQL> exec FND_FILE.PUT_LINE(FND_FILE.LOG, 'THIS IS A TEST');
BEGIN FND_FILE.PUT_LINE(FND_FILE.LOG, 'THIS IS A TEST'); END;

*
ERROR at line 1:
ORA-20100: ORA-20100: Temporary file creation for FND_FILE failed.
Directory &FILE_DIR is invalid, contact your system administrator.
Action: Check that the directory is a valid directory with write permissions on
the database server node and is listed under utl_file_dir para
ORA-06512: at "APPS.FND_FILE", line 319
ORA-06512: at "APPS.FND_FILE", line 364
ORA-06512: at "APPS.FND_FILE", line 421
ORA-06512: at line 1

 There are three reasons for this failure:

1> The utl_file_dir parameter is not set. Run the following SQL to check:

select value from apps.v$parameter where name = 'utl_file_dir';

 

2> Event 10946 should be set as explained in Note 2525754.1

alter system set events '10946 trace name context forever, level 8454144';
alter system set event='10946 trace name context forever, level 8454144' scope=spfile;

3> The first value from the above SQL should match the value of the $APPLPTMP environment variable

Materialized view refresh programs run longer and locks are seen in the database

This is caused by RDBMS Bug 30402386 and explained in Note 2716307.1

Apply the database patch. Until then, set the following hidden parameter as a temporary workaround:

ALTER SYSTEM SET "_mv_refresh_shrink_log"=FALSE;
Database alert.log reports parsing errors for FND_SESSION_MANAGEMENT package

2021-03-09T05:07:08.548176-08:00
mypdb(3):WARNING: too many parse errors, count=621 SQL hash=0x8a3bb1a5
mypdb(3):PARSE ERROR: ospid=24007, error=6550 for statement:
2021-03-09T05:07:08.549382-08:00
mypdb(3):declare PRAGMA AUTONOMOUS_TRANSACTION;begin
fnd_signon.audit_client_ip(:1,:2); commit;exception when others then
rollback;end;
mypdb(3):Additional information: hd=0x61fa9b68 phd=0x6daec960 flg=0x100476
cisid=65 sid=65 ciuid=65 uid=65 sqlid=bb9xkdf53rcd5
mypdb(3):...Current username=APPS
mypdb(3):...Application: e::bes:oracle.apps.icx.security.session.created
Action: /


mypdb(3):WARNING: too many parse errors, count=336 SQL hash=0xe2823bd1
mypdb(3):PARSE ERROR: ospid=16624, error=6550 for statement:
2021-03-08T04:16:36.045025-08:00
mypdb(3):BEGIN :1 := FND_SESSION_MANAGEMENT.getHttpOnly(:2); END;
mypdb(3):Additional information: hd=0xc12e4490 phd=0x84b82418 flg=0x100476
cisid=65 sid=65 ciuid=65 uid=65 sqlid=7kcmdtbj84fyj
mypdb(3):...Current username=APPS
mypdb(3):...Application: e::bes:oracle.apps.icx.security.session.created
Action: /

Errors are safe to ignore since they will not be reported to end users. Apply E-Business Suite Patch 32429191:R12.FND.C and Patch 33429261:R12.FND.C.

 
txkCreatePDB.pl Completes with PDB Plugged in with "Open No Master Key"
When plugging an EBS PDB into an existing 19c container database using txkCreatePDB.pl, the wallet status for the newly created PDB shows "OPEN_NO_MASTER_KEY" and the PDB is in restricted mode:
===================
SQL> select * from v$encryption_wallet;

WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_OR FULLY_BAC CON_ID
-------------------------------------------------------------------------------------------------------------- -------------------- --------- ---------
FILE <Location> OPEN_NO_MASTER_KEY AUTOLOGIN SINGLE UNDEFINED 0

This applies to EBS 12.1.3 and all 12.2 versions that support 19c Database, including OCI, on-premises, Exadata, and any other platform.

1. This can be caused by passing a wrong value for the secret key while plugging in the PDB using txkCreatePDB.pl.

2. The script should stop and fail without plugging in the PDB when a wrong argument is passed for the secret key during the execution of txkCreatePDB.pl; an enhancement request (Enh 33566579) has been raised with Product Development for this.

3. To overcome this situation, apply the workaround given in the following note:

Manually Re-import the Secret Key in E-Business Suite Database On Premises & Cloud (Doc ID 2826133.1)

 

Bug 33405295 - DBMS_STATS.IMPORT_DATABASE_STATS FAILING WITH ERROR OPERATION NOT ALLOWED FROM WITHIN A PLUGGABLE DATABASE

Please follow the action plan below to solve the import stats error issue:

 

Gather schema stats on the source, as below:

sqlplus apps/<apps>
1. exec fnd_stats.gather_schema_statistics(schemaname=>'ALL',estimate_percent=>dbms_stats.auto_sample_size,options=>'GATHER AUTO');

2. Back up statistics into FND_STATTAB, following Doc ID 1581549.1 - Section 7.10.2.1 - Exporting Statistics (using the FNDSTATS SQL provided within the note).

3. Then follow Doc ID: 2615617.1, as below:

Section 5.9 - Export the Applications database instance
Section 6.4 - Import the Applications database instance
Section 7.1 - Reset Advanced Queues
Section 7.2 - Run adgrants.sql
Section 7.3 - Gather statistics and fixed objects statistics
Section 7.4 - Run post PDB script

4. Upgrade the FND_STATTAB statistics table:

sqlplus system/<system_pw>@MOSCOW
exec dbms_stats.upgrade_stat_table('APPLSYS','FND_STATTAB');

5. Restore statistics from FND_STATTAB following Doc ID 1581549.1 - Section 7.10.2.2 - Importing Statistics (using the FNDSTATS SQL provided within the note).

 

Bug 33427856 - Gathering statistics for indexes on large tables in 19c may be time-consuming (Doc ID 33427856.8)

Apply Patch 33427856 or use workarounds until then

Massive Performance Degradation On Statistics For Large Partitioned Tables After Upgrade To 19c (Doc ID 2611336.1)

 

OCI Database Issues

Issue / Solution
Failure in opening the PDB, after renaming it during refresh

Core Dump during DBMS_SERVICE.START_SERVICE using case sensitive name for PDB

DB System Platform: ExaCS

DB Version: 19.10

Solution:

Apply Patch 31143870

ORA-65092: system privilege granted with a different scope to 'OEM_MONITOR'

Failure in Datapatch with error:

ERROR at line 1:
ORA-65092: system privilege granted with a different scope to 'OEM_MONITOR'
ORA-06512: at line 6
ORA-06512: at line 2

DB System Platform: VM DB System

DB Version: 19.8

Solution:

Apply Patch 31796208 and rerun datapatch

 

Forms Issues

Issue / Solution
ORA-1017 when opening Forms: "ORA-1017 Invalid username/password; logon denied" and "FRM-41352 Failed to create a new session"

 

 

Cause: Misconfiguration with the listener and registration of services:

Solution:

1. In the 19c database, the service_names parameter should be set to the container name

     Connect to the CDB and run the following commands:

        alter system set SERVICE_NAMES='<CDBNAME>' SCOPE=BOTH;
        alter system register;

2. Verify that the PDB is registered with the listener

      Connect to the PDB and check the value of the local_listener parameter. It should not be set at PDB level and should be the same as the CDB value.

3. The PDB should have, at a minimum, these services in 12.1 and 12.2:

  select name from v$active_services;

12.1:

  ebs_<PDB NAME>
 <PDB NAME>

12.2:

  ebs_<PDB NAME>
  <PDB NAME>
  <PDB_NAME>_ebs_patch

4. The listener should be started using the CDB environment file

      cd $ORACLE_HOME

      . CDB_HOSTNAME.env

      lsnrctl stop CDBNAME

      lsnrctl start CDBNAME

5. You should see something similar to the following:

$ lsnrctl status

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 29-APR-2020 16:32:48

Copyright (c) 1991, 2019, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias VISCDB
Version TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date 22-APR-2020 21:27:31
Uptime 6 days 19 hr. 5 min. 17 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/oracle/db/tech/19.3/network/admin/listener.ora
Listener Log File /u01/oracle/db/tech/19.3/admin/PDB_HOSTNAME/diag/tnslsnr/HOSTNAME/cdb/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=HOSTNAME.<HOSTNAME>)(PORT=1521)))
Services Summary...
Service "2809223196ec2af8e053a740d20a4db6" has 1 instance(s).
Instance "CDB", status READY, has 1 handler(s) for this service...
Service "PDB" has 2 instance(s). -----> PDB service. Started when PDB is started
Instance "PDB", status UNKNOWN, has 1 handler(s) for this service...
Instance "CDB", status READY, has 1 handler(s) for this service...
Service "PDB_ebs_patch" has 1 instance(s). ----> Created during post PDB steps , needed for ADOP patching
Instance "CDB", status READY, has 1 handler(s) for this service...
Service "CDB" has 1 instance(s). ---> CDB service , not used by EBS but needed for connections to CDB
Instance "CDB", status READY, has 1 handler(s) for this service...
Service "CDBXDB" has 1 instance(s). --> optional XDB service
Instance "CDB", status READY, has 1 handler(s) for this service...
Service "ebs_PDB" has 1 instance(s). ---> Created during post PDB steps , needed for EBS
Instance "CDB", status READY, has 1 handler(s) for this service...
The command completed successfully
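
As referenced in step 2, a minimal SQL sketch for confirming that LOCAL_LISTENER is not overridden inside the PDB and that the expected services are active (assumes a privileged connection; <PDB NAME> is a placeholder):

  -- From the CDB root: a row with a PDB CON_ID would indicate a PDB-level override.
  select con_id, name, value
  from   v$system_parameter
  where  name = 'local_listener';

  -- From the PDB: the active services should include the EBS services listed in step 3.
  alter session set container = "<PDB NAME>";
  select name from v$active_services order by name;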

APP-FND-00706: Routine Detected a data error

 

 Cause: Cursor leak in the database. The following file version contains the buggy code:

$Header fdfdds.lc 120.23.12020000.20 2019/04/25 09:45:45 <USER> ship $

Solution: After Upgrading To 12.2.9 Forms Fail: "Error APP-FND-00706: Routine Detected A Data Error" (Doc ID 2629037.1)
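
If you need to confirm the cursor leak before the fix is in place, a minimal diagnostic sketch using the standard session statistics views (assumes SELECT access on the V$ views):

  -- Sessions holding the most open cursors; a count that keeps growing for
  -- Forms sessions is consistent with a cursor leak.
  select s.sid, s.username, s.module, st.value as open_cursors
  from   v$sesstat st, v$statname n, v$session s
  where  n.name = 'opened cursors current'
  and    st.statistic# = n.statistic#
  and    s.sid = st.sid
  order  by st.value desc
  fetch first 20 rows only;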

ORA-04025: maximum allowed library object lock allocated for SELECT NLS_CHARSET_NAME

 The following errors are seen in the oacore log files during load testing:

oacore.default_group.2:292946577:-1:-1:GUEST(6):266001490:Thread[AJPRequestHandler-HTTPThreadGroup-6,5,HTTPThreadGroup]:1587728106:172.22.180.117:6160782:0:1341:ERROR:[fnd.common.ErrorStack]:java.sql.SQLException: ORA-20001: Oracle error -4025: ORA-04025: maximum allowed library object lock allocated for SELECT NLS_CHARSET_NAME(:B1 ) FROM SYS.DUAL
ORA-06512: at "SYS.STANDARD", line 270
ORA-06512: at "APPS.FND_GLOBAL", line 1219
 has been detected in fnd_global.set_nls.
ORA-06512: at "APPS.APP_EXCEPTION", line 72
ORA-06512: at "APPS.FND_GLOBAL", line 280
ORA-06512: at "APPS.FND_GLOBAL", line 1584
ORA-06512: at "APPS.FND_GLOBAL", line 2017
ORA-06512: at "APPS.FND_GLOBAL", line 2652
ORA-06512: at "APPS.FND_GLOBAL", line 2804
ORA-06512: at "APPS.FND_SESSION_MANAGEMENT", line 999
ORA-06512: at "APPS.FND_SESSION_MANAGEMENT", line 1304
ORA-06512: at "APPS.FND_AOLJ_UTIL", line 433
ORA-06512: at "APPS.FND_AOLJ_UTIL", line 329
ORA-06512: at line 1

    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:445)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
    at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:879)
    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:450)
    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:192)
    at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
    at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:204)

Apply database Patch 31820859: ORA-4025 DUE TO 65535 ACTIVE LOCKS LIMIT REACHED ON SELECT NLS_CHARSET_ID
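
Until the patch is applied, the library cache lock build-up can be watched with a simple sketch against v$db_object_cache (the 65535 limit in the error is per library cache object):

  -- Library cache objects with the highest lock counts; the cursor for
  -- SELECT NLS_CHARSET_NAME(:B1 ) FROM SYS.DUAL approaching 65535 locks
  -- matches the ORA-04025 condition above.
  select owner, name, type, locks, pins
  from   v$db_object_cache
  where  locks > 1000
  order  by locks desc
  fetch first 20 rows only;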

 

FRM-40735: ON-ERROR Trigger raised unhandled exception ORA-01001

 

 

Solution: Using Oracle 19c RAC Multitenant (Single PDB) with Oracle E-Business Suite Release 12.1 (Doc ID 2530680.1)

 

ORA-65011 when opening Forms after upgrading the EBS database to 19c

 Similar to the ORA-1017 errors in 12.2 described above.

 

Cause: The issue is related to service names.

           An E-Business Suite 12.1 database should have only one service name defined.

Solution:

1. The SERVICE_NAMES parameter should point to the CDB only.

2. Connect to the PDB and run the following SQL:

   select name from v$active_services;

   This should return the PDB name. If nothing is returned, shut down the EBS services, shut down the database listener, close the PDB, bounce the CDB, start the database listener, start the EBS services, and re-check (see the sketch below).

3. The service should be started automatically when the PDB is opened.
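
A minimal SQL*Plus sketch of the re-check and bounce sequence described above (EBS application-tier and listener stop/start happen outside the database; <PDB NAME> is a placeholder):

  -- 1. From the PDB: check the active services.
  alter session set container = "<PDB NAME>";
  select name from v$active_services;

  -- 2. If the PDB service is missing: with EBS services and the listener stopped,
  --    close the PDB, bounce the CDB, then reopen the PDB (if it is not opened
  --    automatically by a saved state) and re-register.
  alter session set container = CDB$ROOT;
  alter pluggable database "<PDB NAME>" close immediate;
  shutdown immediate
  startup
  alter pluggable database "<PDB NAME>" open;
  alter system register;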

Login Issues

Issue / Solution
Intermittent login issues after 19c upgrade.

End users may see the following login issues:
- AppsLocalLogin.jsp shows a copyright date of 2014
- The login page renders without style sheets
- "The system has encountered an error"
- Uncaught exception

Apply database Patch 31820859

 

Upgrade Issues

Issue / Solution
Running noncdb_to_pdb.sql fails with ORA-01219 : DATABASE OR PLUGGABLE DATABASE NOT OPEN: QUERIES ALLOWED ON FIXED TABLES OR VIEWS ONLY.

When a non-CDB that has WALLET_ROOT and TDE_CONFIGURATION configured is cloned, or described and plugged in as a PDB, opening the newly created PDB fails with ORA-01219: DATABASE OR PLUGGABLE DATABASE NOT OPEN: QUERIES ALLOWED ON FIXED TABLES OR VIEWS ONLY.

Cause:

The following bug has been identified in the EBS database TDE configuration. The issue occurs while performing the database upgrade to 19c.

BUG 31494095 - NONCDB_TO_PDB.SQL FAILS WITH ORA-01219: DATABASE OR PLUGGABLE DATABASE NOT OPEN: QUERIES ALLOWED ON FIXED TABLES OR VIEWS ONLY

Solution:

Apply Patch 31494095
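
A quick check that can be run on the source non-CDB before describing/plugging it in, to see whether the configuration described in Bug 31494095 applies:

  -- Non-null values for both parameters match the WALLET_ROOT/TDE_CONFIGURATION
  -- setup referenced in the bug.
  select name, value
  from   v$parameter
  where  name in ('wallet_root', 'tde_configuration');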

Autoconfig Run Fails With Error 'tackTrace: java.lang.NullPointerException at oracle.apps.ad.context.GenerateContext.getPDBCDB' Due To Missing 'SELECT_CATALOG_ROLE' In Adgrants.sql

When upgrading the database from 11.2.0.4 to 19c with EBS version R12 using Doc ID 2580629.1, Autoconfig failed with the following error:

"Autoconfig could not create context file."

 

R12 E-Business Suite Upgrade To 19c Autoconfig Run Fails With Error 'tackTrace: java.lang.NullPointerException at oracle.apps.ad.context.GenerateContext.getPDBCDB' Due To Missing 'SELECT_CATALOG_ROLE' In Adgrants.sql (Doc ID 2672303.1)
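
The doc title points to a missing SELECT_CATALOG_ROLE grant in adgrants.sql as the root cause. Purely as an illustration, and assuming APPS is the EBS schema, the missing grant would look like the sketch below; follow Doc ID 2672303.1 (re-running the corrected adgrants.sql) for the supported fix.

  -- Hypothetical illustration only; Doc ID 2672303.1 has the supported steps.
  grant select_catalog_role to apps;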

Autoconfig runs slower after the 19c upgrade.

The slow SQL was:
SELECT UO.object_name, aa.object_name procedure_name, aa.ARGUMENT_NAME
FROM user_objects uo, all_arguments aa
WHERE uo.object_name = 'FND_GSM_UTIL'
AND aa.object_id = uo.object_id
AND aa.object_name = 'UPLOAD_CONTEXT_FILE'
AND object_type = 'PACKAGE'
ORDER BY 1, 2, 3

The SQL is modified to select from user_arguments instead of all_arguments (see the sketch below for an illustration). The fix is in Patch 31138017:R12.TXK.C.

Refer to 12.2 E-Business Technology Stack Autoconfig Performance Issue After Upgrade to 19c Database (Doc ID 2684666.1) for more details.
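
As an illustration of the change (the actual fix ships in Patch 31138017:R12.TXK.C), the reworked query with all_arguments swapped for user_arguments would look like:

  -- Same lookup as above, restricted to the connected schema's own arguments,
  -- which avoids the expensive all_arguments scan.
  SELECT uo.object_name, ua.object_name procedure_name, ua.ARGUMENT_NAME
  FROM user_objects uo, user_arguments ua
  WHERE uo.object_name = 'FND_GSM_UTIL'
  AND ua.object_id = uo.object_id
  AND ua.object_name = 'UPLOAD_CONTEXT_FILE'
  AND uo.object_type = 'PACKAGE'
  ORDER BY 1, 2, 3;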

Running Autoconfig on the application tier fails after the 19c upgrade - Unable to read UTL_FILE_DIR parameter

ERROR DESCRIPTION:
*******FATAL ERROR*******
PROGRAM : (/fs2/EBSapps/appl/fnd/12.0.0/patch/115/bin/txkCfgUtlfileDir.pl)
TIME :
FUNCTION: main::getUtlFileDirParam [ Level 1 ]
ERRORMSG: Unable to read UTL_FILE_DIR parameter.

ERRORCODE = 1 ERRORCODE_END
.end std out.

The following select also fails to return results:

select value from v$parameter where name='utl_file_dir';

 

R12 E-Business Suite Technology Stack Post 19c Upgrade, Running Autoconfig Script txkCfgUtlfileDir.pl Fails With Fatal Error In FUNCTION: main::getUtlFileDirParam ERRORMSG: Unable to read UTL_FILE_DIR parameter (Doc ID 2685022.1)
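
Since UTL_FILE_DIR is desupported in 19c, EBS PL/SQL file I/O moves to database directory objects (see Doc ID 2525754.1 in the references). A simple check of the directory objects currently defined:

  -- Directory objects available for PL/SQL file I/O in the 19c database.
  select directory_name, directory_path
  from   dba_directories
  order  by directory_name;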

Unable to call FND_LDAP_WRAPPER.CREATE USER  - ORA-06508: PL/SQL: could not find program unit being called

While creating a user in EBS 12.2.4 after migrating the database to 19c, you can run into this problem; if you upgraded to 19c prior to June, you may need to apply Patch 31088182:

Unable to call FND_LDAP_WRAPPER.CREATE USER due to the following reason:
   ORA-06508: PL/SQL: could not find program unit being called.
   (USER_NAME=<USER>)
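
ORA-06508 generally means the called PL/SQL unit is invalid or missing, so a quick check before and after applying the patch can be run with:

  -- Invalid FND_LDAP% objects; recompile or re-apply the patch if
  -- FND_LDAP_WRAPPER or its dependencies appear here.
  select owner, object_name, object_type, status
  from   dba_objects
  where  status = 'INVALID'
  and    object_name like 'FND_LDAP%'
  order  by owner, object_name;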


References:

NOTE:2567105.1 - FAQ: Oracle E-Business Suite and the Oracle Multitenant Architecture
NOTE:2826133.1 - Manually Re-import the Secret Key in E-Business Suite Database On Premises & Cloud
NOTE:2655839.2 - Information Center: Oracle Database 19c with Oracle E-Business Suite 12.2 and 12.1
NOTE:2580629.1 - Interoperability Notes: Oracle E-Business Suite Release 12.1 with Oracle Database 19c
NOTE:2683670.1 - Previous Oracle Database 19c Release Update Database Patch Lists for Oracle E-Business Suite
NOTE:2716307.1 - Refresh Collection Snapshots Running Long After DB Upgrade
NOTE:1594274.1 - Oracle E-Business Suite Release 12.2: Consolidated List of Oracle Database Patches and Bug Fixes
NOTE:2525754.1 - Using UTL_FILE_DIR or Database Directories for PL/SQL File I/O in Oracle E-Business Suite Releases 12.1 and 12.2
NOTE:2611336.1 - Massive Performance Degradation On Statistics For Large Partitioned Tables After Upgrade To 19c

19c Database with Oracle E-Business Suite R12 Known Issues & Solutions (Doc ID 2662860.1)
