Wednesday, August 21, 2024

M5 Cross Endian Platform Migration using Full Transportable Data Pump Export/Import and RMAN Incremental Backups (Doc ID 2999157.1)

 


Prerequisites

High level migration workflow

Detailed migration workflow

Migration process explained

Appendix

Cross platform database migration is the process of moving databases to a new platform, including Exadata Database Machine, Exadata Cloud@Customer, Exadata Database Service, etc. This note provides a simple, reliable, and fast migration solution with minimal downtime.

The information below will guide you in performing a cross platform (big endian to little endian, vice versa, or same platform when the Data Guard option is not available) database migration.

Note:
1. This procedure only supports Oracle Database 19.18 or higher on source and destination.
2. Tablespaces SYSTEM, SYSAUX, and UNDO are excluded from the migration procedure.
3. RMAN backup encryption is not supported with this procedure.
4. This procedure supports using multi-section backups (SECTION SIZE) to perform faster backups of bigfile tablespaces.
5. This procedure supports migration from non-container database to pluggable database.
6. This procedure supports migration of pluggable databases across platforms.
7. This procedure supports migration using a physical standby as the source database.
8. This procedure supports migration of TDE encrypted database.

 

If your environment cannot meet the prerequisites of the supported version in this note, the other alternative is to use V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)

Prerequisites

  • Review MOS note Data Pump Recommended Proactive Patches For 19.10 and Above (Doc ID 2819284.1) and apply all the required fixes for the 19.18 or higher database on both source and destination.
    • Data Pump recommended fixes are required to address known issues and improve performance. This is a mandatory prerequisite.
  • Evaluate whether the below RMAN 19c database fixes apply to your migration.
    • Ensure to have fix for bug 35564253 applied on your destination database.
    • If using physical standby database as a migration source, fix for bug 35675516 is required for your destination database.
    • If Transparent Data Encryption (TDE) is used in the source database, fix for bug 30817822 is required for your destination database. This fix is already included in release update 19.20.0.0 and above.
  • This procedure works with Zero Data Loss Recovery Appliance and any local storage, such as NFS.
    • When using Zero Data Loss Recovery Appliance ensure to have the latest libra.so (ZDLRA: Where to download new sbt library (libra.so module) (Doc ID 2219812.1))
    • When using Zero Data Loss Recovery Appliance for Cross Endian migration, the backups taken will not be virtualized (i.e., stored in RMAN format). They will be replicated to a downstream ZDLRA if configured (See Appendix Section 6). 
  • The db_create_file_dest parameter must be set on the destination database.
  • Capture all RMAN and Data Pump logs for troubleshooting the migration procedure.
  • When using NFS as a backup and restore target, ensure that you have write privileges from the source database and read privileges from the destination database.
  • When using TDE, the keys should be exported from the source and imported on the destination prior to starting the migration process (See Appendix Section 5)
  • When using the Zero Data Loss Recovery Appliance as a backup and restore target for source database backups, uncomment the variable BKP_DEST_TYPE=SBT_TAPE in the properties file and comment out the variable BKP_DEST_TYPE=DISK. Also, update the variable BKP_DEST_PARM with the Zero Data Loss Recovery Appliance SBT_TAPE configuration and parameters. Only one variable for the target backup destination is accepted at a given time. Ensure that your protected database is enrolled accordingly and with a proper Recovery Window Goal; more information can be found under Configuring Recovery Appliance for Protected Database Access.
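A minimal pre-flight check for the storage prerequisites above; the paths are illustrative (borrowed from the sample properties file later in this note):

-- On the destination database: verify db_create_file_dest is set
SQL> show parameter db_create_file_dest

# On the source host: confirm write access to the NFS backup location
$ touch /SHAREDY/DBMIGBKP/testfile && rm /SHAREDY/DBMIGBKP/testfile

# On the destination host: confirm read access to the same location
$ ls -l /SHAREDY/DBMIGBKP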

High level migration workflow

 

  • Perform a level 0 backup on the source
  • Restore the L0 backup taken previously on the destination
  • Perform incremental backups on the source and apply them on the destination. Repeat this procedure until the final incremental backup is taken, which requires only a minimal read-only downtime window.
  • Downtime Begins
    • Source Data Tablespaces converted to READ ONLY
    • Final Incremental backup on source
    • Metadata Tablespace Export on source
    • Apply Incremental on destination.
    • Metadata tablespace Import on destination.
  • Downtime Ends

Detailed migration workflow

[Figure: detailed migration workflow diagram]

 

 

Note:
1. All steps in this procedure are run as the oracle user that is a member of the OSDBA group. OS authentication is used to connect to both the source and destination databases.
2. This procedure assumes the destination database has already been created on the destination platform and meets the specific requirements (COMPATIBLE parameter, character set, and database options on both source and destination databases)
3. General Limitations on Transporting Data are documented here:
a. https://docs.oracle.com/en/database/oracle/oracle-database/19/admin/transporting-data.html#GUID-28800719-6CB9-4A71-95DD-4B61AA603173.
b. Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where Applicable (Note 1454872.1)
c. Primary Note for Transportable Tablespaces (TTS) -- Common Questions and Issues (Note 1166564.1)
4. Cross platform is only possible with Enterprise Edition. This procedure cannot be used with Standard Edition.
5. The source and destination databases COMPATIBLE parameter must be the same, minimum 19.0.0.0.
6. RMAN configuration on the source system must not have BACKUP TYPE TO COPY. The source must have BACKUP TYPE TO BACKUPSET.
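Notes 5 and 6 can be verified up front. A minimal check, run against the source (repeat the COMPATIBLE check on the destination):

SQL> show parameter compatible

$ rman target /
RMAN> SHOW DEVICE TYPE;
RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET;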

 

Migration process explained

 

The migration relies on a driver script that reads a properties file and performs tablespace backup iterations on the source database using a 'from SCN' based on the previous backup. The backup iterations are L0 (full backup), L1 (incremental backup), and L1F (final incremental backup). See Appendix Section 2.

For ease of use, the environment variables and other important variables required by the driver script are consolidated in a properties file (see Appendix Section 1); the list of tablespaces to be migrated has its own file (see Appendix Section 2).

For more agility and simplicity, you can set up SSH equivalence between the source and destination servers if your security policies allow. SSH equivalence allows the required scripts to be copied automatically between the source and destination.

Note: If SSH equivalence cannot be established, the RMAN command file generated under the cmd directory on the source database should be copied manually to the destination and executed after each backup is taken on the source database. If NFS is not used, or the source and destination do not share the directory where the backup pieces are generated, the backup pieces must be made available to the destination database, and the RMAN restore command file must be modified to include the new path to which the backup pieces were copied.

During the final backup, the Data Pump export file should also be copied manually from the source database to the destination database's DATA_PUMP_DIR directory. If the destination is a RAC database, ensure the export file is copied to a location shared by all nodes. An example of these manual copies follows.
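If SSH equivalence is not in place, the manual copies described above look like the following; the hostnames and paths are illustrative:

# Copy the generated RMAN restore script to the destination working directory
$ scp cmd/restore_L1F_* oracle@dsthost:/u01/app/oracle/m5/cmd/

# Copy the Data Pump dump file to the destination DATA_PUMP_DIR path
$ scp /u01/app/admin/soldb1918/dpdump/exp_*.dmp oracle@dsthost:/u01/app/oracle/m5/data_pump_dir/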

 

As previously mentioned, when using the Zero Data Loss Recovery Appliance as a backup and restore target for source database backups, the backups will not be part of the block pool, i.e., they won't be virtualized. To achieve this, the backups must be stalled from being ingested. See Appendix Section 4.

 

During the migration process, the user can issue as many incremental backups as required. The idea is to increase the frequency of level 1 backups and thereby reduce their size; smaller backups are backed up, converted, and restored more quickly. When performing the final incremental level 1 backup, the tablespaces to be migrated are put in read-only mode and the backup is taken. On the destination environment, restore the last backup and then execute the impdp.sh script to plug in the tablespace metadata.

 

The migration can be performed from a physical standby database if one exists. This removes the load from the primary database. The migration steps are similar, except that at the last step the standby database is converted to a snapshot standby, the metadata export is performed, and the database is converted back to a physical standby. The automation uses the Data Guard broker to perform these operations, as sketched below.
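For reference, the broker operations used by the automation are equivalent to the following; the database name and connect string are illustrative:

$ dgmgrl sys@<primary-connect-string>
DGMGRL> CONVERT DATABASE 'stby' TO SNAPSHOT STANDBY;
(the metadata export runs against the snapshot standby at this point)
DGMGRL> CONVERT DATABASE 'stby' TO PHYSICAL STANDBY;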

 

For additional guidance on migrating pluggable databases refer to Appendix Section 6.

 

The final incremental backup (L1F step) is the last step before the import. During this task, the user will be asked to provide a privileged user password on the source database. This is needed to put the tablespaces being transported into read-only mode and to perform the export. On the destination, impdp.sh will prompt for the system password during the import of the transportable tablespaces. A sketch of what this stage performs is shown below.
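A minimal sketch of what the driver performs at this stage; the actual statements are generated by the script, and the tablespace, dump file, and user names here are illustrative:

SQL> ALTER TABLESPACE JSXTTS_1 READ ONLY;

$ expdp system full=y transportable=always dumpfile=exp_UP19_240206134155.dmp directory=DATA_PUMP_DIR logtime=all metrics=yes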

 

Step-by-Step Migration

 

1. Preparation

  1. Create a working directory on an NFS share that is shared between source and destination. The path to the NFS share should be identical on both source and destination.
    1. If an NFS share is not possible, use any local storage. Create the directory on the source host and copy it to the destination host. This needs to be repeated multiple times throughout the process. The path on the source and destination hosts should be identical; if not, various changes to the scripts are required, which is beyond the scope of this procedure.
  2. Download DBMIG.zip (md5sum 24b5acb9e7cbe600de7547faf1e86962) to your working directory.
  3. Unzip DBMIG.zip. The file contains:
      1. dbmig_driver_m5.sh: Migration driver script.
      2. impdp.sh: Import driver script.
      3. log directory: Created at runtime. Stores the RMAN backup logs, migration logs, and additional logs, including:
        1. rman_mig_bkp.log: The centralized migration logfile.
        2. rman_mig_bkp.lck: Lock file created at the beginning of every backup to prevent concurrent driver executions. If a driver execution aborts, the lock will not be removed; investigate and resolve the problem before removing the lock.
      4. cmd directory: Created at runtime. Stores the RMAN backup and restore scripts in addition to:
        1. dbmig_driver.properties: Contains the environment variables required for the migration; see Appendix section 1 below for additional information. This file is user-modifiable.
        2. dbmig_ts_list.txt: Contains the list of tablespaces to be migrated; names are comma-separated without quotes; see Appendix section 2. This file is user-modifiable.
  4. Modify the variables in dbmig_driver.properties (in the cmd directory) to reflect the environment you are using; refer to Appendix section 1 for descriptions of the variables.
      1. If backing up to disk (BKP_DEST_TYPE=DISK), it is recommended to store the RMAN backup on the shared NFS. Set BKP_DEST_PARM to a subdirectory in the working directory.
      2. If possible, for ease of use, store the Data Pump dump file in a subdirectory in the working directory. Change DIRECTORY_PATH of the DATA_PUMP_DIR directory object.
      3. If backing up to ZDLRA (BKP_DEST_TYPE=SBT_TAPE) and you don't use a shared NFS, you can establish SSH equivalence between the source and destination servers. The script can then automatically transfer the RMAN restore scripts and the Data Pump dump file. Set DEST_SERVER to the network name of the destination database. Otherwise, leave it blank.
  5. Update dbmig_ts_list.txt with the tablespace names you are migrating. See Appendix section 2 for details, and the self-containment check sketched after this list.
  6. Create a new, empty destination database (non-CDB or PDB). The destination database must be on the same or higher release. The destination may be on a higher patch level than the source.
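Before finalizing the tablespace list in step 5, you can verify on the source that the set is self-contained; the tablespace names are illustrative, following the sample in Appendix Section 2:

SQL> EXEC DBMS_TTS.TRANSPORT_SET_CHECK('JSXTTS_1,JSXTTS_2', TRUE);
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;

no rows selected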

2. Level 0 Backup

  1. On source, execute an initial level 0 (L0) backup of the source database:
    1. $ sh dbmig_driver_m5.sh L0
  2. If not using a shared NFS, copy the working directory to the destination host.
  3. On destination, set the environment to the destination database.
  4. The driver script creates a restore script. It is stored in the cmd directory:
    ls -ltra cmd/restore_L0_*  
  5. Restore the level 0 (L0) backup on the destination database using the latest restore script. Ensure the paths to libra.so and the wallet are valid in the restore script.
    1. For non-CDB using backups on disk:
      $ rman target / cmdfile=<restore_cmdfile_name>           
    2. For non-CDB using ZDLRA:
      $ rman target / catalog /@scaningest:<port>/<service_name> cmdfile=<restore_cmdfile_name>
    3. For PDB connect directly into PDB
      $ rman target sys@<pdb-connect-string> cmdfile=<restore_cmdfile_name>
  6. Check the RMAN log file:
    $ egrep "WARN-|ORA-" log/restore_*.log
  7. In case of warnings or errors in the log file, contact Oracle Support.

 

3. Level 1 Backup

  1. On source, execute a level 1 (L1) incremental backup of the source database:
    1. $ sh dbmig_driver_m5.sh L1
  2. If not using a shared NFS, copy the working directory to the destination host.
  3. On destination, set the environment to the destination database.
  4. The driver script creates a restore script. It is stored in the cmd directory:
    ls -ltra cmd/restore_L1_*  
  5. Restore the level 1 (L1) backup on the destination database using the latest restore script. Ensure the paths to libra.so and the wallet are valid in the restore script.
    1. For non-CDB using backups on disk:
      $ rman target / cmdfile=<restore_cmdfile_name>
    2. For non-CDB using ZDLRA:
      $ rman target / catalog /@scaningest:<port>/<service_name> cmdfile=<restore_cmdfile_name>
    3. For PDB connect directly into PDB
      $ rman target sys@<pdb-connect-string> cmdfile=<restore_cmdfile_name>
  6. Check the RMAN log file:
    $ egrep "WARN-|ORA-" log/restore_*.log
  7. In case of warnings or errors in the log file, contact Oracle Support.
  8. Repeat steps 1-6 at regular intervals until the final level 1 incremental backup.

 

4. Final Level 1 Backup

  1. Downtime starts now.
  2. On source, execute the final level 1 (L1) incremental backup of the source database:
    1. $ sh dbmig_driver_m5.sh L1F
    2. The script prompts for the system password. This is the password of the user specified in SYSTEM_USR in the properties file.
    3. Then, it sets the specified tablespaces in read-only mode.
    4. Then, it starts the final level 1 incremental backup.
    5. Finally, it starts Data Pump to perform a full transportable export.
  3. Review the Data Pump export log file.
  4. If not using a shared NFS, copy the working directory to the destination host.
  5. If desired, you can shut down the source database at this point. It is no longer needed.
  6. On destination, set the environment to the destination database.
  7. The driver script creates a restore script. It is stored in the cmd directory:
    ls -ltra cmd/restore_L1F_*  
  8. Restore the final level 1 (L1) backup on the destination database using the final restore script.
    1. For non-CDB using backups on disk:
      $ rman target / cmdfile=<restore_cmdfile_name>
    2. For non-CDB using ZDLRA:
      $ rman target / catalog /@scaningest:<port>/<service_name> cmdfile=<restore_cmdfile_name>
    3. For PDB connect directly into PDB
      $ rman target sys@<pdb-connect-string> cmdfile=<restore_cmdfile_name>
  9. Check the RMAN log file:
    $ egrep "WARN-|ORA-" log/restore_*.log
  10. In case of warnings or errors in the log file, contact Oracle Support.

 

5. Data Pump Import

  1. On destination, edit the import driver script (impdp.sh). Use information about the destination database to fill in the variables
    1. ORACLE_HOME: The path to the Oracle home of the destination database.
    2. ORACLE_SID: SID of the destination database.
    3. ORACLE_CONNECT_STRING: A connect string directly into the destination non-CDB database or destination PDB.
    4. DATA_PUMP_PARALLEL: Data Pump parallel level for the import. Only applicable if the destination database is Oracle Database 21c or later.
    5. DATA_PUMP_TRACE: Data Pump trace level. Set to 0 to disable trace.
  2. The import driver script takes four parameters:
    1. expdp_dumpfile: Name of the Data Pump dump file, e.g., exp_UP19_240206134155.dmp
    2. rman_last_restore_log: Relative path to the last RMAN restore log, e.g., log/restore_L1F_UP19_240206134155.log
    3. run-mode: One of the below:
      1. test: Generates the Data Pump parameter file. Does not start Data Pump.
      2. test-readonly: Generates the Data Pump parameter file and adds TRANSPORTABLE=KEEP_READ_ONLY to the Data Pump parameter file. Review the documentation for the usage of this parameter. Does not start Data Pump.
      3. run: Generates the Data Pump parameter file. Starts Data Pump to perform the import.
      4. run-readonly: Generates the Data Pump parameter file and adds TRANSPORTABLE=KEEP_READ_ONLY to the Data Pump parameter file. Review the documentation for the usage of this parameter. Starts Data Pump to perform the import.
    4. encryption_pwd_prompt (DP_ENC_PROMPT): One of the below:
      1. Y: Will prompt the user for an encryption password in case one was used for the export
      2. N: Will not prompt the user for an encryption password.
  3. Start the import driver script in test mode to verify the correctness of the generated Data Pump parameter file:
    1. $ sh impdp.sh <expdp_dumpfile> <rman_last_restore_log> test <Y|N>
  4. The driver script generates the parameter file in the current directory. Examine it.
    1. $ vi imp_<oracle_sid>_<timestamp>_xtts.par
  5. Do not change the generated Data Pump parameter file. If you need to change the parameter file, change the code in the driver script, so it generates the desired parameter file.
  6. Copy the Data Pump dump file to the directory path specified by the DATA_PUMP_DIR directory object in the destination database. Use the below query to determine the directory path. If destination is a PDB, be sure to connect to the PDB to execute the query:
    1. SQL> select directory_path from dba_directories where directory_name='DATA_PUMP_DIR';
  7. Start the import driver script in run mode to perform the Data Pump import:
    1. $ sh impdp.sh <expdp_dumpfile> <rman_last_restore_log> run <Y|N>
    2. The script starts by creating a guaranteed restore point. If the destination is a PDB, the script creates the restore point at CDB level. The restore point is only useful for testing in combination with TRANSPORTABLE=KEEP_READ_ONLY; unless you use this option, re-use of the data files is not possible.
    3. Then, it starts Data Pump to perform a full transportable import. Data Pump prompts for the password of the user executing the import.
  8. Always review the Data Pump log file. Even if the Data Pump job is listed as successful, you should review the log file for any critical errors.
  9. When you have determined to go live, remove the previously created restore point, as shown below.
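For example; the restore point name includes the script's timestamp, shown here with an illustrative value, and if the destination is a PDB, connect to the CDB:

SQL> SELECT name FROM v$restore_point WHERE guarantee_flashback_database = 'YES';
SQL> DROP RESTORE POINT BEFORE_IMP_240206134155;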

 

Note:
If a service request needs to be opened with Oracle Support for an issue related to this procedure, upload the following:

1. The whole DBMIG directory, including its sub-directories, from the source database
2. The working directory from the destination; this includes the RMAN restore files, trace files, log files, and the import Data Pump par file(s) and log(s)
3. From both databases: the alert log files, opatch lsinventory output, and the *dm* and *dw* trace files generated in the database dump directory

 

Appendix Section 1

dbmig_driver.properties file explained

 

Parameter | Description | Default value | Modifiable
----------|-------------|---------------|-----------
ORACLE_HOME | The Oracle home of the source big endian database | User defined | Yes
ORACLE_SID | The SID of the source big endian database | User defined | Yes
SRC_SCAN | Source database Easy Connect string | User defined | Yes
MIG_PDB | Set to 1 when migrating a pluggable database, otherwise 0 for non-CDB | User defined | Yes
PDB_NAME | Pluggable database from which the tablespaces are being migrated | User defined | Yes
BKP_FROM_STDBY | Set to 1 when migrating using a standby database as source, otherwise 0 when migrating from the primary | User defined | Yes
SOURCE_DPDMP | Data Pump file location on the source | User defined | Yes
SYSTEM_USR | Privileged user that sets the tablespaces read-only and performs the Data Pump export; SYSTEM is recommended | Inherited in the script | No
DP_TRACE | Data Pump trace level. Use 0 to disable trace; use 3FF0300 for full transportable tracing (see MOS Doc ID 286496.1) | 0 | Directed by Support
DP_PARALLEL | Data Pump parallel setting. Accepted values: 1 to 999. Example: 16 | 1 | Yes
DP_ENC_PROMPT | Specifies whether Oracle Data Pump prompts for the encryption password | N | Yes
BKP_DEST_TYPE | Location of backups. Accepted values: DISK (back up to local storage) or SBT_TAPE (use ZDLRA) | User defined | Yes
BKP_DEST_PARM | If BKP_DEST_TYPE=DISK, the backup location, e.g. /u01/app/oracle/m5/rman. If BKP_DEST_TYPE=SBT_TAPE, the channel configuration, e.g. "'%d_%U' PARMS \"SBT_LIBRARY=<oracle_home>/lib/libra.so,SBT_PARMS=(RA_WALLET='location=file:<oracle_home>/dbs/zdlra credential_alias=<zdlra-connect-string>')\"" | User defined | Yes
CAT_CRED | If you use ZDLRA, the catalog connect string, e.g. <scan-name>:<port>/<service> | User defined | Yes
SECTION_SIZE | Section size used in RMAN backups | 64G | Yes
CHN | Number of RMAN channels allocated | User defined | Yes
DEST_SERVER | Network name of the destination server. Leave blank if you manually transfer backups and dump files | User defined | Yes
DEST_USER | Destination OS username. If SSH equivalence is established between source and destination, passwordless SCP is used to copy files from source to destination. Example: oracle | User defined | Yes
DEST_WORKDIR | Destination working directory storing the RMAN restore scripts, logs, traces, import script, and par files | User defined | Yes
DEST_DPDMP | The directory path used by DATA_PUMP_DIR on the destination, e.g. /u01/app/oracle/m5/data_pump_dir | User defined | Yes
WORKDIR | The working directory containing the migration script and the cmd and log directories on the source | User defined | Yes
LOG_DIR | The log directory under WORKDIR holding the backup log files and others on the source | Inherited in the script | No
CMD_DIR | The cmd directory under WORKDIR holding the backup and restore scripts on the source | Inherited in the script | No
DT, CMD_MKDIR, CMD_TOUCH, CMD_CAT, CMD_RM, CMD_AWK, CMD_SCP | Additional variables that set the date format and load OS commands | Inherited in the script | No
TS | The migrated tablespace names in list format | User defined | Yes
TSF | The migrated tablespaces filename | | No
SCN | File to keep track of the SCN when the backup was run | Inherited in the script | No

 

dbmig_driver.properties sample file

 

############################################################
#Source database properties
#my_M5_prop_version=2
# -
# - ORACLE_HOME Path to Oracle Home
# - ORACLE_SID SID of the source database
# - SRC_SCAN Connect string to source database via SCAN.
# If no SCAN, specify source database network name.
# Enclose in single quotes
# Example: '@myhost-scan/db1'
# Example: '@localhost/pdb1'
# - MIG_PDB Accepted values: 0, 1
# Choose 0 if source is non-CDB
# Choose 1 if source is a PDB
# - PDB_NAME If source is a PDB, specify PDB name.
# Else leave blank
# Example: PDB1
# - BKP_FROM_STDBY Accepted values: 0, 1
# Choose 0 to back up from primary database,
# or if Data Guard is not in use.
# Choose 1 to back up from standby database.
############################################################
export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
export PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_SID=soldb1918
export SRC_SCAN='@scan1/soldb1918'
export MIG_PDB=0
export PDB_NAME=
export BKP_FROM_STDBY=0
############################################################
#Source Data Pump settings
# - SOURCE_DPDMP Directory path of the directory DATA_PUMP_DIR
# Example: /u01/app/oracle/m5/data_pump_dir
# - SOURCE_DPDIR Data Pump Directory, typically DATA_PUMP_DIR
# - SYSTEM_USR Username for Data Pump export.
# Do not use SYS AS SYSDBA
# Example: SYSTEM
# - DP_TRACE Data Pump trace level.
# Use 0 to disable trace.
# Use 3FF0300 to full transportable tracing
# See MOS Doc ID 286496.1 for details.
# - DP_PARALLEL Data Pump parallel setting.
# Accepted values: 1 to 999
# Example: 16
############################################################
export SOURCE_DPDMP=/u01/app/admin/soldb1918/dpdump
export SOURCE_DPDIR=DATA_PUMP_DIR
export SYSTEM_USR=SYSTEM
export DP_TRACE=0
export DP_PARALLEL=1
export DP_ENC_PROMPT=N
############################################################
#Source RMAN settings
# - BKP_DEST_TYPE Accepted values: DISK, SBT_TAPE
# Choose DISK to backup up to local storage
# Choose SBT_TAPE to use ZDLRA
# - BKP_DEST_PARM If BKP_DEST_TYPE=DISK, enter location for backup:
# Example: /u01/app/oracle/m5/rman
# If BKP_DEST_TYPE=SBT_TAPE, enter channel configuration:
# Example: "'%d_%U' PARMS \"SBT_LIBRARY=<oracle_home>/lib/libra.so,SBT_PARMS=(RA_WALLET='location=file:<oracle_home>/dbs/zdlra credential_alias=<zdlra-connect-string>')\""
# - CAT_CRED If you use RMAN catalog or ZDLRA, specify connect string to catalog database
# Example: <scan-name>:<port>/<service>
# - SECTION_SIZE Section size used in RMAN backups
# - CHN Number of RMAN channels allocated
############################################################
export BKP_DEST_TYPE=DISK
export BKP_DEST_PARM=/SHAREDY/DBMIGBKP/SOLDB1918
export CAT_CRED=
export SECTION_SIZE=64G
export CHN=8
############################################################
#Destination host settings
#If specified, the script transfers the RMAN backups and
#Data Pump dump file to the destination over SSH.
#SSH equivalence is required.
# - DEST_SERVER Network name of the destination server.
# Leave blank if you manually transfer
# backups and dump files
# - DEST_USER User for SSH connection
# Example: oracle
# - DEST_WORKDIR The script working directory on destination
# Example: /u01/app/oracle/m5
# - DEST_DPDMP The directory path used by DATA_PUMP_DIR
# in destination database
# Example: /u01/app/oracle/m5/data_pump_dir
############################################################
export DEST_SERVER=srvadm01
export DEST_USER=oracle
export DEST_WORKDIR=/SHAREDY/STAGE/CROSSPLATMIG/PRIMARY
export DEST_DPDMP=/u01/app/oracle/product/19.0.0.0/dbhome_13/rdbms/log

############################################################
#Advanced settings
#Normally, you don't need to edit this section
############################################################
export WORKDIR=$PWD
export LOG_DIR=${WORKDIR}/log
export CMD_DIR=${WORKDIR}/cmd
export PATH=$PATH:$ORACLE_HOME/bin
export DT=`date +%y%m%d%H%M%S`
export CMD_MKDIR=`which mkdir`
export CMD_TOUCH=`which touch`
export CMD_CAT=`which cat`
export CMD_RM=`which rm`
export CMD_AWK=`which awk`
export CMD_SCP=`which scp`
export CMD_CUT=`which cut`
export CMD_PLATFORM=`uname`
if [[ "$CMD_PLATFORM" = "Linux" ]]; then
    export CMD_GREP="/usr/bin/grep"
elif [[ "$CMD_PLATFORM" = "AIX" ]]; then
    export CMD_GREP="/usr/bin/grep"
elif [[ "$CMD_PLATFORM" = "HPUX" ]]; then
    export CMD_GREP="/usr/bin/grep"
else
    export CMD_GREP=`which ggrep`
fi
export my_M5_prop_version=2

 

Appendix Section 2

Structure of the dbmig_ts_list.txt file

This file contains the names of the tablespaces to be migrated; the names must be separated by commas only. See the sample below.

$ cat dbmig_ts_list.txt

 

JSXTTS_1,JSXTTS_2,JSXTTS_3,JSXTTS_4,JSXTTS_5,JSXTTS_6,JSXTTS_7,JSXTTS_8,JSXTTS_9,JSXTTS_10

 

 

Appendix Section 3

The file impdp.sh should be copied to the destination working directory.

Export ORACLE_HOME, ORACLE_SID, and ORACLE_PDB_SID (if required) to reflect the database you are migrating to, and enter the password for the user performing the import when prompted. The script requires four parameters: the export dump file name (displayed during the final L1 backup on the source), the last RMAN restore log, the run mode, and the encryption password prompt flag (Y or N). The test run modes let you check the import parameter file for correctness before executing a real run with one of the run modes.

 

Impdp.sh runs as follows:

$ sh impdp.sh <exp.dmp> <log/res_L1F.log> [test|test-readonly|run-readonly|run] <encryption_pwd_prompt[Y|N]>
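For example, using the illustrative file names from the import section above:

$ sh impdp.sh exp_UP19_240206134155.dmp log/restore_L1F_UP19_240206134155.log test N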

 

Run modes explained:

test
Runs the script and produces an imp_${ORACLE_SID}_xtts.par file without the parameter TRANSPORTABLE=KEEP_READ_ONLY. Check the par file for correctness. Example:
$ sh impdp.sh <exp.dmp> <res.log> test N

test-readonly
Runs the script and produces an imp_${ORACLE_SID}_xtts.par file with the parameter TRANSPORTABLE=KEEP_READ_ONLY. Check the par file for correctness.

run
Runs the script, produces an imp_${ORACLE_SID}_xtts.par file without the parameter TRANSPORTABLE=KEEP_READ_ONLY, creates a guaranteed restore point, and performs the import.

run-readonly
Runs the script, produces an imp_${ORACLE_SID}_xtts.par file with the parameter TRANSPORTABLE=KEEP_READ_ONLY, creates a guaranteed restore point, and performs the import.

 

 

$ cat impdp.sh
#!/bin/bash

if [ $# -ne 4 ]; then
        echo "Please call this script using the syntax $0 "
        echo "Example: # sh impdp.sh <expdp_dumpfile> <rman_last_restore_log> [run|test|run-readonly|test-readonly] <encryption_pwd_prompt[Y|N]>"
        exit 1
fi

#Full path to Oracle Home
export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_13
export PATH=$PATH:$ORACLE_HOME/bin
#SID of the destination database
export ORACLE_SID=entain1
#Connect string to destination database. If PDB, connect directly into PDB
export ORACLE_CONNECT_STRING=entain
#Data Pump directory
export DATA_PUMP_DIR=DATA_PUMP_DIR
#Data Pump parallel setting
export DATA_PUMP_PARALLEL=1
#Data Pump trace level. 0 to disable. 3FF0300 for transportable tablespace trace
export DATA_PUMP_TRACE=0

############################################################
#Normally, do not edit below this line
############################################################

export PATH=$PATH:$ORACLE_HOME/bin
export DT=`date +%y%m%d%H%M%S`
export EXPDP_DMP=${1}
export RES_LOG=${2}
export RUN_MODE=${3}
export DP_ENC_PROMPT=${4}

echo "userid='system@${ORACLE_CONNECT_STRING}'" > imp_${ORACLE_SID}_${DT}_xtts.par
echo "dumpfile=${EXPDP_DMP}" >> imp_${ORACLE_SID}_${DT}_xtts.par
echo "directory=${DATA_PUMP_DIR}" >> imp_${ORACLE_SID}_${DT}_xtts.par
echo "LOGTIME=ALL" >> imp_${ORACLE_SID}_${DT}_xtts.par
echo "TRACE=${DATA_PUMP_TRACE}" >> imp_${ORACLE_SID}_${DT}_xtts.par
echo "PARALLEL=${DATA_PUMP_PARALLEL}" >> imp_${ORACLE_SID}_${DT}_xtts.par
echo "LOGFILE=imp_${ORACLE_SID}_${DT}_xtts.log" >> imp_${ORACLE_SID}_${DT}_xtts.par
echo "METRICS=YES" >> imp_${ORACLE_SID}_${DT}_xtts.par
case ${DP_ENC_PROMPT} in
     "Y")
        echo "ENCRYPTION_PWD_PROMPT=YES" >> imp_${ORACLE_SID}_${DT}_xtts.par
       ;;
     "N")
        echo "ENCRYPTION_PWD_PROMPT=NO" >> imp_${ORACLE_SID}_${DT}_xtts.par
      ;;
      *)
      ;;
esac
echo "TRANSPORT_DATAFILES=" >> imp_${ORACLE_SID}_${DT}_xtts.par

input=$(grep "restoring foreign" ${RES_LOG} | awk '{print $9}' | sort | uniq)

IFS=$'\n' read -rd '' -a input_list <<< "$input"
output_list=()
for item in "${input_list[@]}"; do
    output_list+=("'$item',")
done

last_index=$(( ${#output_list[@]} - 1 ))
output_list["$last_index"]="${output_list[last_index]%,}"

for item in "${output_list[@]}"; do
    echo "$item" >> imp_${ORACLE_SID}_${DT}_xtts.par
done

function gen_grp {
${ORACLE_HOME}/bin/sqlplus -s /nolog << EOF
                connect / as sysdba
                set lines 333
                col time for a48
                col restore_point_time for a24
                col name for a24
                spool ${PWD}/dbmig_grp.lst
                create restore point before_imp_${DT} guarantee flashback database;
                select NAME, SCN, TIME, GUARANTEE_FLASHBACK_DATABASE, STORAGE_SIZE FROM V\$RESTORE_POINT WHERE GUARANTEE_FLASHBACK_DATABASE='YES';
                spool off;
                exit
EOF
}

case ${RUN_MODE} in
              "run")
                echo "Running in Run mode"
                gen_grp
                impdp parfile=imp_${ORACLE_SID}_${DT}_xtts.par
              ;;
              "run-readonly")
                echo "Running in Read Only, for the final run you must flashback the database to the restore point created now"
                gen_grp
                awk 'NR==3 {print "TRANSPORTABLE=KEEP_READ_ONLY"} {print}' imp_${ORACLE_SID}_${DT}_xtts.par > tmp_file && mv tmp_file imp_${ORACLE_SID}_${DT}_xtts.par
                impdp parfile=imp_${ORACLE_SID}_${DT}_xtts.par
              ;;
              "test")
                echo "Running in test mode, check par file for correctness"
              ;;
              "test-readonly")
                echo "Running in test mode with read only, check par file for correctness"
                awk 'NR==3 {print "TRANSPORTABLE=KEEP_READ_ONLY"} {print}' imp_${ORACLE_SID}_${DT}_xtts.par > tmp_file && mv tmp_file imp_${ORACLE_SID}_${DT}_xtts.par
              ;;
              *)
                echo "Unknown run mode, options are: run, test, run-readonly, test-readonly"
esac

 

Appendix Section 4

 

When using the Zero Data Loss Recovery Appliance as a backup and restore target, the backups from the source database should be stalled (i.e., blocked) from being indexed.

 

To do so, identify your database DB_KEY by querying the RA_DATABASE view in the Zero Data Loss Recovery Appliance database using the DB_UNIQUE_NAME.
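For example, run as user RASYS; the DB_UNIQUE_NAME value is illustrative:

SQL> SELECT db_key FROM ra_database WHERE db_unique_name = 'SOLDB1918';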

 

As user RASYS in the Zero Data Loss Recovery Appliance database create a stall_when view like the one below:

 

SQL> create or replace view RAI_STALL_WHEN
     AS
     SELECT task_id
       FROM task
      WHERE task_type = 140
        AND db_key = <DB_KEY>;

 

As user RASYS on the Zero Data Loss Recovery Appliance database enable the stall_when parameter:

 

 SQL> EXEC DBMS_RA_INT.config(p_name=>'_stall_when', p_value=>'ON', comments=>'Enabled for Cross Endian Migration');

 

When the migration is finished, the _stall_when flag should be removed and the view dropped. This will trigger the INDEX_BACKUP task on the ZDLRA and virtualize the backups.

 

SQL> DROP view RAI_STALL_WHEN;

SQL> EXEC DBMS_RA_INT.config(p_name=>'_stall_when', p_value=>'OFF', comments=>'Disabled for Cross Endian Migration');

 

 

Note: Prior to creating the RAI_STALL_WHEN view, verify whether the _stall_when flag has already been set for any other reason. If the flag is already set, do not proceed with the migration and contact Oracle Support.

 

Note: If the compliance mode is implemented on the ZDLRA, the use of dbms_ra_int is considered a "break glass".

 

Note: If you are using block change tracking, be aware that because backups to the ZDLRA are not virtualized during this procedure, backup performance can be impacted if the number of backups (L0 plus L1s) is higher than the default value (8) of "_bct_bitmaps_per_file".
Reference: RMAN incremental level 1 backups are not using block change tracking file (Doc ID 1192652.1).
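If required, the parameter can be raised to cover the number of retained backups; a sketch only, to be applied solely under Oracle Support guidance, with an illustrative value:

SQL> ALTER SYSTEM SET "_bct_bitmaps_per_file" = 16 SCOPE = SPFILE;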

 

Appendix Section 5

 

When using TDE, the encryption keys should be exported from the source database keystore and imported on the destination database keystore ahead of the migration.

 

  1. On the source big endian system, export the TDE encryption keys as a user who has been granted the ADMINISTER KEY MANAGEMENT or SYSKM privilege. Protect the exported file as it contains sensitive information.

$ sqlplus / as sysdba
SQL> ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS FORCE WITH SECRET "new_password_to_encrypt_export_file" TO 'exportkeys.exp' IDENTIFIED BY "existing_keystore_password";

Keystore altered

 o Check the KEY_ID from the V$ENCRYPTION_KEYS


$ sqlplus / as sysdba
SQL> SELECT KEY_ID FROM V$ENCRYPTION_KEYS;

KEY_ID
------------------------------------------------------------------------------
AbWpl9Eq2U8Ev22+H6Ufy14AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AVZ8DckBpk8xv+9IzzfKvoMAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

 o Copy the encrypted export file 'exportkeys.exp' to the Linux destination:

$ scp exportkeys.exp user@server:/path

 2. On the Linux destination system, import the TDE master encryption key from the file generated in step 1

$ sqlplus / as sysdba
SQL> ADMINISTER KEY MANAGEMENT IMPORT KEYS WITH SECRET "Password_used_in_step1_export" FROM 'exportkeys.exp' IDENTIFIED BY "keystore_password_used_on_destination_db" WITH BACKUP;

Keystore altered

 o Check the KEY_ID from the V$ENCRYPTION_KEYS

$ sqlplus / as sysdba
SQL> SELECT KEY_ID FROM V$ENCRYPTION_KEYS;

KEY_ID
------------------------------------------------------------------------------
AXj7ZZ4Lq08Jv4pzTBn8/DoAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AVZ8DckBpk8xv+9IzzfKvoMAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AbWpl9Eq2U8Ev22+H6Ufy14AAAAAAAAAAAAAAAAAAAAAAAAAAAAA

 o To activate the TDE keys previously imported from the source:

$ sqlplus / as sysdba
SQL> ADMINISTER KEY MANAGEMENT USE KEY <KEY_ID> USING TAG 'Activate Key on target' FORCE KEYSTORE IDENTIFIED BY "destination_keystore_password" WITH BACKUP;

Keystore altered

 

Appendix Section 6

This note supports migrating a pluggable database from a big endian source container database to a little endian destination container database.

 

In the dbmig_driver.properties file, set MIG_PDB to 1 and PDB_NAME to the name of the pluggable database to be migrated; a minimal excerpt is shown below. The file dbmig_ts_list.txt should contain the list of tablespaces to be migrated. During the migration process, the automation creates a file called dbmig_ts_list.txt.pdb, which contains the name of the pluggable database appended to the names of the tablespaces to be migrated. The content of this file is used when building the RMAN backup scripts.
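A minimal properties excerpt for a PDB migration; the names are illustrative and follow the sample file in Appendix Section 1:

export MIG_PDB=1
export PDB_NAME=PDB1
export SRC_SCAN='@localhost/pdb1'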

 

The restore operations for the L0 and L1 backups are the same for every migration. For a pluggable database migration, to ensure the tablespaces being transported are plugged into the correct destination pluggable database, the connection string in impdp.sh should be adapted accordingly.

 

During the restore operation, RMAN creates the datafiles in an ASM sub-directory on the destination that is different from the directory used by the pluggable database. If you want to move the files into the correct location for the pluggable database, mount the pluggable database and perform an RMAN BACKUP AS COPY specifying the data disk group in the FORMAT clause. Once the backup is complete, perform an RMAN SWITCH DATAFILE TO COPY to update the controlfile with the correct pluggable database directory, as sketched below.
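A sketch of that relocation; the PDB and disk group names are illustrative, and you should verify the exact BACKUP AS COPY and SWITCH syntax for your release:

$ rman target /
RMAN> ALTER PLUGGABLE DATABASE PDB1 CLOSE;
RMAN> BACKUP AS COPY PLUGGABLE DATABASE PDB1 FORMAT '+DATA';
RMAN> SWITCH PLUGGABLE DATABASE PDB1 TO COPY;
RMAN> ALTER PLUGGABLE DATABASE PDB1 OPEN;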
