Wednesday, August 21, 2024

V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)

 

Purpose

NOTE:  If the source and destination are running Oracle Database 19.18 or higher, see the new version of this procedure:

M5 Cross Endian Platform Migration using Full Transportable Data Pump Export/Import and RMAN Incremental Backups (Doc ID 2999157.1)


This article covers the steps needed to use V4 Cross Platform Transportable Tablespaces (XTTS) with RMAN incremental backups to migrate data between systems that have different endian formats, with the least amount of application downtime.

The first step is to copy a full backup from the source to the destination. Then, using a series of incremental backups, each smaller than the last, the data at the destination system can be brought nearly current with the source system before any downtime is required. This procedure requires downtime only during the final incremental backup and the metadata export/import.

This document describes the V4 procedure for Cross Platform Incremental Backup, which can be used with 11.2.0.3 and higher. This procedure is a simplified version of the previous XTTS versions and has the following differences:

  • this procedure uses simplified commands: one command (--backup) for the source and one command (--restore) for the destination.
  • this procedure works for multi-tenant environments, including transporting tablespaces from non-CDB to CDB or vice versa. TTS restrictions may apply.
  • this procedure requires only one file (res.txt) to be copied between the source's and destination's $TMPDIR.
  • this procedure automatically resolves added datafiles with no additional intervention.
  • this procedure allows multiple incremental backups to be taken from the source without running recovery; recovery then applies all of the incremental backups on the destination at once.

NOTE:
Exadata databases migrating from Solaris (big endian) to Linux (little endian) or vice versa may face block corruptions due to invalid conversion of HCC tables.
ALERT: Possible Block Corruption After Migrating From Solaris to Linux Exadata using HCC (Doc ID 2888288.1)

 

NOTE: There are reported issues with multiple-incremental-backup recovery and a large number of datafiles. The recover command creation may cause backups not to be found (ORA-19625) and/or the recovery may attempt to apply the backups in the wrong order, resulting in:

ORA-19638: file /<path>/<datafile name> is not current enough to apply this incremental backup
ORA-19642: start SCN of incremental backup is <scn>
ORA-19641: backup datafile checkpoint is SCN <scn> time MM/DD/YYYY HH:MM:SS
ORA-19640: datafile checkpoint is SCN <scn> time MM/DD/YYYY HH:MM:SS

Review the following for details and workaround:
V4 XTTs: Restore Returns Errors (ORA-19625 or ORA-19641) With Large Number of Datafiles (Note 2689397.1)


Although it is strongly recommended to use V4 of XTTS, an earlier version of this procedure is available:

11G - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Note 1389592.1)

The Cross Platform Incremental Backup feature does not affect the amount of time it takes to perform other actions for XTTS, such as metadata export and import. Hence, databases that have very large amounts of metadata (DDL) will see limited benefit from Cross Platform Incremental Backup because migration in these environments is typically dominated by metadata operations, not datafile transfer and conversion.

 

NOTE: Only those database objects that are physically located in the tablespace(s) being transported will be copied to the destination system. Other objects, such as users, PL/SQL objects, sequences, and views located in the SYSTEM tablespace will not be transported. You will need to pre-create the users and copy such objects to the destination system, possibly using Data Pump.

The following may help:

Oracle Database 12c: Full Transportable Export/Import

and/or

MAA paper Platform Migration Using Transportable Tablespaces: Oracle Database.

The high-level steps for Cross Platform Incremental Backup are:

1. Initial setup

2. Prepare phase (source data remains online)

    1. Backup (level=0) of tablespaces to be transported
    2. Transfer backup and other necessary setup files to destination system
    3. Restore and convert datafiles to the destination system endian format

3. Roll Forward phase (source data remains online - Repeat this phase as many times as necessary to catch destination datafile copies up to source database)

    1. Create incremental backup on source system
    2. Transfer incremental backup and other necessary setup files to destination system
    3. Convert incremental backup to destination system endian format and apply the backup to the destination datafile copies
    4. Repeat steps until ready to transport the tablespace.
NOTE: In Version 4, added files will automatically be added in the destination with no additional intervention required; i.e., a datafile added to a tablespace or a new tablespace name added to the xtt.properties file is handled automatically.

4. Transport phase (source data is READ ONLY)

    1. Alter the tablespaces in the source database to READ ONLY
    2. Repeat the Roll Forward phase one final time
      • This step makes destination datafile copies consistent with source database and generates necessary export.
      • Time for this step is significantly shorter than traditional XTTS method when dealing with large data because the incremental backup size is smaller.
    3. Import metadata of objects in the tablespaces into destination database using Data Pump
    4. Alter the tablespaces in the destination database to READ WRITE
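The phases above can be sketched as a shell dry run. This is a minimal sketch, not the procedure itself: the hostname, paths, and number of roll-forward passes are illustrative assumptions, and RUN=echo keeps every command echoed rather than executed.

```shell
# Illustrative dry-run sketch of the XTTS V4 phases described above.
# Hostnames, paths, and pass count are placeholders, not prescriptions.
# RUN=echo keeps this a dry run; clear it to execute the real commands.
RUN=${RUN:-echo}
DEST=oracle@dest
ROLL_FORWARD_PASSES=2

# Phase 2 (prepare): level 0 backup on source, transfer, restore on dest
$RUN perl xttdriver.pl --backup --debug 3                 # on source
$RUN scp /src_scratch_location/* "$DEST":/dest_scratch_location
$RUN scp "${TMPDIR:-/tmp}/res.txt" "$DEST":/home/oracle/xtt
$RUN ssh "$DEST" perl xttdriver.pl --restore --debug 3    # on destination

# Phase 3 (roll forward): repeat until destination is close enough to source
i=1
while [ "$i" -le "$ROLL_FORWARD_PASSES" ]; do
    $RUN perl xttdriver.pl --backup --debug 3
    $RUN scp /src_scratch_location/* "$DEST":/dest_scratch_location
    $RUN scp "${TMPDIR:-/tmp}/res.txt" "$DEST":/home/oracle/xtt
    $RUN ssh "$DEST" perl xttdriver.pl --restore --debug 3
    i=$((i+1))
done

# Phase 4 (transport): ALTER TABLESPACE ... READ ONLY on the source, then
# one final backup/transfer/restore pass, then the metadata export/import.
```

Note that a real migration would copy res.txt only after the last backup of a batch, as described in the Roll Forward phase.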

Scope

The source system may be any platform provided the prerequisites referenced and listed below for both platform and database are met.

If you are migrating from a little endian platform to Oracle Linux, then the migration method that should receive first consideration is Data Guard. See Note 413484.1 for details about heterogeneous platform support for Data Guard between your current little endian platform and Oracle Linux.

Details

NOTE: If the source and destination are running Oracle Database 19.18 or higher, see the new version available:

M5 Cross Endian Platform Migration using Full Transportable Data Pump Export/Import and RMAN Incremental Backups (Doc ID 2999157.1)

 

NOTE: Before proceeding with this procedure, review the following notes to understand all the restrictions and issues associated with the transportable tablespace feature (TTS). All such restrictions/limitations of TTS apply.

Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where Applicable (Note 1454872.1)
Primary Note for Transportable Tablespaces (TTS) -- Common Questions and Issues (Note 1166564.1)

Overview

This document provides a procedural example of transporting two tablespaces called TS1 and TS2 from an Oracle Solaris SPARC system to Oracle Linux, incorporating Oracle's Cross Platform Incremental Backup capability to reduce downtime.

After performing the initial setup, moving the data is performed as follows:

Prepare
During the prepare phase, a level 0 backup of the tablespaces' datafiles is taken on the source. The backups are transferred to the destination, where the datafiles are restored and converted to the destination endian format.

Roll Forward
During the roll forward phase, the datafiles restored during the prepare phase are rolled forward using incremental backups taken from the source database. By performing this phase multiple times, each successive incremental backup becomes smaller and faster to apply, allowing the data at the destination system to be brought almost current with the source system. The application being migrated is fully accessible during the Roll Forward phase.

Transport
During the transport phase, the tablespaces being transported are put into READ ONLY mode, a final incremental backup is taken from the source database, and the backups are transferred to the destination and applied to the destination datafiles. At this point, the destination datafile copies are consistent with the source database and the application being migrated cannot receive any further updates. The tablespaces are then TTS-exported from the source database and TTS-imported into the destination database. Finally, the tablespaces are made READ WRITE for full access on the destination database.

Cross Platform Incremental Backup Supporting Scripts

The Cross Platform Incremental Backup core functionality was delivered in Oracle Database 11.2.0.3 and later. Some of the features of the prior versions (such as using DBMS_FILE_TRANSFER) are not available in Version 4. If you need such functionality, use the standard procedure for 11g outlined in Note 1389592.1. The procedure outlined in this note applies to Oracle Database 11.2.0.3 and higher, including 12c. See the Requirements and Recommendations section for details. In addition, a set of supporting scripts, attached to this document as rman_xttconvert_VER4.3.zip, is used to manage the procedure required to perform XTTS with Cross Platform Incremental Backup using Version 4.

The two primary supporting files are the following:

  • xttdriver.pl: the Perl script that is run to perform the main steps of the XTTS with Cross Platform Incremental Backup procedure.
  • xtt.properties: the parameter file that stores site-specific configuration.

Prerequisites

The following prerequisites must be met before starting this procedure:

  • The limitations and considerations for transportable tablespaces must still be followed. They are defined in the following manuals:
    • Oracle Database Administrator's Guide
    • Oracle Database Utilities
  • In addition to the limitations and considerations for transportable tablespaces, the following conditions must be met:
    • The current version does NOT support Windows as either source or destination.
    • Cross platform is only possible with Enterprise Edition. This procedure cannot be used with Standard Edition.
    • The source database's COMPATIBLE parameter must not be greater than the destination database's COMPATIBLE parameter.
    • The source database must be in ARCHIVELOG mode.
    • Before running XTTs scripts, set NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1. Other NLS_LANG settings may cause errors.
    • RMAN on the source system must not have DEVICE TYPE DISK configured with COMPRESSED.
    • RMAN on the source system must not have BACKUP TYPE TO COPY. The source must have BACKUP TYPE TO BACKUPSET.
    • RMAN on the source system must not have default channel configured to type SBT. I.e., this procedure can only be used with DISK channels.
    • RMAN on the source system must not have ANY channel configuration limitations. For example, MAXSETSIZE, MAXPIECESIZE, etc.
    • The set of tablespaces being moved must all be online, and contain no offline data files. Tablespaces must be READ WRITE.
    • Tablespaces that are READ ONLY may be moved with the normal XTTS method. There is no need to incorporate Cross Platform Incremental Backups to move tablespaces that are always READ ONLY.
    • Although the preferred destination system is Linux (either 64-bit Oracle Linux or a certified version of Red Hat Linux), this procedure can be used with other Unix-based operating systems. However, any non-Linux operating system must be running 12.1.0.1 or higher on both destination and source.
    • The Oracle version of the source must be lower than or equal to that of the destination. Therefore, this procedure can be used as an upgrade method. Transportable tablespace restrictions WILL apply.
    • Minimum version for source and destination is 11.2.0.3.
    • ASM can be used only for the final location of datafiles on the destination; backups cannot be placed on ASM with this version.
    • The backup location on the destination MUST be a device with read/write privileges, i.e., it cannot be a read-only device. A read-only device can cause ORA-19624 on the backupset conversion.
    • The source and target database must use a compatible character set and national character set.
    • The tablespaces must be READ WRITE at the first (level 0) backup; otherwise, the following errors will occur:
      RMAN> DECLARE
      2> *
      3> ERROR at line 1:
      4> ORA-20001: TABLESPACE(S) IS READONLY OR,
      5> OFFLINE JUST CONVERT, COPY
      6> ORA-06512: at line 284
      7>
      8>
      9>
      RMAN-00571: ===========================================================
      RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
      RMAN-00571: ===========================================================
      RMAN-00558: error encountered while parsing input commands
      RMAN-01006: error signaled during parse
      RMAN-02002: unexpected end of input file reached
  • All steps in this procedure are run as the oracle user that is a member of the OSDBA group. OS authentication is used to connect to both the source and destination databases.
  • Although NOT recommended, a standby database can be used for this procedure. See Using XTTs in a Data Guard Environment (Doc ID 2853054.1)
  • A Snapshot Standby database is NOT supported for this procedure.
  • No datafiles should be dropped from the tablespaces being migrated once the migration begins. This can especially be a problem if the datafile number is reassigned to a datafile added to a migrated tablespace.
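Some of the filesystem-related prerequisites above can be sanity-checked from the shell before starting. A minimal sketch, in which a temporary directory stands in for the destination scratch location so the example is runnable as-is:

```shell
# Pre-flight checks suggested by the prerequisites above. A temp directory
# stands in for dest_scratch_location so the sketch runs anywhere.
SCRATCH=${SCRATCH:-$(mktemp -d)}

# The destination backup location must be read/write; a read-only device
# can cause ORA-19624 during backupset conversion.
if [ -d "$SCRATCH" ] && [ -w "$SCRATCH" ]; then
    SCRATCH_STATUS=ok
    echo "backup location $SCRATCH is writable"
else
    SCRATCH_STATUS=bad
    echo "backup location $SCRATCH is missing or read-only" >&2
fi

# NLS setting required before running the XTTS scripts (see above).
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
```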

Whole Database Migration

If Cross Platform Incremental Backups will be used to reduce downtime for a whole database migration, then the steps in this document can be combined with the XTTS guidance provided in the MAA paper Platform Migration Using Transportable Tablespaces: Oracle Database.

Troubleshooting

Debug mode enables additional screen output and causes all RMAN executions to be performed with the debug command line option. To enable debug mode, set the environment variable XTTDEBUG=1 before running xttdriver.pl, OR run xttdriver.pl with the --debug flag. The flag allows for 3 levels, -d [1/2/3], with 3 generating the most information.

Because it is recommended to run these steps in debug mode, so the information necessary for diagnosis is available, the commands listed below include the debug option.
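For example, either form below enables debug output; the command is echoed rather than executed here, since xttdriver.pl and a configured database are assumed:

```shell
# Environment-variable form of enabling debug mode.
export XTTDEBUG=1

# Command-line form: --debug accepts levels 1-3 (3 is the most verbose).
DEBUG_LEVEL=3
echo "perl xttdriver.pl --backup --debug $DEBUG_LEVEL"   # echoed, not run
```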

Known Issues

  1. If your source database is running 12c or higher, un-comment the usermantransport parameter in xtt.properties. This parameter should ONLY be used for a 12c or higher source.
    i.e.,:
    usermantransport=1 <== remove the '#'

    Regardless of the destination version, the value of this parameter in the destination's xtt.properties must be the SAME as the source.

  2. If using ASM in both source and destination, see XTTS Creates Alias on Destination when Source and Destination use ASM (Note 2351123.1)
  3. If using ASM for datafiles, an error deleting file system file using ASMCMD can be ignored. (unpublished Bug 29268792, currently open)
  4. The existence of a GLOGIN.sql, in either source or destination, can cause syntax errors.
  5. For other known issues, see other issues within Known Issues for Cross Platform Transportable Tablespaces XTTS (Note 2311677.1).
  6. You cannot use this procedure for a TDE tablespace.
  7. This procedure has only been tested with English language.
  8. Be aware of open bug 30777480, which can cause an issue if the CDB's tablespace number (TS#) is the same as the tablespace id (TS#) of a tablespace being transported.
    You may receive an error RMAN-20201 'datafile not found in the recovery catalog' on the datafile number.
  9. Backup creation parallelism is defined by the RMAN configuration for DEVICE TYPE DISK PARALLELISM. For incremental backups (after level 0), parallelism is at the tablespace level.
  10. If the target database character set is not compatible with the source database character set, the following error may occur at tablespace plug-in (phase 5):

    ORA-39123: Data Pump transportable tablespace job aborted
    ORA-19736: can not plug a tablespace into a database using a different national character set

 


V4 Transport Tablespaces with Reduced Downtime using Cross Platform Incremental Backup

The XTTS with Cross Platform Incremental Backups procedure is divided into the following phases:

  • Phase 1 - Initial Setup phase
  • Phase 2 - Prepare phase
  • Phase 3 - Roll Forward phase
  • Phase 4 - Final Incremental Backup
  • Phase 5 - Transport Phase: Import Metadata
  • Phase 6 - Validate the Transported Data
  • Phase 7 - Cleanup

Conventions Used in This Document

  • All command examples use bash shell syntax.
  • Commands prefaced by the shell prompt string [oracle@source]$ indicate commands run as the oracle user on the source system.
  • Commands prefaced by the shell prompt string [oracle@dest]$ indicate commands run as the oracle user on the destination system.

Phase 1 - Initial Setup

Perform the following steps to configure the environment to use Cross Platform Incremental Backups:

Step 1.1 - Install the destination database software and create the destination database

Install the desired Oracle Database software on the destination system that will run the destination database.

Identify (or create) a database on the destination system to transport the tablespace(s) into and create the schema users required for the tablespace transport. I.e., users who own the objects within the tablespaces being transported.

Per generic TTS requirement, ensure that the schema users required for the tablespace transport exist in the destination database.

Step 1.2 - Identify tablespaces to be transported

Identify the tablespace(s) in the source database that will be transported. Tablespaces TS1 and TS2 will be used in the examples in this document. As indicated above, the limitations and considerations for transportable tablespaces must still be followed.

Step 1.3 - Install xttconvert scripts on the source system

On the source system, as the oracle software owner, download and extract the supporting scripts attached as rman_xttconvert_VER4.3.zip to this document.

[oracle@source]$ pwd
/home/oracle/xtt

[oracle@source]$ unzip rman_xttconvert_VER4.3.zip
Archive: rman_xttconvert_VER4.3.zip
inflating: xtt.newproperties
inflating: xtt.properties
inflating: xttcnvrtbkupdest.sql
inflating: xttdbopen.sql
inflating: xttdriver.pl
inflating: xttprep.tmpl
extracting: xttstartupnomount.sql

Step 1.4 - Create necessary directories
  1. On source:
    • Location of backups as defined by src_scratch_location parameter in the xtt.properties file. NOTE: Only one location is allowed for this parameter.

  2. On destination:
    • Location of backups as defined by the dest_scratch_location parameter in the xtt.properties file.
    • Location for datafiles on destination, as defined by dest_datafile_location parameter in the xtt.properties file. NOTE: Only one location is allowed for this parameter.
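The directories above can be created ahead of time. In the sketch below the paths are the illustrative placeholders used throughout this document, created under a temporary base directory so the example is runnable as-is; in practice they are absolute site-specific paths.

```shell
# Create the directories referenced by xtt.properties. BASE keeps the
# example self-contained; in practice these are absolute site paths.
BASE=${BASE:-$(mktemp -d)}

# On source: the single backup scratch location (src_scratch_location)
mkdir -p "$BASE/src_scratch_location"

# On destination: backup scratch and datafile locations
mkdir -p "$BASE/dest_scratch_location"
mkdir -p "$BASE/oradata/DESTDB"        # dest_datafile_location
```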
Step 1.5 - Configure xtt.properties on the source system

Edit the xtt.properties file on the source system with your site-specific configuration. For more information about the parameters in the xtt.properties file, refer to the Description of Parameters in Configuration File xtt.properties section in the Appendix below. For this procedure, only the following parameters are mandatory. Others are optional and/or available for use.

    • tablespaces
    • platformid
    • src_scratch_location
    • dest_scratch_location
    • dest_datafile_location
    • usermantransport=1 - It is recommended that this be set if the source database is running 12c or higher; it causes the new 12c (and higher) functionality to be used.
      NOTE: regardless of the destination version, the value of this parameter in the destination's xtt.properties must be the SAME as the source.
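A minimal xtt.properties covering only the mandatory parameters might look as follows. All values are site-specific assumptions: platformid must match your source's PLATFORM_ID from V$TRANSPORTABLE_PLATFORM (2, Solaris 64-bit, is shown only as an example matching this document's scenario), and the paths must match the directories created in step 1.4. The file is written to a temporary path here so the sketch is runnable.

```shell
# Minimal xtt.properties with only the mandatory parameters; every value
# below is a site-specific placeholder. Written to a temp file here for
# illustration -- in practice, edit the xtt.properties shipped in the zip.
PROPS=$(mktemp)
cat > "$PROPS" <<'EOF'
tablespaces=TS1,TS2
platformid=2
src_scratch_location=/src_scratch_location
dest_scratch_location=/dest_scratch_location
dest_datafile_location=/u02/oradata/DESTDB
usermantransport=1
EOF
```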
Step 1.6 - Copy xttconvert scripts and xtt.properties to the destination system

As the oracle software owner, copy all xttconvert scripts and the modified xtt.properties file to the destination system.

[oracle@source]$ scp -r /home/oracle/xtt oracle@dest:/home/oracle/xtt
Step 1.7 - Set TMPDIR environment variable

In the shell environment on both source and destination systems, set environment variable TMPDIR to the location where the supporting scripts exist. Use this shell to run the Perl script xttdriver.pl as shown in the steps below. If TMPDIR is not set, output files are created in and input files are expected to be in /tmp.

[oracle@source]$ export TMPDIR=/home/oracle/xtt

[oracle@dest]$ export TMPDIR=/home/oracle/xtt

Phase 2 - Prepare Phase

During the Prepare phase, datafiles of the tablespaces to be transported are backed up on the source, the backups are transferred to the destination system, and the datafiles are restored by the xttdriver.pl script.

NOTE: If you wish to use dbms_file_transfer, you cannot use V4; instead, use 11G - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Note 1389592.1). For a large number of files, dbms_file_transfer has been found to be the fastest method for transferring datafiles to the destination, and it can be used by migrations from all Oracle versions wanting to use dbms_file_transfer.
Step 2.1 - Run the backup on the source system

On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the backup as follows:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup --debug 3
Step 2.2 - Transfer the following files to the destination system
  • Backups created from source src_scratch_location to destination dest_scratch_location. NOTE: Only one location is allowed for these parameters.
  • The res.txt file from source $TMPDIR to destination $TMPDIR:

In the example below, scp is used to transfer the level=0 backup created by the previous step from the source system to the destination system.

[oracle@source]$ scp /src_scratch_location/* oracle@dest:/dest_scratch_location
[oracle@source]$ scp res.txt oracle@dest:/home/oracle/xtt
Step 2.3 - Restore the datafiles on the destination system

On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, run the restore as follows:

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore --debug 3

Datafiles will be placed on the destination system in the defined dest_datafile_location. NOTE: Only one location is allowed for this parameter.

Phase 3 - Roll Forward Phase

During this phase an incremental backup is created from the source database, transferred to the destination system, converted to the destination system endian format, then applied to the converted destination datafile copies to roll them forward. This phase may be run multiple times. Each successive incremental backup should take less time than the prior incremental backup, and will bring the destination datafile copies more current with the source database. The data being transported (source) is fully accessible during this phase.

NOTE: Multiple backups can be executed against the source without applying them to the destination. The backup files and the res.txt must be copied before the '--restore' is executed at the destination.

NOTE: The script will shut down and start up the destination database in NOMOUNT before the --restore.
Step 3.1 - Create an incremental backup of the tablespaces being transported on the source system

On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the create incremental step as follows:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup --debug 3

This step will create an incremental backup for all tablespaces listed in xtt.properties.

Step 3.2 - Transfer incremental backups and res.txt to the destination system

Transfer the incremental backup(s) (between src_scratch_location and dest_scratch_location) and the res.txt (between the $TMPDIRs) from the source to the destination. The list of incremental backup files from the current backup can be found in the incrbackups.txt file on the source system.

[oracle@source]$ scp `cat incrbackups.txt` oracle@dest:/dest_scratch_location
[oracle@source]$ scp res.txt oracle@dest:/home/oracle/xtt

If the src_scratch_location on the source system and the dest_scratch_location on the destination system refer to the same NFS storage location, then the backups do not need to be copied as they are available in the expected location on the destination system.

However, the res.txt file MUST be copied after the LAST incremental backup before it can be applied on destination (step 3.3).
Step 3.3 - Apply the incremental backup to the datafile copies on the destination system

On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, run the roll forward datafiles step as follows:

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore --debug 3

The roll forward step connects to the destination database and applies the incremental backups to the tablespaces' datafiles for each tablespace being transported.


NOTE: Although multiple backups can be executed against the source without being applied on the destination, the res.txt must be copied after the last backup and before the '--restore' is executed at the destination.
Step 3.4 - Repeat the roll forward phase 3 (3.1 - 3.3) or proceed to phase 4, final incremental backup

At this point there are two choices:

  1. If you need to bring the files at the destination database closer in sync with the production system, then repeat the Roll Forward phase, starting with step 3.1.
  2. If the files at the destination database are as close as desired to the source database, then proceed to the Transport phase.

Phase 4 - Final Incremental Backup

During this phase the source data is made READ ONLY and the destination datafiles are made consistent with the source database by creating and applying a final incremental backup. After the destination datafiles are made consistent, the normal transportable tablespace steps are performed to export object metadata from the source database and import it into the destination database. The data being transported is accessible only in READ ONLY mode until the end of this phase.

There are two options for this phase if you are running 12c and higher. If you are running 11g, only option #1 is available.

OPTION #1 Take the final backup

Option1.A: Alter source tablespace(s) to READ ONLY in the source database
On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, alter the tablespaces being transported to READ ONLY.

system@source/prod SQL> alter tablespace TS1 read only;

Tablespace altered.

system@source/prod SQL> alter tablespace TS2 read only;

Tablespace altered.
Option1.B: Create the final incremental backup of the tablespaces being transported on the source system

On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the backup as follows:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup --debug 3

 

NOTE: As the tablespaces are in READ ONLY mode, the following warning received can be ignored:

####################################################################
Warning:
------
Warnings found in executing /home/oracle/convert_source/backup_Nov9_Fri_09_08_26_213//xttpreparenextiter.sql
####################################################################
Prepare newscn for Tablespaces: 'SECOND'
DECLARE*
ERROR at line 1:
ORA-20001: TABLESPACE(S) IS READONLY OR,
OFFLINE JUST CONVERT, COPY
ORA-06512: at line 284

NOTE: This error should NOT be ignored if the tablespaces are in READ ONLY at the first (LEVEL=0) backup, see prerequisites.
Option1.C: Transfer incremental backups and res.txt to the destination system

[oracle@source]$ scp `cat incrbackups.txt` oracle@dest:/dest_scratch_location
[oracle@source]$ scp res.txt oracle@dest:/home/oracle/xtt

Option1.D: Apply last incremental backup to destination datafiles

The final incremental backup must be applied to the destination datafiles:

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore --debug 3

This step will apply the last incremental backup to the datafiles on the destination.

OPTION #2. Take last incremental backup with new 12c feature, which also runs the export

This option is only available with 12c and higher. The only difference with this option is that the XTTS command, --bkpexport, takes BOTH the last incremental backup AND exports the metadata.

Option2.A: Alter source tablespace(s) READ ONLY in the source database

On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, alter the tablespaces being transported to READ ONLY.

system@source/prod SQL> alter tablespace TS1 read only;

Tablespace altered.

system@source/prod SQL> alter tablespace TS2 read only;

Tablespace altered.

Option2.B: Create the final incremental backup and export file for transfer to the destination

The final incremental backup is created with the "--bkpexport" option, then transferred to the destination.

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --bkpexport
[oracle@source]$ scp `cat incrbackups.txt` oracle@dest:/stageondest
[oracle@source]$ scp xttplan.txt oracle@dest:/home/oracle/xtt
[oracle@source]$ scp tsbkupmap.txt oracle@dest:/home/oracle/xtt
[oracle@source]$ scp incrbackups.txt oracle@dest:/home/oracle/xtt

Option2.C: Apply last incremental backup to destination datafiles

The final incremental backup must be applied to the destination datafiles using "--resincrdmp":

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --resincrdmp

This step will apply the last incremental backup to the datafiles. In addition, it will restore the export dump file from the backupset and generate a script file, xttplugin.txt. This script can be used in phase 5 Option 2 below.

At this point, the restored export file can be used in Phase 5 step option1.C (skip step option1.A)
OR
the xttplugin.txt can be used for Phase 5 Option #2 (skip step option2.B).

Phase 5 - Transport Phase: Export Metadata and Plug-in Tablespaces into Destination Database

NOTE: Be sure the destination database has the necessary objects to allow the import to succeed. This includes pre-creating the owners of the tables in the tablespace being plugged in. See information on Transportable Tablespace and the guidance provided in the MAA paper Platform Migration Using Transportable Tablespaces: Oracle Database 11g.

Alternatively, if using 12c and higher, see Oracle Database 12c: Full Transportable Export/Import

During this phase, you export the metadata of the tablespaces from the source and plug the tablespace(s) into the destination.

There are two options: run the metadata export and plug in the tablespaces using the export file, OR run the import command across SQL*Net.

OPTION #1. Run metadata export and import using export file

Option1.A: Run datapump export on source database

Perform the tablespace transport by running a transportable mode Data Pump export on the source database to export the object metadata being transported into a dump file. The example below assumes a directory (DATA_PUMP_DIR) already exists in the source. For example:

[oracle@source]$ cat exp.par

dumpfile=xttdump.dmp
directory=DATA_PUMP_DIR
exclude=TABLE_STATISTICS,INDEX_STATISTICS
transport_tablespaces=TS1,TS2
transport_full_check=yes
logfile=tts_export.log

[oracle@source]$ expdp system/manager parfile=exp.par

 

NOTE:  Be aware of:
EXCLUDE=STATISTICS Or EXCLUDE=INDEX_STATISTICS During Datapump Import Still Analyzes The Indexes (Doc ID 793585.1)

 


Refer to the following manuals for details:

Oracle Database Administrator's Guide
Oracle Database Utilities

Option1.B: Transfer the export file to destination directory used by datapump

Option1.C: Run datapump import using the export file on destination to plug in the tablespaces. For example:
[oracle@dest]$ cat manual_imp.par
dumpfile=xttdump.dmp
directory=DATAPUMP
transport_datafiles='/dest_datafile_location/TS1.dbf','/dest_datafile_location/TS2.dbf'
logfile=tts_import.log

[oracle@dest]$ impdp system/oracle parfile=manual_imp.par

OPTION #2. Run import across sqlnet

Option2.A: Create datapump directory and grant privilege

Data Pump will look for the export file in the specified directory. Either copy the '.dmp' file into an existing Data Pump directory, or create a new directory object pointing to its location:

SYS@DESTDB> create directory dpump_tts as '/home/oracle/destination/convert';

The directory must be granted to the user who will perform the import:

SYS@DESTDB> GRANT READ, WRITE ON DIRECTORY dpump_tts TO system;
Option2.B: Generate new xttplugin.txt for network import

On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, run the generate Data Pump TTS command step as follows:

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -e

This will generate a sample Data Pump network_link transportable import command in the file xttplugin.txt with the transportable tablespaces parameters TRANSPORT_TABLESPACES and TRANSPORT_DATAFILES correctly set. In addition, a datapump export file will also be created.

NOTE: This command will overwrite the xttplugin.txt previously generated by the --resincrdmp step.
Option2.C: Create a database link on destination database

Connecting to the destination database, create a database link connecting to the source database. For example:

SQL@dest> create public database link ttslink connect to system identified by <password> using '<tns_to_source>';

Verify the database link can properly access the source system:

SQL@dest> select name from v$database@ttslink;
Option2.D: Modify and execute Impdp command

A separate export or dump file is not required. To perform the tablespace transport with this command, edit the import command file xttplugin.txt (generated in step Option2.B) and replace the import parameters DIRECTORY, LOGFILE, and NETWORK_LINK with site-specific values.

The following is an example network mode transportable import command:

[oracle@dest]$ impdp directory=DATA_PUMP_DIR logfile=tts_imp.log network_link=ttslink \
transport_full_check=yes \
transport_tablespaces=TS1,TS2 \
transport_datafiles='+DATA/prod/datafile/ts1.285.771686721', \
'+DATA/prod/datafile/ts2.286.771686723', \
'+DATA/prod/datafile/ts2.287.771686743'
NOTE: Per generic TTS requirement, database users that own objects being transported must exist in the destination database before performing the transportable import.

Phase 6 - Validate the Transported Data

Step 6.1 Check the destination database for objects (users, PL/SQL objects, sequences, views, etc.) that are not transported by TTS

Only those database objects that are physically located in the tablespace(s) being transported are copied to the destination system. Other objects, such as users, PL/SQL objects, sequences, and views located in the SYSTEM tablespace, are not transported. You must pre-create the users and copy such objects to the destination system, possibly using Data Pump. See:

MAA paper Platform Migration Using Transportable Tablespaces: Oracle Database.
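As a hedged illustration of copying such non-TTS objects with Data Pump, a metadata-only export of the application schemas could use a parameter file like the following. All values are assumptions to adapt; CONTENT, SCHEMAS, and EXCLUDE are standard Data Pump parameters, but the schema name and object filters shown are illustrative only:

```text
# meta_exp.par -- illustrative sketch, not part of the XTTS kit
dumpfile=meta.dmp
directory=DATA_PUMP_DIR
content=metadata_only
schemas=APP_OWNER
exclude=TABLE,INDEX
logfile=meta_export.log
```

[oracle@source]$ expdp system/manager parfile=meta_exp.par

The TABLE and INDEX exclusions avoid re-exporting segment metadata that the transportable import already carries; adjust the filter list to your environment.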

Step 6.2 Check tablespaces for corruption

At this step, the transported data is READ ONLY in the destination database. Perform application specific validation to verify the transported data.

Also, run RMAN to check for physical and logical block corruption by running VALIDATE TABLESPACE as follows:

RMAN> validate tablespace TS1, TS2 check logical;

Step 6.3 - Alter the tablespace(s) READ WRITE in the destination database

The final step is to make the destination tablespace(s) READ WRITE in the destination database.

system@dest/prod SQL> alter tablespace TS1 read write;

Tablespace altered.

system@dest/prod SQL> alter tablespace TS2 read write;

Tablespace altered.

Phase 7 - Cleanup

If a separate incremental convert home and instance were created for the migration, then the instance may be shutdown and the software removed.
Files created by this process are no longer required and may now be removed. They include the following:
  • The src_scratch_location on the source system
  • The dest_scratch_location on the destination system
  • The $TMPDIR location on both the source and destination systems
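The cleanup above can be scripted; this is a hedged sketch only, and the cleanup_scratch helper and the paths in the example are assumptions, not part of the XTTS kit. Substitute the real src_scratch_location, dest_scratch_location, and $TMPDIR values from your xtt.properties:

```shell
#!/bin/sh
# Remove the migration staging areas once the transported data is verified.
# Paths are illustrative assumptions; take the real values from xtt.properties.
cleanup_scratch() {
    for d in "$@"; do
        # Remove each staging directory only if it actually exists.
        if [ -d "$d" ]; then
            rm -rf "$d"
        fi
    done
    return 0
}

# Example (on the source system):
#   cleanup_scratch /stage_source "$TMPDIR"
```

Run it separately on each system with that system's locations; do not remove the scratch areas until Phase 6 validation has completed.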

Appendix

Description of Perl Script xttdriver.pl Options

The following table describes the options available for the main supporting script xttdriver.pl.

--backup

Creates a level 0 or level 1 backup of the datafiles belonging to the selected tablespaces. These backups are written to the location pointed to by the xtt.properties variable "src_scratch_location" and must be copied to the destination location pointed to by the variable "dest_scratch_location".

--restore

Restores and converts the datafiles from the backups copied to the "dest_scratch_location" on the destination. The restored files are placed in the path defined by the variable "dest_datafile_location".

-e

Generates the Data Pump TTS command step. This creates the template of a Data Pump import command that uses a network_link to import the metadata of objects in the tablespaces being transported. It is executed on the source, and the resulting command file is transferred to the destination for execution.

--debug

Enables debug mode for xttdriver.pl and the RMAN commands it executes. Debug mode can also be enabled by setting the environment variable XTTDEBUG=1. Three debug levels (1, 2, 3) are available, e.g., xttdriver.pl --debug 3.

Description of Parameters in Configuration File xtt.properties

The following table describes the parameters used by xttdriver.pl which must be defined in the xtt.properties file for this procedure. Other parameters may exist which are needed for backward compatibility.

tablespaces

Comma-separated list of tablespaces to transport from the source database to the destination database. Must be on a single line; any subsequent lines will not be read.

Example: tablespaces=TS1,TS2

platformid

Source database platform id, obtained from V$DATABASE.PLATFORM_ID.

Example: platformid=13

dest_datafile_location

Directory object in the destination database that defines where the destination datafiles will be created.

NOTE: Only one location is allowed for this parameter.

Example: dest_datafile_location=/u01/oradata/V122 or dest_datafile_location=+DATA

src_scratch_location

Location on the source system where backups are created. This location must have sufficient free space to hold the level 0 backup of the tablespace datafiles and all subsequent incremental backups created for one iteration through the process documented above. It may be an NFS-mounted filesystem shared with the destination system, in which case it should reference the same NFS location as the dest_scratch_location parameter on the destination system.

NOTE: Only one location is allowed for this parameter.

Example: src_scratch_location=/stage_source

dest_scratch_location

Location on the destination system where backups are placed by the user when they are transferred manually from the source system. This location must have sufficient free space to hold the level 0 backup and all subsequent incremental backups transferred from the source. It may be a DBFS-mounted filesystem, or an NFS-mounted filesystem shared with the source system, in which case it should reference the same NFS location as the src_scratch_location parameter on the source system. See Note 359515.1 for mount option guidelines.

NOTE: Only one location is allowed for this parameter.

Example: dest_scratch_location=/stage_dest

asm_home

ORACLE_HOME for the ASM instance that runs on the destination system.

Example: asm_home=/u01/app/12.2.0.1/grid

asm_sid

ORACLE_SID for the ASM instance that runs on the destination system.

Example: asm_sid=+ASM1

parallel

Defines the degree of parallelism used for copying (prepare phase) and converting. Incremental backup creation parallelism is defined by the RMAN configuration for DEVICE TYPE DISK PARALLELISM.

Example: parallel=3

srcconnstr

In a CDB environment, specifies the connect string of the source pluggable database.

Example: srcconnstr=sys/passwd@source_cdb_PDB1

destconnstr

In a CDB environment, specifies the connect string of the destination pluggable database.

Example: destconnstr=sys/passwd@dest_cdb_PDB1

usermantransport

It is recommended to set this parameter if the source database is running 12c or higher; it enables new 12c (and higher) functionality. It must be set (or remain commented out) identically in BOTH the source and destination xtt.properties files; if the source is running 11g, it should NOT be set on the destination, regardless of the destination version.

Example: usermantransport=1
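Putting the parameters above together, a minimal xtt.properties for an ASM destination might look like the following. This is an illustrative sketch built only from the example settings in the table; every value must be replaced with your environment's own:

```text
tablespaces=TS1,TS2
platformid=13
src_scratch_location=/stage_source
dest_scratch_location=/stage_dest
dest_datafile_location=+DATA
asm_home=/u01/app/12.2.0.1/grid
asm_sid=+ASM1
parallel=3
usermantransport=1
```

For a CDB source or destination, srcconnstr and destconnstr would also be set as shown in the table.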

Change History

 

Change                                   Date

rman_xttconvert_VER4.zip released        29-Nov-2018

rman_xttconvert_VER4.3.zip released      05-Mar-2021

References


BUG:18596700 - CHANGE TRACKING IS NOT USED DURING INCREMENTAL BACKUP FOR READ ONLY TBS
BUG:28309542 - LFI-00005 LFI-01007 XTTDRIVER.PL --RESTORE
BUG:29268792 - V4 XTTS WILL ATTEMPT TO DELETE FILE SYSTEM FILE WITH ASMCMD
NOTE:793585.1 - EXCLUDE=STATISTICS Or EXCLUDE=INDEX_STATISTICS During Datapump Import Still Analyzes The Indexes
NOTE:2013271.1 - 12c How Perform Cross-Platform Database Transport to different Endian Platform with RMAN Backup Sets
NOTE:2311677.1 - Known Issues for Cross Platform Transportable Tablespaces XTTS
NOTE:2460552.1 - Cross Platform Database Migration using ZDLRA
NOTE:2689397.1 - V4 XTTs: Restore Returns Errors (ORA-19625 or ORA-19641) With Large Number of Datafiles
NOTE:2853054.1 - Using XTTs in a Data Guard Environment
NOTE:2888288.1 - ALERT: Possible Block Corruption After Migrating From Solaris to Linux Exadata using HCC
NOTE:2999157.1 - M5 Cross Endian Platform Migration using Full Transportable Data Pump Export/Import and RMAN Incremental Backups
NOTE:25670970.8 - Bug 25670970 - ORA-600 [25027] or ORA-600 [kcfrbd_3] on Securefile Lob Segment after XTTS
NOTE:25890056.8 - Bug 25890056 - Fix for Bug 25670970 on 12.2 - ORA-600 [25027] on Securefile Lob Segment after XTTS
NOTE:35022351.8 - Bug 35022351 - ORA-600: [kdbblkcheckerror], Block Checking error code [6101] / [6108] against HCC tables with ROW LEVEL LOCKING enabled

M5 Cross Endian Platform Migration using Full Transportable Data Pump Export/Import and RMAN Incremental Backups (Doc ID 2999157.1)

 


Prerequisites

High level migration workflow

Detailed migration workflow

Migration process explained

Appendix

Cross platform database migration is the process of moving databases to a new platform, including Exadata Database Machine, Exadata Cloud@Customer, Exadata Database Service, etc. This note provides a simple, reliable, and fast migration solution with minimal downtime.

The information below will guide you in performing a cross platform (big endian to little endian, vice versa, or same platform when the Data Guard option is not available) database migration.

Note:
1. This procedure only supports Oracle Database 19.18 or higher on source and destination.
2. Tablespaces SYSTEM, SYSAUX, and UNDO are excluded from the migration procedure.
3. RMAN backup encryption is not supported with this procedure.
4. This procedure supports using multi-section backups (SECTION SIZE) to perform faster backups of bigfile tablespaces.
5. This procedure supports migration from non-container database to pluggable database.
6. This procedure supports migration of pluggable databases across platforms.
7. This procedure supports migration using a physical standby as the source database.
8. This procedure supports migration of TDE encrypted database.

 

If your environment cannot meet the prerequisites of the supported version in this note, the other alternative is to use V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)

Prerequisites

  • Review MOS note Data Pump Recommended Proactive Patches For 19.10 and Above (Doc ID 2819284.1) and apply all the required fixes for the 19.18 or higher database on both source and destination.
    • Data Pump recommended fixes are required to address known issues and improve performance. This is a mandatory prerequisite.
  • Evaluate whether the below RMAN 19c database fixes apply to your migration.
    • Ensure to have fix for bug 35564253 applied on your destination database.
    • If using physical standby database as a migration source, fix for bug 35675516 is required for your destination database.
    • If Transparent Data Encryption (TDE) is used in the source database, fix for bug 30817822 is required for your destination database. This fix is already included in release update 19.20.0.0 and above.
  • This procedure works with Zero Data Loss Recovery Appliance and any local storage, such as NFS.
    • When using Zero Data Loss Recovery Appliance, ensure you have the latest libra.so (ZDLRA: Where to download new sbt library (libra.so module) (Doc ID 2219812.1)).
    • When using Zero Data Loss Recovery Appliance for cross endian migration, the backups taken will not be virtualized (i.e., they are stored in RMAN format). They will be replicated to a downstream ZDLRA if configured (see Appendix Section 6).
  • The db_create_file_dest parameter must be set on the destination database.
  • Capture all RMAN and Data Pump logs for troubleshooting the migration procedure.
  • When using NFS as a backup and restore target, ensure that you have write privileges from the source database and read privileges from the destination database.
  • When using TDE, the keys should be exported from the source and imported on the destination prior to starting the migration process (see Appendix Section 5).
  • When using the Zero Data Loss Recovery Appliance as a backup and restore target for source database backups, uncomment the variable BKP_DEST_TYPE=SBT_TAPE in the properties file and comment out the variable BKP_DEST_TYPE=DISK. Also, update the variable BKP_DEST_PARM with the Zero Data Loss Recovery Appliance SBT_TAPE configuration and parameters. Only one variable for the target backup destination is accepted at a given time. Ensure that your protected database is enrolled accordingly and with a proper Recovery Window Goal; more information can be found under Configuring Recovery Appliance for Protected Database Access.
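In the properties file, switching the backup target amounts to commenting one BKP_DEST_TYPE line and uncommenting the other. The sketch below is illustrative; the placeholder values come from the parameter descriptions in the Appendix and must be replaced with your environment's own:

```text
# Backup to local/NFS storage (commented out here):
#BKP_DEST_TYPE=DISK
#BKP_DEST_PARM=/u01/app/oracle/m5/rman

# Backup to ZDLRA -- only one BKP_DEST_TYPE may be active at a time:
BKP_DEST_TYPE=SBT_TAPE
BKP_DEST_PARM="'%d_%U' PARMS \"SBT_LIBRARY=<oracle_home>/lib/libra.so,SBT_PARMS=(RA_WALLET='location=file:<oracle_home>/dbs/zdlra credential_alias=<zdlra-connect-string>')\""
```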

High level migration workflow

 

  • Perform a level 0 backup on the source
  • Restore the L0 backup taken previously on the destination
  • Perform incremental backups on source and apply on the destination. Repeat this procedure until final incremental is taken, which requires minimal read-only downtime window.
  • Downtime Begins
    • Source Data Tablespaces converted to READ ONLY
    • Final Incremental backup on source
    • Metadata Tablespace Export on source
    • Apply Incremental on destination.
    • Metadata tablespace Import on destination.
  • Downtime Ends

Detailed migration workflow

 Flow

 

 

Note:
1. All steps in this procedure are run as the oracle user that is a member of the OSDBA group. OS authentication is used to connect to both the source and destination databases.
2. This procedure assumes the destination database has been created on the destination platform with the specific requirements met ("compatible" parameter, character set, and database options on both the source and destination databases).
3. General Limitations on Transporting Data are documented here:
a. https://docs.oracle.com/en/database/oracle/oracle-database/19/admin/transporting-data.html#GUID-28800719-6CB9-4A71-95DD-4B61AA603173.
b. Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where Applicable (Note 1454872.1)
c. Primary Note for Transportable Tablespaces (TTS) -- Common Questions and Issues (Note 1166564.1)
4. Cross platform is only possible with Enterprise Edition. This procedure cannot be used with Standard Edition.
5. The source and destination databases COMPATIBLE parameter must be the same, minimum 19.0.0.0.
6. RMAN configuration on the source system must not have BACKUP TYPE TO COPY. The source must have BACKUP TYPE TO BACKUPSET.

 

Migration process explained

 

The migration relies on a driver script that reads a properties file and performs tablespace backup iterations on the source database using a 'from SCN' based on the previous backup. The backup iterations are L0 (full backup), L1 (incremental backup), and L1F (final incremental backup). See Appendix Section 2.

For ease of use, the environment variables and other important variables required for the driver script are consolidated in a properties file (see Appendix Sections 3 and 4); the list of tablespaces to be migrated has its own file (see Appendix Section 5).

For more agility and simplicity, you can set up SSH equivalence between the source and destination servers if your security policies allow. SSH equivalence allows the required scripts to be copied between the source and destination automatically.

Note: If SSH equivalence cannot be established, the RMAN command file generated under the log directory on the source database must be copied manually to the destination and executed after each backup is taken on the source database. If NFS is not used, or the source and destination do not share the directory where the backup pieces are generated, the backup pieces must be made available to the destination database, and the RMAN restore cmd file must be modified to include the new path to which the backup pieces were copied.
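Rewriting the backup-piece path inside the generated restore cmd file can be done with a one-line substitution. The helper below is a hedged sketch, not part of the M5 kit; the file name and both locations in the example are assumptions to replace with your own:

```shell
#!/bin/sh
# Rewrite the backup-piece location inside a generated RMAN restore cmd file
# after the pieces were copied to a different directory on the destination.
fix_restore_path() {
    cmdfile="$1"; old_dir="$2"; new_dir="$3"
    # Keep a .bak copy of the original cmd file, then substitute every occurrence.
    sed -i.bak "s|$old_dir|$new_dir|g" "$cmdfile"
}

# Example (illustrative names):
#   fix_restore_path cmd/restore_L1_SRCDB.cmd /stage_source /stage_dest
```

Review the modified cmd file before running it under RMAN; only the directory portion of each piece name should change.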

During the final backup, the export Data Pump file should also be copied manually from the source database to the destination database DATA_PUMP_DIR directory. If the destination is a RAC database, ensure the export Data Pump file is copied to a location shared by all nodes.

 

As previously mentioned, when using the Zero Data Loss Recovery Appliance as a backup and restore target for source database backups, the backups will not be part of the block pool, i.e., they won't be virtualized. For this, the backups must be prevented from being ingested. See Appendix Section 4.

 

During the migration process, the user can issue as many incremental backups as required. The idea is to increase the frequency of level 1 backups, thereby reducing their size; smaller backups are backed up, converted, and restored more quickly. When performing the final incremental level 1 backup, the tablespaces to be migrated are put in read-only mode and the backup is taken. On the destination environment, the restore of the last backup is performed, and the impdp.sh script is executed afterwards to plug in the tablespace metadata.

 

The migration can be performed from a physical standby database if such exists. This removes the load on the primary database. The migration steps are similar except that at the last step, the standby database is converted to snapshot standby, the metadata export is performed, and the database is converted back to physical standby. This automation uses the Data Guard broker to perform these operations.

 

For additional guidance on migrating pluggable databases refer to Appendix Section 6.

 

The final incremental backup (L1F step) is the last step before the import. During this task the user will be asked to provide a privileged user password on the source database. This is needed to put the tablespaces being transported into read only mode, and to perform the export. On the destination, the impdp.sh will require the user to provide the system password during the import of the transportable tablespaces.

 

Step-by-Step Migration

 

1. Preparation

  1. Create a working directory on an NFS share that is shared between source and destination. The path to the NFS share should be identical on both source and destination.
    1. If an NFS share is not possible, use any local storage. Create the directory on the source host and copy it to the destination host. This needs to be repeated multiple times throughout the process. The path on the source and destination hosts should be identical; if not, various changes are required to the scripts, which is beyond the scope of this procedure.
  2. Download DBMIG.zip (md5sum 24b5acb9e7cbe600de7547faf1e86962) to your working directory.
  3. Unzip DBMIG.zip. The file contains:
      1. dbmig_driver_m5.sh: Migration driver script.
      2. impdp.sh: Import driver script.
      3. log directory: Created at runtime. Stores the RMAN backup log, migration logs, and other logs, including:
        1. rman_mig_bkp.log: The centralized migration logfile.
        2. rman_mig_bkp.lck: Lock file created at the beginning of every backup to prevent concurrent driver execution. If a driver execution aborts, the lock is not removed; investigate and resolve the problem before removing the lock.
      4. cmd directory: Created at runtime. Stores the RMAN backup and restore scripts in addition to:
        1. dbmig_driver.properties: Contains the environment variables required for the migration; see Appendix section 1 below for additional information. This file is user-modifiable.
        2. dbmig_ts_list.txt: Contains the list of tablespaces to be migrated; names are comma-separated without quotes; see Appendix section 2. This file is user-modifiable.
  4. Modify the variables in dbmig_driver.properties (in cmd directory) to reflect the environment you are using; refer to Appendix section 1 for the description of the variable.
      1. If backing up to disk (BKP_DEST_TYPE=DISK), it is recommended to store the RMAN backup on the shared NFS. Set BKP_DEST_PARM to a subdirectory in the working directory.
      2. If possible, for ease of use, store the Data Pump dump file in a subdirectory in the working directory. Change DIRECTORY_PATH of the DATA_PUMP_DIR directory object.
      3. If backing up to ZDLRA (BKP_DEST_TYPE=SBT) and you don't use a shared NFS, you can establish SSH equivalence between the source and destination servers. The script can automatically transfer RMAN restore scripts and Data Pump dump file. Set DEST_SERVER to the network name of the destination database. Otherwise, leave it blank.
  5. Update dbmig_ts_list.txt with the tablespace names you are migrating. See Appendix section 2 for details.
  6. Create a new, empty destination database (non-CDB or PDB). The destination database must be on the same or higher release. The destination may be on a higher patch level than the source.
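When no NFS share is available (step 1.1 above), the working directory must land at an identical path on the destination each time it is copied. A hedged sketch, assuming illustrative paths and passwordless SSH for the remote variant:

```shell
#!/bin/sh
# Replicate the working directory so it appears at the same path on the target.
# The helper and paths are illustrative assumptions, not part of the M5 kit.
copy_workdir() {
    src="$1"          # e.g. /u01/app/oracle/m5 on the source host
    dest_parent="$2"  # parent directory on the destination (same layout as source)
    # Stream the directory as a tar archive and unpack it at the destination parent.
    tar -C "$(dirname "$src")" -cf - "$(basename "$src")" | tar -C "$dest_parent" -xf -
}

# Remote variant (assumption: SSH equivalence as user oracle):
#   tar -C /u01/app/oracle -cf - m5 | ssh oracle@desthost tar -C /u01/app/oracle -xf -
```

Repeat the copy after every backup iteration so the destination always has the latest cmd and log files.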

2. Level 0 Backup

  1. On source, execute an initial level 0 (L0) backup of the source database:
    1. $ sh dbmig_driver_m5.sh L0
  2. If not using a shared NFS, copy the working directory to the destination host.
  3. On destination, set the environment to the destination database.
  4. The driver script creates a restore script. It is stored in the cmd directory:
    ls -ltra cmd/restore_L0_*  
  5. Restore the level 0 (L0) backup on the destination database using the latest restore script. Ensure the path to the libra.so and wallet is valid in the restore script.
    1. For non-CDB using backups on disk:
      $ rman target / cmdfile=<restore_cmdfile_name>           
    2. For non-CDB using ZDLRA:
      $ rman target / catalog /@scaningest:<port>/<service_name> cmdfile=<restore_cmdfile_name>
    3. For PDB connect directly into PDB
      $ rman target sys@<pdb-connect-string> cmdfile=<restore_cmdfile_name>
  6. Check the RMAN log file:
    $ egrep "WARN-|ORA-" log/restore_*.log
  7. In case of warnings or errors in the log file, contact Oracle Support.

 

3. Level 1 Backup

  1. On source, execute a level 1 (L1) incremental backup of the source database:
    1. $ sh dbmig_driver_m5.sh L1
  2. If not using a shared NFS, copy the working directory to the destination host.
  3. On destination, set the environment to the destination database.
  4. The driver script creates a restore script. It is stored in the cmd directory:
    ls -ltra cmd/restore_L1_*  
  5. Restore the level 1 (L1) backup on the destination database using the latest restore script. Ensure the path to the libra.so and wallet is valid in the restore script.
    1. For non-CDB using backups on disk:
      $ rman target / cmdfile=<restore_cmdfile_name>
    2. For non-CDB using ZDLRA:
      $ rman target / catalog /@scaningest:<port>/<service_name> cmdfile=<restore_cmdfile_name>
    3. For PDB connect directly into PDB
      $ rman target sys@<pdb-connect-string> cmdfile=<restore_cmdfile_name>
  6. Check the RMAN log file:
    $ egrep "WARN-|ORA-" log/restore_*.log
  7. In case of warnings or errors in the log file, contact Oracle Support.
  8. Repeat steps 1-6 at regular intervals until the final level 1 incremental backup.

 

4. Final Level 1 Backup

  1. Downtime starts now.
  2. On source, execute the final level 1 (L1) incremental backup of the source database:
    1. $ sh dbmig_driver_m5.sh L1F
    2. The script prompts for the system password. This is the password of the user specified in SYSTEM_USR in the properties file.
    3. Then, it sets the specified tablespaces in read-only mode.
    4. Then, it starts the final level 1 incremental backup.
    5. Finally, it starts Data Pump to perform a full transportable export.
  3. Review the Data Pump export log file.
  4. If not using a shared NFS, copy the working directory to the destination host.
  5. If desired, you can shut down the source database at this point. It is no longer needed.
  6. On destination, set the environment to the destination database.
  7. The driver script creates a restore script. It is stored in the cmd directory:
    ls -ltra cmd/restore_L1F_*  
  8. Restore the final level 1 (L1) backup on the destination database using the final restore script.
    1. For non-CDB using backups on disk:
      $ rman target / cmdfile=<restore_cmdfile_name>
    2. For non-CDB using ZDLRA:
      $ rman target / catalog /@scaningest:<port>/<service_name> cmdfile=<restore_cmdfile_name>
    3. For PDB connect directly into PDB
      $ rman target sys@<pdb-connect-string> cmdfile=<restore_cmdfile_name>
  9. Check the RMAN log file:
    $ egrep "WARN-|ORA-" log/restore_*.log
  10. In case of warnings or errors in the log file, contact Oracle Support.

 

5. Data Pump Import

  1. On destination, edit the import driver script (impdp.sh). Use information about the destination database to fill in the variables
    1. ORACLE_HOME: The path to the Oracle home of the destination database.
    2. ORACLE_SID: SID of the destination database.
    3. ORACLE_CONNECT_STRING: A connect string directly into the destination non-CDB database or destination PDB.
    4. DATA_PUMP_PARALLEL: Data Pump parallel level for the import. Use if the destination database is Oracle Database 21c or later.
    5. DATA_PUMP_TRACE: Data Pump trace level. Set to 0 to disable trace.
  2. The import driver script takes four parameters:
    1. expdp_dumpfile: Name of the Data Pump dump file, e.g., exp_UP19_240206134155.dmp
    2. rman_last_restore_log: Relative path to the last RMAN restore log, e.g., log/restore_L1F_UP19_240206134155.log
    3. run-mode: One of the below:
      1. test: Generates the Data Pump parameter file. Does not start Data Pump.
      2. test-readonly: Generates the Data Pump parameter file and adds TRANSPORTABLE=KEEP_READ_ONLY to it. Review the documentation for the usage of this parameter. Does not start Data Pump.
      3. run: Generates the Data Pump parameter file. Starts Data Pump to perform the import.
      4. run-readonly: Generates the Data Pump parameter file and adds TRANSPORTABLE=KEEP_READ_ONLY to it. Review the documentation for the usage of this parameter. Starts Data Pump to perform the import.
    4. DP_ENC_PROMPT: One of the below:
      1. Y: Prompts the user for an encryption password in case one was used for the export.
      2. N: Does not prompt the user for an encryption password.
  3. Start the import driver script in test mode to verify the correctness of the generated Data Pump parameter file:
    1. $ sh impdp.sh <expdp_dumpfile> <rman_last_restore_log> test
  4. The driver script generates the parameter file in the current directory. Examine it.
    1. $ vi imp_<oracle_sid>_<timestamp>_xtts.par
  5. Do not change the generated Data Pump parameter file. If you need to change the parameter file, change the code in the driver script, so it generates the desired parameter file.
  6. Copy the Data Pump dump file to the directory path specified by the DATA_PUMP_DIR directory object in the destination database. Use the below query to determine the directory path. If destination is a PDB, be sure to connect to the PDB to execute the query:
    1. SQL> select directory_path from dba_directories where directory_name='DATA_PUMP_DIR';
  7. Start the import driver script in run mode to perform the Data Pump import:
    1. $ sh impdp.sh <expdp_dumpfile> <rman_last_restore_log> run
    2. The script starts by creating a guaranteed restore point. If the destination is a PDB, the script creates the restore point in the CDB (CDB-level). The restore point is only useful for testing in combination with TRANSPORTABLE=KEEP_READ_ONLY. Unless you use this option, re-use of the data files is not possible.
    3. Then, it starts Data Pump to perform a full transportable import. Data Pump prompts for the password of the user executing the import.
  8. Always review the Data Pump log file. Even if the Data Pump job is listed as successful, review the log file for any critical errors.
  9. When determined to go live, remove the previously created restore point.

 

Note:
If a service request needs to be opened with Oracle Support for an issue related with this procedure, the following should be uploaded:

1. The whole DBMIG directory, including the sub-directories, from the source database
2. The working directory from the destination; this includes the RMAN restore files, trace files, log files, and import Data Pump par file(s) and log(s)
3. From both databases: the alert log files, opatch lsinventory output, and the *dm* and *dw* trace files generated in the database dump directory.

 

Appendix Section 1

dbmig_driver.properties file explained

 

| Parameter | Description | Default value | Modifiable |
| --- | --- | --- | --- |
| ORACLE_HOME | The Oracle home of the source big endian database | User defined | Yes |
| ORACLE_SID | The SID of the source big endian database | User defined | Yes |
| SRC_SCAN | Source database Easy Connect string | User defined | Yes |
| MIG_PDB | Set to 1 when migrating a pluggable database, otherwise 0 for non-CDB | User defined | Yes |
| PDB_NAME | Pluggable database the tablespaces are being migrated from | User defined | Yes |
| BKP_FROM_STDBY | Set to 1 when migrating using a STANDBY database as source, otherwise 0 when migrating from the PRIMARY | User defined | Yes |
| SOURCE_DPDMP | Data Pump file location on the source | User defined | Yes |
| SYSTEM_USR | System-privileged user which changes the tablespaces to read only and performs the Data Pump export. SYSTEM is recommended | Inherited in the script | No |
| DP_TRACE | Data Pump trace level. Use 0 to disable trace. Use 3FF0300 for full transportable tracing; see MOS Doc ID 286496.1 for details | 0 | Directed by Support |
| DP_PARALLEL | Data Pump parallel setting. Accepted values: 1 to 999. Example: 16 | 1 | Yes |
| DP_ENC_PROMPT | Specifies whether Oracle Data Pump prompts you for the encryption password | N | Yes |
| BKP_DEST_TYPE | Location of backups. Accepted values: DISK, SBT_TAPE. Choose DISK to back up to local storage. Choose SBT_TAPE to use ZDLRA | User defined | Yes |
| BKP_DEST_PARM | If BKP_DEST_TYPE=DISK, enter the backup location. Example: /u01/app/oracle/m5/rman. If BKP_DEST_TYPE=SBT_TAPE, enter the channel configuration. Example: "'%d_%U' PARMS \"SBT_LIBRARY=<oracle_home>/lib/libra.so,SBT_PARMS=(RA_WALLET='location=file:<oracle_home>/dbs/zdlra credential_alias=<zdlra-connect-string>')\"" | User defined | Yes |
| CAT_CRED | If you use ZDLRA, specify the connect string. Example: <scan-name>:<port>/<service> | User defined | Yes |
| SECTION_SIZE | Section size used in RMAN backups | 64G | Yes |
| CHN | Number of RMAN channels allocated | User defined | Yes |
| DEST_SERVER | Network name of the destination server. Leave blank if you manually transfer backups and dump files | User defined | Yes |
| DEST_USER | Destination OS username. If SSH equivalence is established between the source and destination environments, passwordless SCP is used to copy files from the source to the destination. Example: oracle | User defined | Yes |
| DEST_WORKDIR | Destination working directory, used to store the RMAN restore scripts, logs, traces, import script and par files | User defined | Yes |
| DEST_DPDMP | The directory path used by DATA_PUMP_DIR on the destination. Example: /u01/app/oracle/m5/data_pump_dir | User defined | Yes |
| WORKDIR | The working directory containing the migration script, CMD and LOG directories on the source | User defined | Yes |
| LOG_DIR | The log directory under WORKDIR holding the backup log files and others on the source | Inherited in the script | No |
| CMD_DIR | The cmd directory under WORKDIR holding the backup and restore scripts on the source | Inherited in the script | No |
| DT, CMD_MKDIR, CMD_TOUCH, CMD_CAT, CMD_RM, CMD_AWK, CMD_SCP | Additional variables to set the date format and load OS commands | Inherited in the script | No |
| TS | The migrated tablespace names in list format | User defined | Yes |
| TSF | The migrated tablespaces filename | | No |
| SCN | File to keep track of the SCN when the backup was run | Inherited in the script | No |

 

dbmig_driver.properties sample file

 

############################################################
#Source database properties
#my_M5_prop_version=2
# -
# - ORACLE_HOME Path to Oracle Home
# - ORACLE_SID SID of the source database
# - SRC_SCAN Connect string to source database via SCAN.
# If no SCAN, specify source database network name.
# Enclose in single quotes
# Example: '@myhost-scan/db1'
# Example: '@localhost/pdb1'
# - MIG_PDB Accepted values: 0, 1
# Choose 0 if source is non-CDB
# Choose 1 if source is a PDB
# - PDB_NAME If source is a PDB, specify PDB name.
# Else leave blank
# Example: PDB1
# - BKP_FROM_STDBY Accepted values: 0, 1
# Choose 0 to back up from primary database,
# or if Data Guard is not in use.
# Choose 1 to back up from standby database.
############################################################
export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
export PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_SID=soldb1918
export SRC_SCAN='@scan1/soldb1918'
export MIG_PDB=0
export PDB_NAME=
export BKP_FROM_STDBY=0
############################################################
#Source Data Pump settings
# - SOURCE_DPDMP Directory path of the directory DATA_PUMP_DIR
# Example: /u01/app/oracle/m5/data_pump_dir
# - SOURCE_DPDIR Data Pump Directory, typically DATA_PUMP_DIR
# - SYSTEM_USR Username for Data Pump export.
# Do not use SYS AS SYSDBA
# Example: SYSTEM
# - DP_TRACE Data Pump trace level.
# Use 0 to disable trace.
# Use 3FF0300 to full transportable tracing
# See MOS Doc ID 286496.1 for details.
# - DP_PARALLEL Data Pump parallel setting.
# Accepted values: 1 to 999
# Example: 16
############################################################
export SOURCE_DPDMP=/u01/app/admin/soldb1918/dpdump
export SOURCE_DPDIR=DATA_PUMP_DIR
export SYSTEM_USR=SYSTEM
export DP_TRACE=0
export DP_PARALLEL=1
export DP_ENC_PROMPT=N
############################################################
#Source RMAN settings
# - BKP_DEST_TYPE Accepted values: DISK, SBT_TAPE
# Choose DISK to back up to local storage
# Choose SBT_TAPE to use ZDLRA
# - BKP_DEST_PARM If BKP_DEST_TYPE=DISK, enter location for backup:
# Example: /u01/app/oracle/m5/rman
# If BKP_DEST_TYPE=SBT_TAPE, enter channel configuration:
# Example: "'%d_%U' PARMS \"SBT_LIBRARY=<oracle_home>/lib/libra.so,SBT_PARMS=(RA_WALLET='location=file:<oracle_home>/dbs/zdlra credential_alias=<zdlra-connect-string>')\""
# - CAT_CRED If you use RMAN catalog or ZDLRA, specify connect string to catalog database
# Example: <scan-name>:<port>/<service>
# - SECTION_SIZE Section size used in RMAN backups
# - CHN Number of RMAN channels allocated
############################################################
export BKP_DEST_TYPE=DISK
export BKP_DEST_PARM=/SHAREDY/DBMIGBKP/SOLDB1918
export CAT_CRED=
export SECTION_SIZE=64G
export CHN=8
############################################################
#Destination host settings
#If specified, the script transfers the RMAN backups and
#Data Pump dump file to the destination over SSH.
#SSH equivalence is required.
# - DEST_SERVER Network name of the destination server.
# Leave blank if you manually transfer
# backups and dump files
# - DEST_USER User for SSH connection
# Example: oracle
# - DEST_WORKDIR The script working directory on destination
# Example: /u01/app/oracle/m5
# - DEST_DPDMP The directory path used by DATA_PUMP_DIR
# in destination database
# Example: /u01/app/oracle/m5/data_pump_dir
############################################################
export DEST_SERVER=srvadm01
export DEST_USER=oracle
export DEST_WORKDIR=/SHAREDY/STAGE/CROSSPLATMIG/PRIMARY
export DEST_DPDMP=/u01/app/oracle/product/19.0.0.0/dbhome_13/rdbms/log

############################################################
#Advanced settings
#Normally, you don't need to edit this section
############################################################
export WORKDIR=$PWD
export LOG_DIR=${WORKDIR}/log
export CMD_DIR=${WORKDIR}/cmd
export PATH=$PATH:$ORACLE_HOME/bin
export DT=`date +%y%m%d%H%M%S`
export CMD_MKDIR=`which mkdir`
export CMD_TOUCH=`which touch`
export CMD_CAT=`which cat`
export CMD_RM=`which rm`
export CMD_AWK=`which awk`
export CMD_SCP=`which scp`
export CMD_CUT=`which cut`
export CMD_PLATFORM=`uname`
if [[ "$CMD_PLATFORM" = "Linux" ]]; then
    export CMD_GREP="/usr/bin/grep"
elif [[ "$CMD_PLATFORM" = "AIX" ]]; then
    export CMD_GREP="/usr/bin/grep"
elif [[ "$CMD_PLATFORM" = "HPUX" ]]; then
    export CMD_GREP="/usr/bin/grep"
else
    export CMD_GREP=`which ggrep`
fi
export my_M5_prop_version=2
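The platform-dependent grep selection in the properties file can be sketched more compactly; a minimal sketch with the same behavior (Solaris falls through to ggrep):

```shell
# Minimal sketch of the platform-dependent grep selection from the
# properties file: GNU grep lives in /usr/bin on Linux/AIX/HP-UX,
# while Solaris typically provides it as ggrep
CMD_PLATFORM=$(uname)
case "$CMD_PLATFORM" in
    Linux|AIX|HPUX) CMD_GREP="/usr/bin/grep" ;;
    *)              CMD_GREP=$(command -v ggrep) ;;
esac
echo "$CMD_GREP"
```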

 

Appendix Section 2

Structure of the dbmig_ts_list.txt file

This file contains the names of the tablespaces to be migrated; the names must be separated by commas only. See the sample below.

$ cat dbmig_ts_list.txt

 

JSXTTS_1,JSXTTS_2,JSXTTS_3,JSXTTS_4,JSXTTS_5,JSXTTS_6,JSXTTS_7,JSXTTS_8,JSXTTS_9,JSXTTS_10
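A quick hedged sanity check of the file format can catch stray spaces or trailing commas before the migration scripts run (the helper function and regex below are illustrative, not part of the migration scripts):

```shell
# Hypothetical format check for dbmig_ts_list.txt: a single line of
# tablespace names separated by commas only (no spaces, no trailing comma)
check_ts_list() {
    printf '%s\n' "$1" | grep -Eq '^[A-Za-z0-9_$#]+(,[A-Za-z0-9_$#]+)*$'
}
if check_ts_list 'JSXTTS_1,JSXTTS_2,JSXTTS_3'; then
    echo "format OK"
else
    echo "format BAD"
fi
```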

 

 

Appendix Section 3

The file impdp.sh should be copied to the destination working directory.

Export ORACLE_HOME, ORACLE_SID and ORACLE_PDB_SID (if required) to reflect the database you are migrating to, and enter the password for the user performing the import when prompted.  The script requires four parameters: the export dump file name (displayed during the last L1 backup on the source), the last RMAN restore log, the run mode (test, test-readonly, run or run-readonly), and whether to prompt for an encryption password (Y or N).  The test modes let you check the generated import parameter file for correctness before executing a real run with the run parameter.

 

Impdp.sh runs as follows:

$ sh impdp.sh <exp.dmp> <log/res_L1F.log> [test|test-readonly|run-readonly|run] <encryption_pwd_prompt[Y|N]>

 

Run mode explained.

 

test

Runs the script and produces an imp_${ORACLE_SID}_xtts.par file without the parameter TRANSPORTABLE=KEEP_READ_ONLY. Check the imp_${ORACLE_SID}_xtts.par file for correctness.

$ sh impdp.sh <exp.dmp> <res.log> test N

test-readonly

Runs the script and produces an imp_${ORACLE_SID}_xtts.par file with the parameter TRANSPORTABLE=KEEP_READ_ONLY. Check the imp_${ORACLE_SID}_xtts.par file for correctness.

run

Runs the script, produces an imp_${ORACLE_SID}_xtts.par file without the parameter TRANSPORTABLE=KEEP_READ_ONLY, creates a guaranteed restore point and performs the import.

run-readonly

Runs the script, produces an imp_${ORACLE_SID}_xtts.par file with the parameter TRANSPORTABLE=KEEP_READ_ONLY, creates a guaranteed restore point and performs the import.

 

 

$ cat impdp.sh

#!/bin/bash

 

if [ $# -ne 4 ]; then

        echo "Please call this script using the syntax: $0 <expdp_dumpfile> <rman_last_restore_log> [run|test|run-readonly|test-readonly] <encryption_pwd_prompt[Y|N]>"

        echo "Example: # sh impdp.sh <expdp_dumpfile> <rman_last_restore_log> [run|test|run-readonly|test-readonly] <encryption_pwd_prompt[Y|N]>"

        exit 1

fi

 

#Full path to Oracle Home

export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_13

export PATH=$PATH:$ORACLE_HOME/bin

#SID of the destination database

export ORACLE_SID=entain1

#Connect string to destination database. If PDB, connect directly into PDB

export ORACLE_CONNECT_STRING=entain

#Data Pump directory

export DATA_PUMP_DIR=DATA_PUMP_DIR

#Data Pump parallel setting

export DATA_PUMP_PARALLEL=1

#Data Pump trace level. 0 to disable. 3FF0300 for transportable tablespace trace

export DATA_PUMP_TRACE=0

 

############################################################

#Normally, do not edit below this line

############################################################

 

export PATH=$PATH:$ORACLE_HOME/bin

export DT=`date +%y%m%d%H%M%S`

export EXPDP_DMP=${1}

export RES_LOG=${2}

export RUN_MODE=${3}

export DP_ENC_PROMPT=${4}

 

echo "userid='system@${ORACLE_CONNECT_STRING}'" > imp_${ORACLE_SID}_${DT}_xtts.par

echo "dumpfile=${EXPDP_DMP}" >> imp_${ORACLE_SID}_${DT}_xtts.par

echo "directory=${DATA_PUMP_DIR}" >> imp_${ORACLE_SID}_${DT}_xtts.par

echo "LOGTIME=ALL" >> imp_${ORACLE_SID}_${DT}_xtts.par

echo "TRACE=${DATA_PUMP_TRACE}" >> imp_${ORACLE_SID}_${DT}_xtts.par

echo "PARALLEL=${DATA_PUMP_PARALLEL}" >> imp_${ORACLE_SID}_${DT}_xtts.par

echo "LOGFILE=imp_${ORACLE_SID}_${DT}_xtts.log" >> imp_${ORACLE_SID}_${DT}_xtts.par

echo "METRICS=YES" >> imp_${ORACLE_SID}_${DT}_xtts.par

case ${DP_ENC_PROMPT} in

     "Y")

        echo "ENCRYPTION_PWD_PROMPT=YES" >> imp_${ORACLE_SID}_${DT}_xtts.par

       ;;

     "N")

        echo "ENCRYPTION_PWD_PROMPT=NO" >> imp_${ORACLE_SID}_${DT}_xtts.par

      ;;

      *)

      ;;     

esac

echo "TRANSPORT_DATAFILES=" >> imp_${ORACLE_SID}_${DT}_xtts.par

 

input=$(grep "restoring foreign" ${RES_LOG} | awk '{print $9}' | sort |  uniq)

 

IFS=$'\n' read -rd '' -a input_list <<< "$input"

output_list=()

for item in "${input_list[@]}"; do

    output_list+=("'$item',")

done

 

last_index=$(( ${#output_list[@]} - 1 ))

output_list["$last_index"]="${output_list[last_index]%,}"

 

for item in "${output_list[@]}"; do

    echo "$item" >> imp_${ORACLE_SID}_${DT}_xtts.par

done

 

 

function gen_grp {

${ORACLE_HOME}/bin/sqlplus -s /nolog << EOF

                connect / as sysdba

                set lines 333

                col time for a48

                col restore_point_time for a24

                col name for a24

                spool ${PWD}/dbmig_grp.lst

                create restore point before_imp_${DT} guarantee flashback database;

                select NAME, SCN, TIME, GUARANTEE_FLASHBACK_DATABASE, STORAGE_SIZE FROM V\$RESTORE_POINT WHERE GUARANTEE_FLASHBACK_DATABASE='YES';

                spool off;

                exit

EOF

}

 

case ${RUN_MODE} in

              "run")

                echo "Running in Run mode"

                gen_grp

                impdp parfile=imp_${ORACLE_SID}_${DT}_xtts.par

              ;;

              "run-readonly")                             

                echo "Running in read-only mode; for the final run you must flash back the database to the restore point created now"

                gen_grp

                awk 'NR==3 {print "TRANSPORTABLE=KEEP_READ_ONLY"} {print}' imp_${ORACLE_SID}_${DT}_xtts.par > tmp_file && mv tmp_file imp_${ORACLE_SID}_${DT}_xtts.par

                impdp parfile=imp_${ORACLE_SID}_${DT}_xtts.par

              ;;

              "test")                            

                echo "Running in test mode, check par file for correctness"

              ;;

              "test-readonly")                            

                echo "Running in test mode with read only, check par file for correctness"

                awk 'NR==3 {print "TRANSPORTABLE=KEEP_READ_ONLY"} {print}' imp_${ORACLE_SID}_${DT}_xtts.par > tmp_file && mv tmp_file imp_${ORACLE_SID}_${DT}_xtts.par

              ;;                        

              *)

                echo "Unknown run mode, options are: run, test, run-readonly, test-readonly"

esac
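For reference, the parameter file generated by the script looks roughly like the following (a hedged sketch assuming the settings shown above; the timestamp and datafile names are placeholders):

```
userid='system@entain'
dumpfile=exp.dmp
directory=DATA_PUMP_DIR
LOGTIME=ALL
TRACE=0
PARALLEL=1
LOGFILE=imp_entain1_240821120000_xtts.log
METRICS=YES
ENCRYPTION_PWD_PROMPT=NO
TRANSPORT_DATAFILES=
'+DATA/ENTAIN/DATAFILE/jsxtts_1.277.1151234567',
'+DATA/ENTAIN/DATAFILE/jsxtts_2.278.1151234567'
```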

 

Appendix Section 4

 

When using the Zero Data Loss Recovery Appliance as a backup and restore target, the backups from the source database must be stalled (i.e., blocked) from being indexed.

 

To do so, identify your database's DB_KEY by querying the RA_DATABASE view using the DB_UNIQUE_NAME in the Zero Data Loss Recovery Appliance database.
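For example (a hedged sketch run as RASYS; the DB_UNIQUE_NAME value is a placeholder):

```sql
SQL> SELECT db_key FROM ra_database WHERE db_unique_name = 'SOLDB1918';
```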

 

As user RASYS in the Zero Data Loss Recovery Appliance database create a stall_when view like the one below:

 

SQL> create or replace view RAI_STALL_WHEN

     AS

     SELECT task_id

       FROM task

      WHERE task_type = 140

        AND db_key = <DB_KEY>;

 

As user RASYS on the Zero Data Loss Recovery Appliance database enable the stall_when parameter:

 

 SQL> EXEC DBMS_RA_INT.config(p_name=>'_stall_when', p_value=>'ON', comments=>'Enabled for Cross Endian Migration');

 

When the migration is finished, the _stall_when flag should be removed and the view dropped.  This will trigger the INDEX_BACKUP task on the ZDLRA and virtualize the backups.

 

SQL> DROP view RAI_STALL_WHEN;

SQL> EXEC DBMS_RA_INT.config(p_name=>'_stall_when', p_value=>'OFF', comments=>'Disabled for Cross Endian Migration');

 

 

Note: Prior to creating the RAI_STALL_WHEN view, verify whether the _stall_when flag has already been set for any other reason. If the flag was already set, do not proceed with the migration and contact Oracle Support.

 

Note: If the compliance mode is implemented on the ZDLRA, the use of dbms_ra_int is considered a "break glass".

 

Note: If you are using block change tracking, then because backups to the ZDLRA are not being virtualized during the migration, backup performance can be impacted if the number of backups (L0 + L1s) exceeds the default value (8) of "_bct_bitmaps_per_file".
Reference: RMAN incremental level 1 backups are not using block change tracking file (Doc ID 1192652.1).
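To confirm whether block change tracking is enabled on the source, query the standard view:

```sql
SQL> SELECT status, filename FROM v$block_change_tracking;
```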

 

Appendix Section 5

 

When using TDE, the encryption keys should be exported from the source database keystore and imported on the destination database keystore ahead of the migration.

 

  1. On the source big endian system, export the TDE encryption keys as a user who has been granted the ADMINISTER KEY MANAGEMENT or SYSKM privilege. Protect the exported file as it contains sensitive information.

$ sqlplus / as sysdba
SQL> ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS FORCE WITH SECRET "new_password_to_encrypt_export_file" TO 'exportkeys.exp' IDENTIFIED BY "existing_keystore_password";

Keystore altered

 o Check the KEY_ID from the V$ENCRYPTION_KEYS


$ sqlplus / as sysdba
SQL> SELECT KEY_ID FROM V$ENCRYPTION_KEYS;

KEY_ID
------------------------------------------------------------------------------
AbWpl9Eq2U8Ev22+H6Ufy14AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AVZ8DckBpk8xv+9IzzfKvoMAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

 o Copy the encrypted export file ‘exportkeys.exp’ to the Linux destination:

$ scp exportkeys.exp user@server:/path

 2. On the Linux destination system, import the TDE master encryption key from the file generated in step 1

$ sqlplus / as sysdba
SQL> ADMINISTER KEY MANAGEMENT IMPORT KEYS WITH SECRET "Password_used_in_step1_export" FROM 'exportkeys.exp' IDENTIFIED BY "keystore_password_used_on_destination_db" WITH BACKUP;

Keystore altered

 o Check the KEY_ID from the V$ENCRYPTION_KEYS

$ sqlplus / as sysdba
SQL> SELECT KEY_ID FROM V$ENCRYPTION_KEYS;

KEY_ID
------------------------------------------------------------------------------
AXj7ZZ4Lq08Jv4pzTBn8/DoAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AVZ8DckBpk8xv+9IzzfKvoMAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AbWpl9Eq2U8Ev22+H6Ufy14AAAAAAAAAAAAAAAAAAAAAAAAAAAAA

 o To activate the previously imported TDE keys from the source:

$ sqlplus / as sysdba
SQL> ADMINISTER KEY MANAGEMENT USE KEY <KEY_ID> USING TAG 'Activate Key on target' FORCE KEYSTORE IDENTIFIED BY "keystore_password_used_in_step1_above" WITH BACKUP;

Keystore altered

 

Appendix Section 6

This note supports migrating a pluggable database from a big-endian source container database to a little-endian destination container database.

 

In dbmig_driver.properties, set MIG_PDB to 1 and PDB_NAME to the name of the pluggable database to be migrated.  The file dbmig_ts_list.txt should contain the list of tablespaces to be migrated.  During the migration, the automation creates a file called dbmig_ts_list.txt.pdb, which contains the name of the pluggable database appended to the names of the tablespaces to be migrated.  The content of this file is used when building the RMAN backup scripts.

 

The restore operations of the L0 and L1s are the same for every migration.  For a pluggable database migration, to ensure the tablespaces being transported are plugged into the correct destination pluggable database, adapt the connection string in impdp.sh accordingly.

 

During the restore operation, RMAN creates the datafiles in an ASM sub-directory on the destination that is different from the directory used by the pluggable database. If you want to move the files into the correct location for the pluggable database, mount the pluggable database and perform an RMAN 'BACKUP AS COPY', specifying the data diskgroup in the FORMAT clause. Once the backup is complete, perform an RMAN 'SWITCH DATAFILE TO COPY' command to update the controlfile with the correct pluggable database directory.
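As a hedged sketch of that BACKUP AS COPY / SWITCH sequence (PDB1 and +DATA are placeholders; verify the exact syntax for your RMAN version):

```sql
RMAN> ALTER PLUGGABLE DATABASE PDB1 CLOSE;
RMAN> BACKUP AS COPY PLUGGABLE DATABASE PDB1 FORMAT '+DATA';
RMAN> SWITCH PLUGGABLE DATABASE PDB1 TO COPY;
RMAN> ALTER PLUGGABLE DATABASE PDB1 OPEN;
```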
