Cloning Oracle E-Business Suite Release 12 RAC-Enabled Systems with Rapid Clone
Last Updated: April 4, 2019
This document describes the process of using the Oracle Applications Rapid Clone utility to create a clone (copy) of an Oracle E-Business Suite Release 12 system that utilizes the Oracle Database 10g, 11g or 12c Real Application Clusters feature.
The resulting duplicate Oracle Applications Release 12 RAC environment can then be used for purposes such as:
- Patch testing
- User Acceptance testing
- Performance testing
- Load testing
- QA validation
- Disaster recovery
The most current version of this document can be obtained in OracleMetaLink Note 559518.1.
There is a change log at the end of this document.
In This Document
- Section 1: Overview, Prerequisites and Restrictions
- Section 2: Configuration Requirements for the Source RAC System
- Section 3: Configuration Requirements for the Target RAC System
- Section 4: Prepare Source RAC System
- Section 5: RAC-to-RAC Cloning
- Section 6: RAC-to-Single Instance Cloning
- Section 7: Applications Tier Cloning for RAC
- Section 8: Advanced Cloning Scenarios
- Appendix A: Configuring Oracle Clusterware on the Target System Database Nodes
Note: At present, the procedures described in this document apply to UNIX and Linux platforms only, and are not suitable for Oracle Applications Release 12 RAC-enabled systems running on Windows.
A number of conventions are used in describing the Oracle Applications architecture:
Convention | Meaning |
---|---|
Application tier | Machines (nodes) running Forms, Web, and other services (servers). Also called middle tier. |
Database tier | Machines (nodes) running the Oracle Applications database. |
oracle | User account that owns the database file system (database ORACLE_HOME and files). |
CONTEXT_NAME | The CONTEXT_NAME variable specifies the name of the Applications context that is used by AutoConfig. The default is [SID]_[hostname]. |
CONTEXT_FILE | Full path to the Applications context file on the application tier or database tier. |
APPSpwd | Oracle Applications database user password. |
Source System | Original Applications and database system that is to be duplicated. |
Target System | New Applications and database system that is being created as a copy of the source system. |
ORACLE_HOME | The top-level directory into which the database software has been installed. |
CRS_ORACLE_HOME | The top-level directory into which the Cluster Ready Services (CRS) software has been installed. |
ASM_ORACLE_HOME | The top-level directory into which the Automatic Storage Management (ASM) software has been installed. |
RMAN | Oracle's Recovery Manager utility, which ships with the 10g, 11g and 12c Database. |
Image | The RMAN proprietary-format files from the source system backup. |
Monospace Text | Represents command line text. Type such a command exactly as shown. |
[ ] | Text enclosed in square brackets represents a variable. Substitute a value for the variable text. Do not type the square brackets. |
\ | On UNIX, the backslash character is entered at the end of a command line to indicate continuation of the command on the next line. |
Section 1: Overview, Prerequisites and Restrictions
1.1 Overview
Converting Oracle E-Business Suite Release 12 from a single instance database to a multi-node Oracle Real Application Clusters (Oracle RAC) enabled database (described in OracleMetalink Note 388577.1) is a complex and time-consuming process. It is therefore common for sites to maintain only a single E-Business Suite environment in which Oracle RAC is enabled; typically, this will be the main production system. In many large enterprises, however, there is often a need to maintain two or more Oracle RAC-enabled environments that are exact copies (or clones) of each other. This may be needed, for example, when undertaking specialized development, testing patches, or working with Oracle Global Support Services. It is not advisable to carry out such tasks on a live production system, even if it is the only environment enabled to use Oracle Real Application Clusters.
The goal of this document (and the patches mentioned herein) is to provide a rapid, clear-cut, and easily achievable method of cloning an Oracle RAC enabled E-Business Suite Release 12 environment to a new set of machines on which a duplicate RAC enabled E-Business Suite system is to be deployed.
This process will be referred to as RAC-To-RAC cloning from here on.
1.2 Cluster Terminology
You should understand the terminology used in a cluster environment. Key terms include the following.
- Automatic Storage Management (ASM) is an Oracle database component that acts as an integrated file system and volume manager, providing the performance of raw devices with the ease of management of a file system. In an ASM environment, you specify a disk group rather than the traditional datafile when creating or modifying a database structure such as a tablespace. ASM then creates and manages the underlying files automatically.
- Oracle Cluster File System (OCFS2) is a general purpose cluster file system which can, for example, be used to store Oracle database files on a shared disk.
- Certified Network File System refers to an Oracle-certified network attached storage (NAS) filer; such products are available from EMC, HP, NetApp, and other vendors. See the Oracle 10g, 11g or 12c Real Application Clusters installation and user guides for details on supported NAS devices and certified cluster file systems.
- Cluster Ready Services (CRS) is the primary program that manages high availability operations in an Oracle RAC environment. The crs process manages designated cluster resources, such as databases, instances, services, and listeners.
- Oracle Real Application Clusters (Oracle RAC) is a database feature that allows multiple machines to work on the same data in parallel, reducing processing time. Of equal or greater significance, depending on the specific need, an Oracle RAC environment also offers resilience if one or more machines become temporarily unavailable as a result of planned or unplanned downtime.
1.3 Prerequisites
- This document is only for use in RAC-To-RAC cloning of a source Oracle E-Business Suite Release 12 RAC System to a target Oracle E-Business Suite RAC System.
- The steps described in this note are for use by accomplished Applications and Database Administrators, who should be:
- Familiar with the principles of cloning an Oracle E-Business Suite system, as described in OracleMetaLink Note 406982.1, Cloning Oracle Applications Release 12 with Rapid Clone.
- Familiar with Oracle Database Server 10g, 11g or 12c, and have at least a basic knowledge of Oracle Real Application Clusters (Oracle RAC).
- Experienced in the use of RapidClone, AutoConfig, and AD utilities, as well as the steps required to convert from a single instance Oracle E-Business Suite installation to a RAC-enabled one.
- The source system must remain in a running and active state during database Image creation.
- The addition of database RAC nodes (beyond the assumed secondary node) is, from the RapidClone perspective, easily handled. However, the Clusterware software stack and cluster-specific configuration must be in place first, to allow RapidClone to configure the database technology stack properly. The CRS-specific steps required for the addition of database nodes are briefly covered in Appendix A; refer to the Oracle Clusterware product documentation for greater detail and understanding.
- Details such as operating system configuration of mount points, installation and configuration of ASM, OCFS2, NFS or other forms of cluster file systems are not covered in this document.
- Oracle Clusterware installation and component service registration are not covered in this document.
- See the documents referenced in Section 3 and Appendix A of this note when planning to set up Real Application Clusters and shared devices.
1.4 Restrictions
Before using RapidClone to create a clone of an Oracle E-Business Suite Release 12 RAC-enabled system, you should be aware of the following restrictions and limitations:
- This RAC-To-RAC cloning procedure can be used on Oracle Database 10g, 11g and 12c RAC Systems.
- The final cloned RAC environment will:
- Use the Oracle Managed Files option for datafile names.
- Contain the same number of redo log threads as the source system.
- Have all datafiles located under a single "DATA_TOP" location.
- Contain only a single control file, without any of the extra copies that the DBA typically expects.
- During the cloning process, no allowance is made for the use of a Flash Recovery Area (FRA). If an FRA needs to be configured on the target system, it must be done manually.
- At the conclusion of the cloning process, the final cloned Oracle RAC environment will use a pfile (parameter file) instead of an spfile. For proper CRS functionality, you should create an spfile and locate it in a shared storage location that is accessible from both Oracle RAC nodes.
- Besides ASM and OCFS2, only NetApp-branded devices (certified NFS clustered file systems) have been confirmed to work at present. While other certified clustered file systems should work for RAC-To-RAC cloning, shared storage combinations not specifically mentioned in this article are not guaranteed to work, and will therefore only be supported on a best-efforts basis.
Section 2: Configuration Requirements for the Source Oracle RAC System
2.1 Required Patches
Refer to My Oracle Support Knowledge Document 406982.1, "Cloning Oracle Applications Release 12 with Rapid Clone", to obtain the number of the latest required RapidClone Consolidated Update patch, then download and apply that patch now.
Warning: After applying any new Rapid Clone, AD or AutoConfig patch, the ORACLE_HOME(s) on the source system must be updated with the files included in those patches. To synchronize the Rapid Clone and AutoConfig files within the RDBMS ORACLE_HOME using the admkappsutil.pl utility, refer to OracleMetaLink Note 387859.1, Using AutoConfig to Manage System Configurations in Oracle E-Business Suite Release 12, and follow the instructions in the section "System Configuration and Maintenance", subsection "Patching AutoConfig".
2.2 Supported Oracle RAC Migration
The source Oracle E-Business Suite RAC environment must be created in accordance with My Oracle Support Knowledge Document 388577.1, Using Oracle 10g Release 2 Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12. The RAC-To-RAC cloning process described here has only been validated for use on Oracle E-Business Suite Release 12 systems that have been converted to use Oracle RAC as per this note.
2.3 AutoConfig Compliance on Oracle RAC Nodes
Also in accordance with My Oracle Support Knowledge Document 388577.1, AutoConfig must have been used during Oracle RAC configuration of the source system (following conversion).
2.4 Supported Datafile Storage Methods
The storage method used for the source system datafiles must be one of the following Oracle 10g/11g/12c RAC Certified types:
- NFS Clustered File Systems (such as NetApp Filers)
- ASM (Oracle Automatic Storage Management)
- OCFS2 (Oracle Cluster File System V2)
2.5 Archive Log Mode
The source system database instances must be in archive log mode, and the archive log files must be located within the shared storage area where the datafiles are currently stored. This conforms to standard Oracle RAC best practices.
Warning: If the source system was not previously in archive log mode but it has recently been enabled, or if the source system parameter LOG_ARCHIVE_DEST was at some point set to a local disk directory location, you must ensure that RMAN has a properly maintained list of valid archive logs located exclusively in the shared storage area.
To confirm RMAN knows only of your archive logs located on the shared disk storage area, do the following.
First, use SQL*Plus or RMAN to show the locations of the archive logs. For example:
SQL> archive log list
If the output shows a local disk location, change this location appropriately, and back up or relocate any archive log files to the shared storage area. It will then be necessary to correct the RMAN archive log manifest, as follows:
RMAN> crosscheck archivelog all;
Review the output archive log file locations and, assuming you have relocated or removed any locally stored archive logs, you will need to correct the invalid or expired archive logs as follows:
RMAN> delete expired archivelog all;
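As a final check, you can list the archive logs RMAN now considers valid; every entry reported should reside on the shared storage area. For example:
RMAN> list archivelog all;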
It is essential to carry out the above steps (if applicable) before you continue with the Oracle E-Business Suite Release 12 RAC cloning procedure.
2.6 Control File Location
The database instance control files must be located in the shared storage area as well.
Section 3: Configuration Requirements for the Target RAC System
3.1 User Equivalence between Oracle RAC Nodes
Set up ssh and rsh user equivalence (that is, without password prompting) between primary and secondary target Oracle RAC nodes. This is described in the following documents:
- Oracle Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide 10g Release 2 (10.2), with the required steps being listed in Section 2.4.7, "Configuring SSH on All Cluster Nodes".
- Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2), with the required steps being listed in Section E.1.2, "Configuring SSH on All Cluster Nodes".
- Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1), with the required steps being listed in Section F.1.2. "Configuring SSH on All Cluster Nodes".
Note: SSH connectivity can also be set up automatically during installation. This is described in:
- Section 2.14, "Automatic SSH Configuration During Installation" in Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2)
- Section 4.16, "Using Automatic SSH Configuration During Installation" in Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1).
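Once equivalence is configured, a quick way to verify it is to run a remote command from each node and confirm that no password prompt appears (hostnames hypothetical):
$ ssh suzuki date
$ rsh suzuki date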
3.2 Install Cluster Manager
Install Oracle Cluster Manager, and update the version to match that of the source system database. For example, if the original source system database is 10.2.0.3, Cluster Manager must also be patched to the 10.2.0.3 level.
Note: For detailed instructions regarding the installation and usage of Oracle's Clusterware software as it relates to Oracle Real Applications Clusters, see the following articles:
- Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide 10g Release 2 (10.2).
- Oracle Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2)
- Oracle Real Application Clusters Administration and Deployment Guide 12c Release 1 (12.1)
3.3 Verify Shared Mount Points or Disks
Ensure that all shared disk sub-systems are fully and properly configured: they need to have adequate space, be writable by the future oracle software owner, and be accessible from both primary and secondary nodes.
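A simple manual sanity check, run as the future oracle software owner on each node (mount point hypothetical):
$ df -h /u01/shared_data
$ touch /u01/shared_data/rw_test && rm /u01/shared_data/rw_test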
Note: For details on configuring ASM, OCFS2, and NFS with NetApp Filer, see the following articles:
- The following documents contain details on creating ASM instances:
- Oracle Database Administrator's Guide 10g Release 2 (10.2)
- Oracle Automatic Storage Management Administrator's Guide 11g Release 2 (11.2)
- Oracle Automatic Storage Management Administrator's Guide 12c Release 1 (12.1)
- Oracle Cluster File System User's Guide contains details on installing and configuring OCFS2. For OCFS best practices, refer to Linux OCFS - Best Practices.
- Linux/NetApp RHEL/SUSE Setup Recommendations for NetApp Filer Storage contains details specific to Linux NFS mount options. See Configuring Network Appliance's NetApp To Work With Oracle for details on where to find NetApp co-authored articles related to using NetApp-branded devices with Oracle products.
Note: For ASM target deployments, it is strongly recommended that a separate $ORACLE_HOME be installed for ASM management. Whatever the location of your ASM listener configuration, you must change the default listener configuration via the netca executable: the ASM default listener name (or service name) must not be of the form LISTENER_[HOSTNAME], because that name will be specified and used later by AutoConfig for the RAC-enabled Oracle E-Business Suite database listener.
3.4 Verify Network Layer Interconnects
Ensure that the network layer is properly defined for private, public and VIP (Clusterware) Interconnects. This should not be a problem if runcluvfy.sh from the Oracle Clusterware software stage area was executed without error prior to CRS installation.
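For reference, a typical pre-installation verification from the Clusterware stage area looks like the following (node names hypothetical):
$ ./runcluvfy.sh stage -pre crsinst -n kawasaki,suzuki -verbose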
Section 4: Preparing the Source Oracle RAC System for Cloning
4.1 Update the File System with the latest Oracle RAC Patches
The latest RapidClone Consolidated Update patch (with the post-patch steps in its README) and all prerequisite patches should already have been applied, as described in Section 2 of this note. After patch application, adpreclone.pl must be re-executed on all the application tiers and database tiers. For example, on the database tier, the following command would be used:
$ cd $ORACLE_HOME/appsutil/scripts/[context_name]
$ perl adpreclone.pl dbTier
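On each application tier node, the corresponding command is run from the instance scripts directory:
$ cd $ADMIN_SCRIPTS_HOME
$ perl adpreclone.pl appsTier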
After executing adpreclone.pl on all the application and database tiers, perform the steps below.
4.2 Create Database Image
Note: Do NOT shut down the source system database services to complete the steps in this section. The database must remain mounted and open for the imaging process to complete successfully. RapidClone for RAC-enabled Oracle E-Business Suite Release 12 systems operates differently from single instance cloning in this respect.
Login to the primary Oracle RAC node, navigate to [ORACLE_HOME]/appsutil/clone/bin, and run the adclone.pl utility from a shell as follows:
perl adclone.pl \
java=[JDK 1.5 Location] \
mode=stage \
stage=[Stage Directory] \
component=database \
method=RMAN \
dbctx=[RAC DB Context File] \
showProgress
Where:
Parameter | Usage |
---|---|
stage | Any directory or mount point location outside the current ORACLE_HOME location, with enough space to hold the existing database datafiles in an uncompressed form. |
dbctx | Full Path to the existing Oracle RAC database context file. |
The above command will create a series of directories under the specified stage location.
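For illustration only, a filled-in invocation might look like the following (the JDK path, stage location, and context file name are all hypothetical):
perl adclone.pl \
java=/usr/jdk1.5.0_22 \
mode=stage \
stage=/u02/rman_stage \
component=database \
method=RMAN \
dbctx=/u01/oracle/db/10.2.0/appsutil/VIS1_kawasaki.xml \
showProgress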
After the stage creation is completed, navigate to [stage]/data/stage. In this directory, you will find several 2GB RMAN backup/image files, with names like "1jj9c44g_1_1". The number of files present will depend on the source system configuration. These files, along with "backup_controlfile.ctl", will need to be transferred to the target system upon which you wish to create your new primary Oracle RAC node.
These files should be placed into a temporary holding area, which will ultimately be removed later.
4.3 Archive the ORACLE_HOME
Note: The database may be left up and running during the ORACLE_HOME archive creation process.
Create an archive of the source system ORACLE_HOME on the primary node:
$ cd $ORACLE_HOME/..
$ tar -cvzf rac_db_oh.tgz [DATABASE TOP LEVEL DIRECTORY]
Note: Consider using data integrity utilities such as md5sum, sha1sum, or cksum to validate the file sum both before and after transfer to the target system.
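For example, using md5sum with the archive created above:
$ md5sum rac_db_oh.tgz > rac_db_oh.tgz.md5   # on the source node
$ md5sum -c rac_db_oh.tgz.md5                # on the target node, after transferring both files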
This source system ORACLE_HOME archive should now be transferred to the target system RAC nodes upon which you will be configuring the new system, and placed in the directory you wish to use as the new $ORACLE_HOME.
Section 5: RAC-to-RAC Cloning
5.1 Target System Primary Node Configuration (Clone Initial Node)
Follow the steps below to clone the primary node (i.e. Node 1) to the new target system.
5.1.1 Uncompress ORACLE_HOME
Uncompress the ORACLE_HOME archive that was transferred from the source system. Choose a suitable location, and rename the extracted top-level directory name to something meaningful on the new target system.
$ tar -xvzf rac_db_oh.tgz
5.1.2 Create pairsfile.txt File for Primary Node
Create a [NEW_ORACLE_HOME]/appsutil/clone/pairsfile.txt text file with contents as shown below:
s_undo_tablespace=[UNDOTBS1 for Initial Node]
s_dbClusterInst=[Total number of Instances in a cluster e.g. 2]
s_db_oh=[Location of new ORACLE_HOME]
5.1.3 Create Context File for Primary Node
Execute the following command to create a new context file, providing carefully determined answers to the prompts.
Navigate to [NEW_ORACLE_HOME]/appsutil/clone/bin and run the adclonectx.pl utility with the following parameters:
perl adclonectx.pl \
contextfile=[PATH to OLD Source RAC contextfile.xml] \
template=[NEW ORACLE_HOME]/appsutil/template/adxdbctx.tmp \
pairsfile=[NEW ORACLE_HOME]/appsutil/clone/pairsfile.txt \
initialnode
Where:
Parameter | Usage |
---|---|
contextfile | Full path to the old source RAC database context file. |
template | Full path to the existing database context file template. |
pairsfile | Full path to the pairsfile created in the last step. |
Note: A new and unique global database name (DB name) must be selected when creating the new target system context file. Do not use the source system global database name or SID name during any of the context file interview prompts shown below.
You will be presented with the following questions [sample answers provided]:
Target System Hostname (virtual or normal) [<hostname>] : [Enter appropriate value if not defaulted]
Do you want the inputs to be validated (y/n) [n] ? : [Enter n]
Target Instance is RAC (y/n) [y] : [Enter y]
Target System Database Name : [Enter new desired global DB name, not a SID; <DB NAME> global name was selected here]
Do you want the target system to have the same port values as the source system (y/n) [y] ? : [Select yes or no]
Provide information for the initial RAC node:
Host name [<hostname>] : [Always need to change this value to the current public machine name, for example kawasaki]
Virtual Host name [null] : [Enter the Clusterware VIP interconnect name, for example kawasaki-vip]
Instance number [1] : 1 [Enter 1, as this will always be the instance number when you are on the primary target node]
Private interconnect name [<Private interconnect name>] : [Always need to change this value; enter the private interconnect name, such as kawasaki-priv]
Target System quorum disk location required for cluster manager and node monitor : /tmp [Legacy parameter; just enter /tmp]
Target System cluster manager service port : 9998 [This is a default port used for CRS]
Target System Base Directory : [Enter the base directory that contains the new ORACLE_HOME directory]
Oracle OS User [oracle] : [Should default to correct current user; just hit enter]
Oracle OS Group [dba] : [Should default to correct current group; just hit enter]
Target System utl_file_dir Directory List : <source utl_file_dir value> [Specify an appropriate value for your requirements]
Number of DATA_TOP's on the Target System [2] : 1 [At present, you can only have one DATA_TOP with RAC-To-RAC cloning]
Target System DATA_TOP Directory 1 : +APPS_RAC_DISK [The shared storage location; ASM diskgroup/NetApp NFS mount point/OCFS2 mount point]
Do you want to preserve the Display [null] (y/n) ? : [Respond according to your requirements]
New context path and file name [<new context file path>] : [Double-check proposed location, and amend if needed]
Note: It is critical that the correct values are selected above: if you are uncertain, review the newly-written context file and compare it with values selected during source system migration to RAC (as per OracleMetalink Note 388577.1).
When making comparisons, always ensure that any path differences between the source and target systems are understood and accounted for.
Note: If the most current AutoConfig Template patch as listed in OracleMetalink Note 387859.1 has already been applied to the source system, it is necessary to edit the value of the target system context variable "s_db_listener" to reflect the desired name for the TNS listener. The traditionally accepted value is "LISTENER_[HOSTNAME]", but it may be any valid value unique to the target host.
5.1.4 Restore Database on Target System Primary Node
Warning: Cloning an E-Business Suite RAC-enabled environment to the same host is NOT recommended. If the source and target systems must share a host, make certain the source system is cleanly shut down and its datafiles are moved to a temporarily inaccessible location before restoring/recovering the new target system.
Failure to heed this warning could result in corrupt redo logs on the source system. Same-host RAC cloning requires the source system to be down.
Warning: In addition to same-host RAC node cloning, it is also NOT recommended to clone an E-Business Suite RAC-enabled environment to a target system that can directly access the source system dbf files (for example, via an NFS shared mount). If the intended target file system has access to the source dbf files, corruption of redo log files can occur on the source system. Corruption is also possible if ANY dbf files exist on the intended target file system under a path matching the original source mount point (i.e. /foo/datafiles). If existing datafiles on the target reside in a file system location that is also present on the source server (i.e. /foo/datafiles), shut down the database that owns those datafiles.
Failure to heed this warning could result in corrupt redo logs on the source system, or in any existing database on the target host whose mount point matches that of the original (and perhaps unrelated) source system. If unsure, shut down any database that stores datafiles in a path which existed on the source system and in which datafiles were stored.
Restore the database after the new ORACLE_HOME is configured.
5.1.4.1 Run adclone.pl to Restore and Rename Database on New Target System
Navigate to [NEW_ORACLE_HOME]/appsutil/clone/bin and run Rapid Clone (adclone.pl utility) with the following parameters:
perl adclone.pl \
java=[JDK 1.5 Location] \
component=dbTier \
mode=apply \
stage=[ORACLE_HOME]/appsutil/clone \
method=CUSTOM \
dbctxtg=[Full Path to the Target Context File] \
rmanstage=[Location of the Source RMAN dump files, i.e. RMAN_STAGE/data/stage] \
rmantgtloc=[Shared storage location for datafiles: ASM diskgroup / NetApp NFS mount / OCFS2 mount point] \
srcdbname=[Source RAC system GLOBAL name] \
pwd=[APPS Password] \
showProgress
Where:
Parameter | Usage |
---|---|
java | Full path to the directory where JDK 1.5 is installed. |
stage | This parameter is static and refers to the newly-unzipped [ORACLE_HOME]/appsutil/clone directory. |
dbctxtg | Full path to the new context file created by adclonectx.pl under [ORACLE_HOME]/appsutil. |
rmanstage | Temporary location where you have placed database "image" files transferred from the source system to the new target host. |
rmantgtloc | Base directory or ASM diskgroup location into which you wish the database (dbf) files to be extracted. The recreation process will create subdirectories of [GLOBAL_DB_NAME]/data, into which the dbf files will be placed. Only the shared storage mount point top level location needs be supplied. |
srcdbname | Source system GLOBAL_DB_NAME (not the SID of a specific node). Refer to the source system context file parameter s_global_database_name. Note that no domain suffix should be added. |
pwd | Password for the APPS user. |
Note: The directories and mount points selected for the rmanstage and rmantgtloc locations should not contain datafiles for any other databases. The presence of unrelated datafiles may result in very lengthy restore operations, and on some systems a potential hang of the adclone.pl restore command.
Running the adclone.pl command may take several hours. From a terminal window, you can run:
$ tail -f [ORACLE_HOME]/appsutil/log/[CONTEXT_NAME]/ApplyDatabase_[time].log
This will display and periodically refresh the last few lines of the main log file (mentioned when you run adclone.pl), where you will see references to additional log files that can help show the current actions being executed.
Note: If the database version is 12c Release 1, be sure to add one of the following lines in your sqlnet_ifile.ora after adclone.pl execution completes:
SQLNET.ALLOWED_LOGON_VERSION_SERVER = 8 (if the initialization parameter SEC_CASE_SENSITIVE_LOGON is set to FALSE)
SQLNET.ALLOWED_LOGON_VERSION_SERVER = 10 (if SEC_CASE_SENSITIVE_LOGON is set to TRUE)
5.1.4.2 Verify TNS Listener has been started
After the above process exits, and it has been confirmed that no errors were encountered, you will have a running database and TNS listener, with the new SID name chosen earlier.
Confirm that the TNS listener is running, and has the appropriate service name format as follows:
$ ps -ef | grep tns | awk '{ print $9}'
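You can also query the listener directly, assuming the LISTENER_[hostname] naming convention used by AutoConfig:
$ lsnrctl status LISTENER_[hostname]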
5.1.4.3 Run AutoConfig
At this point, the new database is fully functional. However, to complete the configuration you must navigate to [ORACLE_HOME]/appsutil/scripts/[CONTEXT_NAME] and execute the following command to run AutoConfig:
$ adautocfg.sh appspass=[APPS Password]
5.2 Target System Secondary Node Configuration (Clone Additional Nodes)
Follow the steps below to clone the secondary nodes (for example, Node 2) on to the target system.
5.2.1 Uncompress the archived ORACLE_HOME transferred from the Source System
Uncompress the source system ORACLE_HOME archive to a location matching that present on your target system primary node. The directory structure should match that of the newly created target system primary node.
$ tar -xvzf rac_db_oh.tgz
5.2.2 Archive the [ORACLE_HOME]/appsutil directory structure from the new Primary Node
Log in to the new target system primary node, and execute the following commands:
$ cd [ORACLE_HOME]
$ zip -r appsutil_node1.zip appsutil
5.2.3 Copy appsutil_node1.zip to the Secondary Target Node
Transfer and then expand the appsutil_node1.zip into the secondary target RAC node [NEW ORACLE_HOME]:
$ cd [NEW ORACLE_HOME]
$ unzip -o appsutil_node1.zip
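The transfer step above can be performed with any file-copy tool; for example, using scp (hostname hypothetical):
$ scp appsutil_node1.zip oracle@suzuki:[NEW ORACLE_HOME]/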
5.2.4 Update pairsfile.txt for the Secondary Target Node
Alter the existing pairsfile.txt (from the first target node) and change the s_undo_tablespace parameter. As this is the second node, the correct value would be UNDOTBS2. As an example, the [NEW_ORACLE_HOME]/appsutil/clone/pairsfile.txt would look like:
s_undo_tablespace=[UNDOTBS2, or UNDOTBS(n) for additional nodes]
s_dbClusterInst=[Total number of Instances in a cluster e.g. 2]
s_db_oh=[Location of new ORACLE_HOME]
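For a two-node cluster, the completed file might read as follows (values hypothetical):
s_undo_tablespace=UNDOTBS2
s_dbClusterInst=2
s_db_oh=/u01/oracle/db/10.2.0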
5.2.5 Create a Context File for the Secondary Node
Navigate to [NEW_ORACLE_HOME]/appsutil/clone/bin and run the adclonectx.pl utility as follows:
perl adclonectx.pl \
contextfile=[Path to Existing Context File from the First Node] \
template=[NEW ORACLE_HOME]/appsutil/template/adxdbctx.tmp \
pairsfile=[NEW ORACLE_HOME]/appsutil/clone/pairsfile.txt \
addnode
Where:
Parameter | Usage |
---|---|
contextfile | Full path to the existing context file from the first (primary) node. |
template | Full path to the existing database context file template. |
pairsfile | Full path to the pairsfile updated in the previous step. |
Several of the interview prompts are the same as on Node 1. However, there are some new questions which are specific to the "addnode" option used when on the second node.
Note: When answering the questions below, review your responses carefully before entering them. The rest of the inputs (not shown) are the same as those encountered during the context file creation on the initial node (primary node).
Host name of the live RAC node : <Hostname of RAC Node 1> [Enter appropriate value if not defaulted]
Domain name of the live RAC node : <domain name of RAC Node 1> [Enter appropriate value if not defaulted]
Database SID of the live RAC node : <DB SID of RAC Node 1> [Enter the individual SID, NOT the Global DB name]
Listener port number of the live RAC node : <DB port of RAC Node 1> [Enter the port number of the primary target node you just created]
Provide information for the new Node:
Host name : <Hostname of current node> [Enter appropriate value if not defaulted, for example suzuki]
Virtual Host name : <Clusterware interconnect name> [Enter the Clusterware VIP interconnect name, for example suzuki-vip.yourdomain.com]
Instance number : <Instance number for current node> [Enter the instance number for this current node]
Private interconnect name : <Private interconnect name> [Enter the private interconnect name, for example suzuki-priv]
Current Node:
Host Name : <Hostname of current node>
SID : <DB SID of current node>
Instance Name : <Instance Name of current node>
Instance Number : <Instance number>
Instance Thread : <Instance thread>
Undo Table Space: <Undo tablespace name for current node> [Enter the value earlier added to pairsfile.txt, if not defaulted]
Listener Port : <Listener port>
Target System quorum disk location required for cluster manager and node monitor : [Legacy parameter; enter /tmp]
Note: At the conclusion of these interview questions related to context file creation, look carefully at the generated context file and ensure that the values contained therein match the values entered during context file creation on Node 1. The values should be almost identical; a small but important exception is that the local instance name will have a number 2 instead of a 1.
Note: If the most current AutoConfig Template patch as listed in OracleMetalink Note 387859.1 has already been applied to the source system, it is necessary to edit the value of the target system context variable "s_db_listener" to reflect the desired name for the TNS listener. The traditionally accepted value is "LISTENER_[HOSTNAME]", but it may be any valid value unique to the target host.
5.2.6 Configure NEW ORACLE_HOME
Run the commands below to move to the correct directory and continue the cloning process:
$ cd [NEW ORACLE_HOME]/appsutil/clone/bin
$ perl adcfgclone.pl dbTechStack [Full path to the database context file created in the previous step]
Note: At the conclusion of this command, you will receive a console message indicating that the process exited with status 1 and that the addlnctl.sh script failed to start a listener named [SID]. That is expected, as this is not the proper service name. Start the proper listener by executing the following command:
[NEW_ORACLE_HOME]/appsutil/scripts/[CONTEXT_NAME]/addlnctl.sh start LISTENER_[hostname].
This command will start the correct (RAC-specific) listener with the proper service name.
Note: If the database version is 12c Release 1, be sure to add one of the following lines in your sqlnet_ifile.ora after adcfgclone.pl execution completes:
SQLNET.ALLOWED_LOGON_VERSION_SERVER = 8 (if the initialization parameter SEC_CASE_SENSITIVE_LOGON is set to FALSE)
SQLNET.ALLOWED_LOGON_VERSION_SERVER = 10 (if SEC_CASE_SENSITIVE_LOGON is set to TRUE)
5.2.7 Source the new environment file in the ORACLE_HOME
Run the commands below to move to the correct directory and source the environment:
$ cd [NEW ORACLE_HOME]
$ . ./[CONTEXT_NAME].env
5.2.8 Modify [SID]_APPS_BASE.ora
Edit the [SID]_APPS_BASE.ora file and change the control file parameter to reflect the correct control file location on the shared storage. This will be the same value as in the [SID]_APPS_BASE.ora on the target system primary node which was just created.
5.2.9 Start Oracle RAC Database
Start the database using the following commands:
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> startup
5.2.10 Execute AutoConfig
Run AutoConfig to generate the proper listener.ora and tnsnames.ora files:
$ cd $ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME
$ ./adautocfg.sh appspass=[APPS Password]
5.3 Carry Out Target System (Primary Node) Final Oracle RAC Configuration Tasks
5.3.1 Recreate TNSNAMES and LISTENER.ORA
Login again to the target primary node (Node 1) and run AutoConfig to perform the final Oracle RAC configuration and create new listener.ora and tnsnames.ora (as the FND_NODES table did not contain the second node hostname until AutoConfig was run on the secondary target RAC node).
$ cd $ORACLE_HOME/appsutil/scripts/[CONTEXT_NAME]
$ ./adautocfg.sh appspass=[APPS Password]
Note: This execution of AutoConfig on the primary target RAC Node 1 will add the second RAC node's connection information to the first node's tnsnames.ora, so that listener load balancing can occur. If you have more than two nodes in your new target system cluster, repeat Sections 5.2 and 5.3 for all subsequent nodes.
Section 6: RAC to Single Instance Cloning
It is now possible to clone from a RAC enabled E-Business Suite (source) environment to a Single Instance E-Business Suite (target) environment following nearly the same process detailed above in Section 5.
To clone from a RAC source environment to a Single Instance target, the image creation process described in Section 4 remains unchanged. On the target host, however, while working through Section 5, the context file creation (step 5.1.3 above) should be performed as for Single Instance cloning; all other primary target restore tasks from Section 5 remain the same. Disregard any references to secondary node configuration (starting at step 5.2), as they do not apply here.
For example:
Target Instance is RAC (y/n) [y] : [Enter n]
The Rapid Clone command to restore the database on the target system (step 5.1.4) remains the same whether the target is to be RAC or Single Instance.
Note: In a RAC to Single Instance cloning scenario, there are no data structure changes to the database with regard to UNDO tablespaces or REDO log groups or members; these structures remain as they were in the source system RAC database. In some cases, the DBA may wish to reduce this complexity carried over from the source RAC environment, as sketched below.
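For example, a DBA might remove the second instance's redo thread and undo tablespace once satisfied that the Single Instance target is stable. A minimal sketch, assuming thread 2 owns redo log groups 3 and 4 and uses undo tablespace UNDOTBS2 (verify the actual names in V$LOG and DBA_TABLESPACES before running anything):
SQL> alter database disable thread 2;
SQL> alter database drop logfile group 3;
SQL> alter database drop logfile group 4;
SQL> drop tablespace UNDOTBS2 including contents and datafiles;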
Section 7: Applications Tier Cloning for RAC
The target system Applications Tier may be located in any one of these locations:
- Primary target database node
- Secondary target database node
- An independent machine, hosting neither of the target system RAC database nodes
- Shared between two or more machines
Because of the complexities which might arise, it is suggested that the applications tier should initially be configured to connect to a single database instance. After proper configuration with one of the two target system RAC nodes has been achieved, context variable changes can be made such that JDBC and TNS Listener load balancing are enabled.
7.1 Clone the Applications Tier
To clone the applications tier, follow the standard steps for the applications node given in Sections 2 and 3 of OracleMetalink Note 406982.1, Cloning Oracle Applications Release 12 with Rapid Clone. This includes the adpreclone steps, copying the bits to the target, the configuration portion, and the finishing tasks.
Note: On the applications tier, during the adcfgclone.pl execution, you will be asked for the database to which the applications tier services should connect. Enter the information specific to a single target system RAC node (such as the primary node). On successful completion of this step, the applications node services will be started, and you should be able to log in and use the new target Applications system.
7.2 Configure Application Tier JDBC and Listener Load Balancing
Reconfigure the applications node context variables such that database listener/instance load balancing can occur.
Note: The following details have been extracted from OracleMetalink Note 388577.1 for your convenience. Consult this note for further information.
Implement load balancing for the Applications database connections:
- Run the context editor (through Oracle Applications Manager) and set the values of "Tools OH TWO_TASK" (s_tools_two_task), "iAS OH TWO_TASK" (s_weboh_twotask), and "Apps JDBC Connect Alias" (s_apps_jdbc_connect_alias).
- To load-balance the Forms-based Applications database connections, set the value of "Tools OH TWO_TASK" to point to the [database_name]_balance alias generated in the tnsnames.ora file.
- To load-balance the self-service Applications database connections, set the values of "iAS OH TWO_TASK" and "Apps JDBC Connect Alias" to point to the [database_name]_balance alias generated in the tnsnames.ora file.
- After successful completion of AutoConfig, restart the Applications tier processes via the scripts located in $ADMIN_SCRIPTS_HOME.
- Ensure that the value of the profile option "Application Database ID" is set to the dbc file name generated in $FND_SECURE.
The AutoConfig run referred to above is performed with:
cd $ADMIN_SCRIPTS_HOME; ./adautocfg.sh
Section 8: Advanced Cloning Scenarios
8.1 Cloning the Database Separately
In certain cases, customers may require the RAC database to be recreated separately, without using the full lock-step mechanism employed during a regular E-Business Suite RAC RapidClone scenario.
This section documents the steps needed to allow for manual creation of the target RAC database control files (or the reuse of existing control files) within the Rapid Clone process.
Unless otherwise noted, all commands are specific to the primary target database instance.
Follow ONLY steps 1 and 2 in Section 2: Cloning Tasks of OracleMetalink Note 406982.1, then continue with the steps below to complete cloning the database separately.
- Log on to the primary target system host as the ORACLE UNIX user.
- Copy and uncompress the Oracle Home from the source database tier and create the target database tier context file as noted above in Section 5: RAC-to-RAC Cloning: execute ONLY steps 5.1.1, 5.1.2 and 5.1.3.
- Configure the [RDBMS ORACLE_HOME] by executing the following commands:
$ cd [RDBMS_ORACLE_HOME]/appsutil/clone/bin
$ perl adcfgclone.pl dbTechStack [Full path to the database context file created in Step 5.1.3]/<contextfile>.xml
- Create the target database control files manually (if needed), or modify the existing control files as needed to define datafile, redo and archive log locations, along with any other relevant and required settings. In this step, you copy and recreate the database using your preferred method, such as RMAN restore, Flash Copy, Snap View, or Mirror View.
- Start the new target RAC database in open mode.
- Run the library update script against the RAC database:
$ cd [RDBMS ORACLE_HOME]/appsutil/install/[CONTEXT_NAME]
$ sqlplus "/ as sysdba" @adupdlib.sql [libext]
- Configure the primary target database. The database must be running and open before performing this step.
$ cd [RDBMS ORACLE_HOME]/appsutil/clone/bin
$ perl adcfgclone.pl dbconfig [Database target context file]
Note: The dbconfig option will configure the database with the required settings for the new target, but it will not recreate the control files.
- When the above tasks are completed on the primary target database instance, see Section 5.2, Target System Secondary Node Configuration (Clone Additional Nodes), to configure any secondary database instances.
8.2 Additional Advanced RAC Cloning Scenarios
Rapid Clone is only certified for RAC-to-RAC and RAC-to-Single Instance Cloning at this time. Addition or removal of RAC nodes during the cloning process is not currently supported.
Appendix A: Configuring Oracle Clusterware on the Target System Database Nodes
Associating Target System Oracle RAC Database instances and listeners with Clusterware (CRS)
Add target system database, instances and listeners to CRS by running the following commands as the owner of the CRS installation:
$ srvctl add database -d [database_name] -o [oracle_home]
$ srvctl add instance -d [database_name] -i [instance_name] -n [host_name]
$ srvctl add service -d [database_name] -s [service_name]
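For illustration, with a hypothetical database VIS running instances VIS1 and VIS2 on hosts kawasaki and suzuki:
$ srvctl add database -d VIS -o /u01/oracle/db/10.2.0
$ srvctl add instance -d VIS -i VIS1 -n kawasaki
$ srvctl add instance -d VIS -i VIS2 -n suzuki
$ srvctl add service -d VIS -s VIS_SRV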
Note: For detailed instructions regarding the installation and usage of Oracle Clusterware software as it relates to Real Application Clusters, see the following articles:
- Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide 10g Release 2 (10.2)
- Oracle Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2)
- Oracle Real Application Clusters Administration and Deployment Guide 12c Release 1 (12.1)
Change Log
Date | Description |
---|---|
Jul 20, 2018 | |
Mar 02, 2016 | |
Oct 20, 2010 | |
Sep 24, 2010 | |
Mar 4, 2010 | |
Feb 15, 2010 | |
Feb 14, 2010 | |
May 12, 2009 | |
Mar 02, 2009 | |
Jul 28, 2008 | |
Jul 22, 2008 | |
Jun 18, 2008 | |
May 27, 2008 | |
May 16, 2008 | |
Apr 4, 2008 | |
Copyright 2008, Oracle
REFERENCES
NOTE:387859.1 - Using AutoConfig to Manage System Configurations in Oracle E-Business Suite Release 12
NOTE:388577.1 - Using Oracle 10g Release 2 Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12
NOTE:406982.1 - Cloning Oracle Applications Release 12 with Rapid Clone
NOTE:783188.1 - Certified RAC Scenarios for E-Business Suite Cloning