Oracle E-Business Suite Release 12.2 has numerous configuration options that can be chosen to suit particular business scenarios, uptime requirements, hardware capability, and availability requirements. This document describes how to migrate Oracle E-Business Suite Release 12.2 to an Oracle Real Application Clusters (Oracle RAC) environment running Oracle Database 12c Release 1 on Oracle Grid Infrastructure 12c Release 1 (12.1), Release 2 (12.2), or Oracle Grid Infrastructure 19c.
The CRS ORACLE_HOME, other locations, and commands might vary between these deployments. The most current version of this note is available as My Oracle Support Knowledge Document 1626606.1, Using Oracle 12c Release 1 (12.1) Real Application Clusters Database with Oracle E-Business Suite Release 12.2.
There is a change log at the end of this document.
A number of conventions are used in describing the Oracle E-Business Suite architecture:
Convention | Meaning |
---|---|
Application tier | Machines (nodes) running Forms, Web, and other services (servers). Sometimes called middle tier. |
Database tier | Machines (nodes) running the Oracle E-Business Suite database. |
oracle | User account that owns the database file system (database ORACLE_HOME and files). |
CONTEXT_NAME | The CONTEXT_NAME variable specifies the name of the Applications context that is used by AutoConfig. The default is <SID>_<hostname> . |
CONTEXT_FILE | Full path to the Applications context file on the application tier or database tier. The default locations are as follows. Application tier context file: <APPL_TOP>/admin/<CONTEXT_NAME>.xml Database tier context file: <RDBMS ORACLE_HOME>/appsutil/<CONTEXT_NAME>.xml |
APPSpwd | Oracle E-Business Suite database user password. |
Monospace Text | Represents command line text. Type such a command exactly as shown. |
< > | Text enclosed in angle brackets represents a variable. Substitute a value for the variable text. Do not type the angle brackets. |
\ | On UNIX or Linux, the backslash character can be entered to indicate continuation of the command line on the next screen line. |
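As an illustration of the CONTEXT_NAME convention above, the default value can be derived as in the following shell sketch (the SID value VIS is an assumed example, not a value from this document):

```shell
# The default AutoConfig context name is <SID>_<hostname>.
# ORACLE_SID=VIS is an assumed example value.
ORACLE_SID=VIS
CONTEXT_NAME="${ORACLE_SID}_$(hostname -s)"
echo "$CONTEXT_NAME"
```

On a host named dbhost1, for example, this yields VIS_dbhost1.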
This document is divided into the following sections:
- Section 1: Overview
- Section 2: Environment
- Section 3: Install Oracle Grid Infrastructure 12c Release 1 (12.1), Release 2 (12.2), or 19c
- Section 4: Migrate Oracle E-Business Suite Release 12.2 on a Single Database Instance to a RAC Database
- Section 5: Using Rapid Install to Install Oracle E-Business Suite Release 12.2 on a RAC Database
- Section 6: References
- Appendices: Useful for Migrating a Non-RAC System to Oracle RAC:
- Appendices: Useful for Migrating to, or Installing Oracle RAC:
- Appendix C: An Example Grid Installation
- Appendix D: Enabling and Disabling SCAN Listener Support in AutoConfig
- Appendix E: Instance and Listener Interaction
- Appendix F: Considerations When Using a Shared ORACLE_HOME
- Appendix G: Configuring ASM Disks
- Appendix H: Job Role Separation
- Appendix I: Configuring Parallel Concurrent Processing
- Appendix J: Known Issues
Section 1: Overview
Starting with Oracle E-Business Suite Release 12.2, there are two methods for configuring an Oracle E-Business Suite system for Oracle RAC. The traditional manual process uses tools such as DBCA, and is appropriate for existing Oracle E-Business Suite Release 12.2 non-RAC systems. For new Oracle E-Business Suite Release 12.2 systems, it is now possible to use Rapid Install to deploy a fully configured Oracle RAC system. Both methods share common steps, such as configuring Oracle Grid Infrastructure and shared storage. You should be familiar with Oracle Database 12c Release 1, and have a good knowledge of Oracle Real Application Clusters (Oracle RAC). When planning to set up Oracle Real Application Clusters and shared devices, refer to the Oracle Real Application Clusters Administration and Deployment Guide for the version you are using: 12c Release 1 (12.1), 12c Release 2 (12.2), or 19c.
1. Cluster Terminology
You should understand the terminology used in a cluster environment. Key terms include the following.
- Automatic Storage Management (ASM) is an Oracle database component that acts as an integrated file system and volume manager, providing the performance of raw devices with the ease of management of a file system. In an ASM environment, you specify a disk group rather than the traditional data file when creating or modifying a database structure, such as a tablespace. ASM then creates and manages the underlying files automatically.
- Cluster Ready Services (CRS) is the primary program that manages high availability operations in an Oracle RAC environment. The CRS process manages designated cluster resources, such as databases, instances, services, and listeners.
- Parallel Concurrent Processing (PCP) is an extension of the Oracle E-Business Suite Concurrent Processing architecture. PCP allows concurrent processing activities to be distributed across multiple nodes in an Oracle RAC environment, maximizing throughput and providing resilience to node failure. This is discussed in Appendix I: Configuring Parallel Concurrent Processing.
- Real Application Clusters (Oracle RAC) is an Oracle database technology that allows multiple machines to work on the same data in parallel, thereby significantly reducing processing time. An Oracle RAC environment also offers resilience if one or more machines become temporarily unavailable as a result of planned or unplanned downtime.
- Oracle Grid Infrastructure is the new unified ORACLE_HOME for both ASM and CRS. That is, in Oracle Database 12c Release 1 (12.1.0) the Grid Infrastructure install replaces the Clusterware install of earlier releases. For further information for Linux, refer to Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux, Oracle Grid Installation and Upgrade Guide Release 2 (12.2) for Linux, or Oracle Grid Infrastructure Installation and Upgrade Guide 19c for Linux.
Section 2: Environment
2.1 Software and Hardware Configuration
Refer to the relevant platform installation guides for supported hardware configurations: for example, Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux, Oracle Grid Installation and Upgrade Guide Release 2 (12.2) for Linux, or Oracle Grid Infrastructure Installation and Upgrade Guide 19c for Linux, and Oracle Database Administrator's Guide 12c Release 1 (12.1).
The software versions are as follows:
Component | Applicable Version(s) |
---|---|
Oracle E-Business Suite Release 12 | 12.2 (12.2.3 and later) |
Oracle Database | 12.1.0.2 |
Oracle Cluster Ready Services | 12.1.0.2, 12.2.0.1, 19c |
If you are planning to use Job Role Separation with ASM, then ensure that you have the latest Rapid Install StartCD. You can obtain the latest Oracle Database 12c Release 1 (12.1.0) software from: https://www.oracle.com/technology/software/products/database/index.html
2.2 ORACLE_HOME Nomenclature
This document refers to various ORACLE_HOMEs, as follows:
ORACLE_HOME | Purpose |
---|---|
SOURCE_ORACLE_HOME | Database ORACLE_HOME used by Oracle E-Business Suite Release 12. It can be any supported version. This is non-RAC. |
12c Release 1 (12.1.0) ORACLE_HOME | Database ORACLE_HOME installed for the Oracle 12c Release 1 (12.1.0) RAC Database. |
CRS ORACLE_HOME | ORACLE_HOME installed for the Oracle Database 12c Release 1 (12.1.0) or 12c Release 2 (12.2.0) Cluster Ready Services (Infrastructure home), or 19c Cluster Ready Services (Infrastructure home). |
Section 3: Install Oracle Grid Infrastructure 12c Release 1 (12.1), Release 2 (12.2), or 19c
If you want to use Oracle Grid Infrastructure 19c, you have the option of installing Oracle Grid Infrastructure 19c directly or upgrading an existing 11gR2 or 12c Grid Infrastructure to 19c. Since Oracle Linux 7 is the minimum requirement for Oracle Grid Infrastructure 19c, you may need to upgrade your operating system, or you can install Oracle Grid Infrastructure 19c on a new server. If you decide to use Oracle Grid Infrastructure 19c, skip this section (which covers installation of Oracle Grid Infrastructure 12c Release 1 or Release 2) and Appendix H (job role separation). Refer to Document 2676282.1, Installing and Upgrading Oracle Grid Infrastructure 19c for Oracle E-Business Suite Release 12.x Databases, for guidance on how to install or upgrade Oracle Grid Infrastructure 19c.
Follow this section if you are using the manual migration configuration method.
3.1 Check Network Requirements
In Oracle Database 12c Release 1, the Infrastructure install can be configured to specify address management via node addresses and names (as in earlier releases), or via the Grid Naming Service. Regardless of the choice, nodes must satisfy the following requirements:
- Each node must have at least two network adapters: one for the public network interface and one for the private network interface (interconnect).
- For the public network, each network adapter must support the TCP/IP protocol.
- For the private network, the interconnect must support the user datagram protocol (UDP) using high-speed network adapters, and switches that support TCP/IP (Gigabit Ethernet or better is recommended).
- To improve availability, backup public and private network adapters can be configured for each node. Multiple NICs can be bonded to form one interface, which is often called bondethN (where N is the interface number, e.g. bondeth0) and assigned an IP address.
- The interface names associated with the network adapter(s) for each network must be the same on all nodes.
- An IP address and associated host name for each public network interface must be registered in the DNS.
- One unused virtual IP address (VIP) and its associated virtual host name, registered in the DNS or resolvable in the hosts file (or both), to be configured for the primary public network interface. The virtual IP address must be in the same subnet as the associated public interface. After installation, clients can be configured to use either the virtual host name or the virtual IP address. If a node fails, its virtual IP address fails over to another node.
- A private IP address (and optionally a host name) for each private interface. Oracle recommends that you use private network IP addresses for these interfaces.
- Three additional virtual IP addresses (VIPs) and associated virtual host names for the SCAN listener, registered in the DNS.
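As a sketch of these requirements, a hypothetical two-node cluster might register entries like the following in DNS and the hosts file. All host names and addresses below are illustrative assumptions, not values from this document:

```text
# Public interfaces (registered in DNS)
192.0.2.11     dbhost1.example.com    dbhost1
192.0.2.12     dbhost2.example.com    dbhost2

# Virtual IPs (must be in the same subnet as the public interfaces)
192.0.2.21     dbhost1-vip.example.com    dbhost1-vip
192.0.2.22     dbhost2-vip.example.com    dbhost2-vip

# Private interconnect (private, non-routable addresses recommended)
192.168.10.1   dbhost1-priv
192.168.10.2   dbhost2-priv

# SCAN: three addresses behind a single name, registered in the DNS
192.0.2.31     dbcluster-scan.example.com
192.0.2.32     dbcluster-scan.example.com
192.0.2.33     dbcluster-scan.example.com
```

The three SCAN addresses resolve round-robin to a single SCAN name in DNS; for that reason the SCAN name is normally registered only in DNS rather than in a hosts file.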
For further information, refer to the "Configuring Networks for Oracle Grid Infrastructure and Oracle RAC" section in Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux or Oracle Grid Installation and Upgrade Guide Release 2 (12.2) for Linux.
Cluster time synchronization is required across all nodes, and the installation prerequisite checks may fail if ntpd is configured incorrectly. For instructions, refer to the "Setting Network Time Protocol for Cluster Time Synchronization" section in Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux or Oracle Grid Installation and Upgrade Guide Release 2 (12.2) for Linux.
3.2 Verify Kernel Parameters
As part of the Infrastructure install, the preinstallation process checks the kernel parameters and, if necessary, creates a "fixup" script that corrects most of the common kernel parameter issues. Follow the installation instructions for running this script. Hardware and OS requirements are detailed in the "Oracle Grid Infrastructure Installation Checklist" section in Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux or Oracle Grid Installation and Upgrade Guide Release 2 (12.2) for Linux.
3.3 Set Up Shared Storage
The available shared storage options are ASM or a shared file system (clustered or NFS). The use of raw disk devices is supported only for upgrades. These storage options are described in the "Configuring Storage for Oracle Grid Infrastructure for a Cluster and Oracle RAC" section in Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux or Oracle Grid Installation and Upgrade Guide Release 2 (12.2) for Linux.
3.3.1 ASM Configuration
- Install the required libraries for ASM. For more information, refer to "Appendix F: How to Complete Installation Prerequisite Tasks Manually" in Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux or Oracle Grid Installation and Upgrade Guide Release 2 (12.2) for Linux.
- Ensure that ASM disks are configured properly using oracleasm. For more information on using oracleasm commands, refer to Appendix G: Configuring ASM Disks.
- Note that the Grid Infrastructure installation creates a single disk group. Currently, Rapid Install only supports a single disk group.
Prior to Oracle Database 12c, it was mandatory to run an ASM instance on every node of a cluster. All the ASM instances running on the cluster formed an ASM cluster and communicated with each other, allowing the database clients running on the cluster to access the shared disk groups. If the ASM instance running on any node of the cluster failed, the database instances associated with the failed ASM instance also failed. In Oracle Database 12c, you no longer need to run an ASM instance on every node in the cluster. Oracle Flex ASM enables an Oracle ASM instance to run on a separate physical server from the database servers, and several Oracle ASM instances can be clustered to support a large number of database clients. Oracle Flex ASM enables database clients to connect to ASM instances running on remote nodes. If any ASM instance goes down, database clients connected to the failed ASM instance will reconnect to one of the available ASM instances.
The minimum number of ASM instances (the ASM cardinality) can be set using the following command:
$ srvctl modify asm -count <required_asm_cardinality>
You no longer need to run an ASM instance on every node in the cluster. For example, if you have a cluster with eight nodes and set the ASM cardinality to three, three ASM instances will run on any three nodes of the cluster. If a server running an ASM instance fails, Oracle Clusterware will start a new ASM instance on a different server to maintain the cardinality. If an Oracle Database 12c instance is using a particular ASM instance, and that instance is lost because of a server crash or ASM instance failure, the Oracle Database 12c instance will reconnect to an existing ASM instance on another node. Oracle Clusterware ensures that the ASM cardinality on a cluster is always maintained. It also ensures that the database instances associated with a failed ASM instance are reconnected to one of the available ASM instances on the same cluster. For more information, refer to Oracle Automatic Storage Management Administrator's Guide 12c Release 1 (12.1) and Oracle Database 2 Day + Real Application Clusters Guide 12c Release 1 (12.1), or Oracle Automatic Storage Management Administrator's Guide for Grid Infrastructure Release 12.2.
The following diagram shows the difference between ASM (one-to-one mapping), as used with previous versions of the database, and Flex ASM (one-to-many mapping).
Use the following command to check if Oracle Flex ASM is enabled:
$ asmcmd
ASMCMD> showclustermode
ASM cluster : Flex mode enabled
Use the following command to check the status of ASM instances:
In this example, the command is executed on dbhost1; the same host is used in the other examples in this section.
$ srvctl status asm -detail
ASM is running on dbhost1,dbhost2,dbhost3
ASM is enabled.
$ srvctl config asm -detail
ASM home: <CRS home>
Password file: +DATA/orapwASM
ASM listener: LISTENER
ASM is enabled.
ASM is individually enabled on nodes:
ASM is individually disabled on nodes:
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM
Use the following command to change the cardinality of the ASM:
$ srvctl modify asm -count 2
$ srvctl config asm -detail
ASM home: <CRS home>
Password file: +DATA/orapwASM
ASM listener: LISTENER
ASM is enabled.
ASM is individually enabled on nodes:
ASM is individually disabled on nodes:
ASM instance count: 2
Cluster ASM listener: ASMNET1LSNR_ASM
$ srvctl status asm -detail
ASM is running on dbhost1,dbhost2
ASM is enabled.
SQL> select instance_number, instance_name, host_name from gv$instance;
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME
--------------- ---------------- ------------------------------
1 orcl1 dbhost1
2 orcl2 dbhost2
3 orcl3 dbhost3
In this case, you can see that the ASM cardinality was reduced to 2, but the database instances are still running on all of the nodes.
Use the following command to force the ASM instance on a specific node to shut down:
$ srvctl stop asm -node dbhost1 -stopoption abort -force
$ srvctl status asm -detail
ASM is running on dbhost2
ASM is enabled.
$ ps -ef | grep pmon
oracle 3813 1 0 17:40 ? 00:00:00 mdb_pmon_-MGMTDB
oracle 5806 1 0 17:42 ? 00:00:00 ora_pmon_orcl1
Note: In the above example, a database instance is associated with a specific ASM instance running on the same node. If it is not possible to bring up the ASM instance on that node, the database instance can still be started on that node as it will connect to any of the available ASM instances running on any of the remote nodes.
3.3.2 Shared File System
Ensure that the database directory is mounted with the mount options defined in My Oracle Support Knowledge Document 359515.1.
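As an illustrative sketch only (the authoritative mount options are those given in Document 359515.1 for your platform and storage), an NFS mount entry for a Linux datafile volume often takes a form such as:

```text
# /etc/fstab -- hypothetical example; the server name and paths are placeholders.
# Verify the exact mount options for your platform against Document 359515.1.
nfs-server:/export/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0
```

The hard-mount and actimeo=0 options matter for database files: soft mounts or attribute caching can corrupt or confuse Oracle file access across nodes.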
3.4 Check Account Setup
- Configure the oracle account's environment for Oracle Clusterware and Oracle Database 12c Release 1 (12.1.0), as per the "Configuring Users, Groups and Environments for Oracle Grid Infrastructure and Oracle RAC" section in Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux or Oracle Grid Installation and Upgrade Guide Release 2 (12.2) for Linux.
3.5 Configure Secure Shell on All Cluster Nodes
Secure Shell configuration is covered in detail in both the Oracle Real Application Clusters Installation Guide 12c Release 1 (12.1) for Linux and UNIX and Oracle Grid Infrastructure Installation Guide (Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux or Oracle Grid Installation and Upgrade Guide Release 2 (12.2) for Linux). The Oracle Database 12c Release 1 (12.1.0) installer now provides the option to automatically set up passwordless ssh connectivity. Therefore, unlike previous releases, manual setup of Secure Shell is not necessary.
For further details on manual setup of passwordless ssh, refer to "Appendix F: How to Complete Installation Prerequisite Tasks Manually" in Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux or Oracle Grid Installation and Upgrade Guide Release 2 (12.2) for Linux.
3.6 Run the Cluster Verification Utility (CVU)
The installer automatically runs the Cluster Verification Utility and provides a fixup script for any OS issues. However, to check for potential issues in advance, you can also run CVU prior to installation:
- Install the cvuqdisk package as detailed in the "Installing the cvuqdisk RPM for Linux" section in Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux or Oracle Grid Installation and Upgrade Guide Release 2 (12.2) for Linux.
- Use the following command to determine which preinstallation steps have been completed, and which still need to be performed:
$ <12c Grid Software Stage>/runcluvfy.sh stage -pre crsinst -n <node_list>
In this and the following commands, substitute <12c Grid Software Stage> with the stage location on your system, and <node_list> with the names of the nodes in your cluster, separated by commas. To identify and resolve issues at this stage (rather than during the installation), consider adding the -fixup and -verbose options to the above command.
- Use the following command to check the networking setup with CVU:
$ <12c Grid Software Stage>/runcluvfy.sh comp nodecon -n <node_list> [-verbose]
- Use the following command to check operating system requirements with CVU:
$ <12c Grid Software Stage>/runcluvfy.sh comp sys -n <node_list> -p {crs|database} \
-osdba osdba_group -orainv orainv_group -verbose
3.7 Install Oracle Clusterware 12c Release 1 or 12c Release 2
- Use the same oraInventory location that was created during the installation of Oracle E-Business Suite Release 12, and make a backup of oraInventory prior to the installation.
- Start runInstaller from the Oracle Clusterware 12c Release 1 staging area, and install as per your requirements. For further information, refer to the "Installing Oracle Grid Infrastructure for a Cluster" section in Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux or Oracle Grid Installation and Upgrade Guide Release 2 (12.2) for Linux.
Note: Customers who have an existing Grid Infrastructure installation tailored to their requirements can skip this step. Customers who do not, who require further information, or who are doing a test install, should refer to Appendix C: An Example Grid Installation for an example walkthrough.
- Confirm Oracle Clusterware operation:
- After installation, log in as root, and use the following command to confirm that your Oracle Clusterware installation is running correctly:
$ <CRS_HOME>/bin/crs_stat -t -v
- Successful Oracle Clusterware operation can also be verified using the following command:
$ <CRS_HOME>/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
Section 4: Migrate Oracle E-Business Suite Release 12.2 on a Single Database Instance to an Oracle RAC Database
This section explains how to migrate Oracle E-Business Suite Release 12.2 running on a single database (non-RAC) instance to an Oracle Real Application Cluster environment.
This topic is divided into the following sections:
- 4.1: Configuration Prerequisites
- 4.2: Install the Oracle Database Software 12c Release 1 (12.1.0) and Upgrade the Oracle E-Business Suite Database to 12c Release 1 (12.1.0)
- 4.3: Configure Shared Storage
- 4.4: Listener Configuration in 12c Release 1 (12.1)
- 4.5: Convert the Oracle 12c Database to Oracle RAC
- 4.6: Enable AutoConfig on All Database Nodes
- 4.7: Establish the Oracle E-Business Suite Environment for Oracle RAC
4.1: Configuration Prerequisites
The prerequisites for migrating a database to Oracle RAC are as follows:
- An existing Oracle E-Business Suite Release 12.2 non-RAC system.
- Ensure that your data files are on shared storage. If your data files are on local storage, then move them to shared storage and re-create the control files.
- Ensure that you have completed Section 3 of this document.
- Ensure that you have applied all relevant application patches, as detailed in My Oracle Support Knowledge Document 1926201.1, Interoperability Notes Oracle EBS 12.2 with Oracle Database 12c Release 1.
4.2: Install the Oracle Database Software 12c Release 1 (12.1.0) and Upgrade the Oracle E-Business Suite Database to 12c Release 1 (12.1.0)
Make a backup of the oraInventory directory before starting this stage, in which you will run the Oracle Universal Installer (runInstaller) to perform the Oracle RAC database installation. In the Cluster Nodes window, select all the nodes to be included in your Oracle RAC cluster.
4.2.1 Install Oracle Database Software 12c Release 1 with RAC Option
Make a backup of the oraInventory directory before starting this stage, in which you will run the Oracle Universal Installer (runInstaller) to carry out an Oracle Database installation with Oracle RAC.
4.2.1.1 Install Oracle Database Software 12c Release 1 (12.1.0.2.0)
To install the Oracle Database software, select all the nodes to be included in the cluster. Ensure that the database software is installed on all nodes in the cluster.
4.2.1.2 Apply All Required Database Patches
Ensure that all the database patches are applied, as listed in My Oracle Support Knowledge Document 1926201.1, Interoperability Notes Oracle EBS 12.2 with Oracle Database 12c Release 1.
To install the Oracle Database 12c Release 1 software and upgrade an existing database to 12c Release 1, refer to the Interoperability Notes Document 1926201.1.
Follow all the instructions and steps listed in Document 1926201.1 except for the following, which will not be performed:
- Start the new database listener (Conditional)
- Implement and run AutoConfig
- Restart Applications server processes
Note that the following steps from Interoperability Notes Document 1926201.1 cannot be done at this point, because there is no database listener yet. You will perform these steps after you have converted your system to Oracle RAC and configured the application tier, as described in Section 4.7: Establish the Oracle E-Business Suite Environment for Oracle RAC.
- Step-25: Validate Workflow ruleset
- Step-30: Apply post-upgrade WMS patch (conditional)
- Step-35: Synchronize Workflow views
4.3: Configure Shared Storage
This document does not discuss the setup of shared storage, as there are no Oracle E-Business Suite-specific tasks in setting up ASM, NFS (NAS), or clustered storage.
- For ASM, refer to Oracle Automatic Storage Management Administrator's Guide 12c Release 1 (12.1).
- For configuring shared storage, refer to the "Configuring Storage for Oracle Grid Infrastructure for a Cluster and Oracle RAC" section in Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux, Oracle Grid Installation and Upgrade Guide Release 2 (12.2) for Linux, or Oracle Grid Infrastructure Installation and Upgrade Guide 19c for Linux.
4.4: Listener Configuration in 12c Release 1 (12.1)
The Listener configuration requires careful attention when converting an Oracle Database to Oracle RAC.
There are two types of listener in Oracle 12c Release 1 Clusterware: the SCAN listener and general database listeners. The SCAN listener provides a single named access point for clients, and replaces the use of Virtual IP addresses (VIP) in client connection requests (tnsnames.ora aliases). However, connection requests can still be routed via the VIP name, as both access methods are fully supported.
To start or stop a listener using srvctl, the following three configuration components are required:
- An Oracle Home from which to run lsnrctl.
- The listener.ora file under the TNS_ADMIN network directory.
- The listener name (defined in listener.ora) used to start and stop the service.
The Oracle Home can be either the Infrastructure home or a Database home. The TNS_ADMIN directory can be any accessible directory. The listener name must be unique within the listener.ora file. For further information, refer to the "Oracle Net Services Configuration for Oracle RAC Databases" section in Oracle Real Application Clusters Installation Guide 12c Release 1 (12.1) for Linux and UNIX, Oracle Real Application Clusters Installation Guide 12c Release 2 (12.2) for Linux and UNIX, or Oracle Real Application Clusters Installation Guide 19c for Linux and UNIX.
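For illustration, a minimal listener.ora defining a uniquely named listener of the kind described above might look like this. The listener name, host, port, and path below are assumed placeholders, not values from this document:

```text
# listener.ora -- hypothetical sketch; substitute your own names and values.
VIS1_LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost1-vip.example.com)(PORT = 1521))
    )
  )

SID_LIST_VIS1_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
      (SID_NAME = VIS1)
    )
  )
```

With TNS_ADMIN pointing at the directory containing this file, a command such as lsnrctl start VIS1_LISTENER would start this listener.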
Three issues must be considered:
- Listener configuration in Oracle 12c Clusterware
- Listener requirements for converting Oracle Database to Oracle RAC
- Listener requirements for AutoConfig
For a more detailed explanation of how instances interact with listeners, refer to Appendix E: Instance and Listener Interaction.
For enabling and disabling SCAN Listener support in AutoConfig with Oracle E-Business Suite 12.2, refer to Appendix D: Enabling and Disabling SCAN Listener Support in AutoConfig.
4.4.1 Listener Requirements for Converting Oracle Database to Oracle RAC
Tools such as rconfig, dbca, and dbua impose additional restrictions on the choice of listener: the listener must be the default listener, and it must run from the Grid home. If the default listener is not set up for rconfig, configure it using the following command:
$ srvctl add listener -p <EBS Database port>
The default listener port can be set to the Oracle E-Business Suite database port with the -p option.
If the default listener is running out of the Grid home, the port can be modified using the following command:
$ srvctl modify listener -l LISTENER -p <EBS Database port>
After the conversion, you can reconfigure the listener as required.
4.4.2 Listener Requirements for AutoConfig
AutoConfig supports the use of either a named VIP database listener, or the named VIP database listener along with the SCAN listeners. Use of the CRS Grid Listener for database connections is not supported.
4.5: Convert the Oracle 12c Database to Oracle RAC
There are three options for converting Oracle Database to Oracle RAC, which are detailed in the "Converting Single-Instance Oracle Databases to Oracle RAC and Oracle RAC One Node" section of Oracle Real Application Clusters Administration and Deployment Guide 12c Release 1 (12.1), Oracle Real Application Clusters Administration and Deployment Guide 12c Release 2 (12.2), or Oracle Real Application Clusters Administration and Deployment Guide 19c.
- DBCA
- rconfig
- Enterprise Manager
Any of these tools will convert a database to Oracle RAC, so choose the one you are most familiar with.
Prerequisites for conversion are as follows:
- A clustered Grid Infrastructure install with at least one SCAN listener address.
- The default listener running from the Grid Infrastructure home. The default port can be used, or another port can be specified during the Grid Infrastructure install, as rconfig and dbca will look for the port number that they find in CRS.
- A 12c ORACLE_HOME installed on all nodes in the cluster.
- Shared storage: the database files can already be on shared storage (CFS or ASM), or can be moved to ASM as part of the conversion, as described in Section 3.3: Set Up Shared Storage.
This document describes conversion using rconfig. The steps involved in an Admin Managed rconfig conversion are as follows:
- As the oracle user, navigate to the 12c directory $12c_ORACLE_HOME/assistants/rconfig/sampleXMLs, and open the sample file ConvertToRAC_AdminManaged.xml using a text editor such as vi. This XML sample file contains comment lines that provide instructions on how to edit the file for your specific configuration.
- Make a copy of the sample ConvertToRAC_AdminManaged.xml file, and modify the parameters as necessary. Keep a note of the name of your modified copy.
Note: Study the example file (and associated notes) in Appendix A: A Sample Config XML file before you edit your own file and run rconfig.
- Execute rconfig using the convert option convert verify="ONLY" prior to performing the actual conversion. Although this step is optional, it is highly recommended, as the test validates the parameters and identifies any issues that need to be corrected before the conversion takes place.
Note: Specify the SourceDBHome variable in ConvertToRAC_AdminManaged.xml as the non-RAC Oracle Home (<SOURCE_ORACLE_HOME>). If you wish to specify the NEW_ORACLE_HOME, start the database from the new Oracle Home.
start the database from the new Oracle Home. - Navigate to
$12c_ORACLE_HOME/bin
, and runrconfig
as follows:
Note: Before runningrconfig
, make sure that thelocal_listener
initialization parameter is set toNULL
.
If necessary, thelocal_listener
can be unset by running the following command:SQL>alter system reset local_listener sid='*' scope=both;
Note:rconfig
is currently not certified with Oracle Flex ASM and so you will need to use the following workaround to convert a single instance database to Oracle RAC. Use the following command to set the ASM cardinality to ALL:$ srvctl modify asm -count ALL
The$ ./rconfig <path to rconfig XML file created in Step 2 of this list>
rconfig
command will perform the following tasks:- Migrate the database to ASM storage (if ASM is specified as the storage option in the configuration XML file).
- Create database instances on all nodes in the cluster.
- Configure the listener and NetService entries.
- Configure and register the CRS resources.
- Start the instances on all nodes in the cluster.
4.5.1 Post-Migration Steps
4.5.1.1 Disable Archivelog Mode (Optional)
The conversion tools may change some configuration options. Most notably, your database will now be in archivelog mode, regardless of whether it was before the conversion. If you do not want to run in archivelog mode, perform the following steps:
- Shut down all the RAC instances and mount only one instance, using the
startup mount
command. - Use the command
alter database noarchivelog
to disable archiving. - Shut down the database using the
shutdown immediate
command. - Start up the database using the
startup
command.
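The four steps above can be combined into a single SQL*Plus session. This is a hedged sketch only: it assumes all other RAC instances have already been stopped (for example, with srvctl), and that you are connected as SYSDBA.

```sql
-- Sketch only: run on one instance after shutting down all others.
STARTUP MOUNT;                 -- step 1: mount a single instance
ALTER DATABASE NOARCHIVELOG;   -- step 2: disable archiving
SHUTDOWN IMMEDIATE;            -- step 3: clean shutdown
STARTUP;                       -- step 4: restart normally
```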
For further details of how to control archiving, refer to Oracle Database Administrator's Guide 12c Release 1 (12.1).
4.5.1.2 Listener Configuration
When converting Oracle Database to Oracle RAC, rconfig
will use the Grid Local Listener (LISTENER) for the actual conversion.
At this point, it is recommended that you use Oracle E-Business Suite local listeners along with the SCAN listener. Configure the Oracle E-Business Suite local listeners on all nodes using the same port as used for the single instance.
4.5.1.2.1 Configure EBS Local Listener (on all nodes in the cluster)
- Create a
<CONTEXT_NAME>
directory under<12c_ORACLE_HOME>/network/admin
, using the new instance name. For example, if your database name isVISRAC
and you want to use "vis
" as the instance prefix, create the<CONTEXT_NAME>
directory asvis1_<hostname>
. - Copy
listener.ora
andtnsnames.ora
from<OLD Oracle Home>/network/admin/<context directory>
to$ORACLE_HOME/network/admin
/<CONTEXT_NAME>.
- Modify the listener and tnsnames to point to the new
ORACLE_HOME
andSID
. Ensure that the<sid>_Local
alias exists intnsnames.ora
; otherwise, create the alias. - Resource the environment to point to the new
ORACLE_HOME
andTNS_ADMIN
directories. - Start the listeners on all nodes.
- Log on to the database and set the local listener parameter to
<sid>_local;
repeat the following on all nodes for each instance in the cluster:$ sqlplus / as sysdba
SQL>alter system set local_listener='<SID>_local' scope=both sid='<SID>';
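For reference, the <sid>_local alias being pointed to typically has the following shape in tnsnames.ora. The alias name, virtual host name, and port below are illustrative placeholders, not values from your system:

```
VIS1_LOCAL=
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=node1-vip.example.com)(PORT=1551))
  )
```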
4.6. Enable AutoConfig on All Database Nodes
4.6.1 Steps to Perform On All Oracle RAC Nodes
- Ensure that you have applied the Oracle E-Business Suite patches listed in the Prerequisites section.
- Execute
$AD_TOP/bin/admkappsutil.pl
on the applications tier to generate anappsutil.zip
file for the database tier. - Copy (i.e. via ftp) the
appsutil.zip
file to the database tier12c_ORACLE_HOME
. - Unzip the
appsutil.zip
file to create theappsutil
directory in the12c_ORACLE_HOME
- Copy the jre directory from
<SOURCE_ORACLE_HOME>/appsutil
to<12c_ORACLE_HOME>/appsutil.
- Create a
<CONTEXT_NAME>
directory under<12c_ORACLE_HOME>/network/admin
. Use the new instance name when creating the context directory. Normally the database name and instance prefix are the same, but if you want the instance prefix to differ from the database name, create the<CONTEXT_NAME>
directory asSID1_<hostname>.
For example, if your database name is VISRAC and you want to use "vis" as the instance prefix, create the<CONTEXT_NAME>
directory asvis1_<hostname>.
- Set the following environment variables:
ORACLE_HOME =<12c_ORACLE_HOME>
LD_LIBRARY_PATH = <12c_ORACLE_HOME>/lib:<12c_ORACLE_HOME>/ctx/lib
ORACLE_SID = <instance name for current database node>
PATH= $PATH:$ORACLE_HOME/bin
TNS_ADMIN = $ORACLE_HOME/network/admin/<context_name> - Copy the listener.ora file from the old
ORACLE_HOME
to the new$TNS_ADMIN
directory and modify theORACLE_HOME
to12c_ORACLE_HOME.
- Copy the tnsnames.ora file from the old
$ORACLE_HOME/network/admin
to the new$TNS_ADMIN
directory, and edit the aliases forSID=<new RAC instance name>
and<SID>_local.
- If the EBS listener is running from the old
$ORACLE_HOME
, then stop it. Start the listener from the new$ORACLE_HOME
. - As the APPS user, run the following command to deregister the current configuration:
SQL>exec fnd_conc_clone.setup_clean;
- From the 12c
ORACLE_HOME/appsutil/bin
directory, create an instance-specific XML context file by executing the command:$ perl adbldxml.pl appsuser=<APPSuser> appspass=<APPSpwd>
Enable SCAN and provide the SCAN name and SCAN port when prompted.
- Set the value of
s_virtual_hostname
to point to the virtual host name for the database host, by editing the database context file$ORACLE_HOME/appsutil/<sid>_hostname.xml.
- From the 12c
ORACLE_HOME/appsutil/bin
directory, execute AutoConfig on the database tier by running theadconfig.pl
script. - If you prefer to manage the listener using srvctl, follow the steps in Section 4.6.4.
- To ensure all AutoConfig TNS aliases are set up to recognize all available nodes, re-run AutoConfig on all nodes.
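The "Set the following environment variables" step above can be collected into a small shell fragment. Every path and name in this sketch is an illustrative placeholder; substitute your own Oracle Home path, instance name, and context directory.

```shell
# Hedged sketch: environment for the new Oracle Home on one RAC node.
# All values below are placeholders, not values from any real system.
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
export ORACLE_SID=vis1                                  # instance for this node
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/ctx/lib
export PATH=$PATH:$ORACLE_HOME/bin
export TNS_ADMIN=$ORACLE_HOME/network/admin/vis1_node1  # <context_name> directory
```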
4.6.2 Shut Down Instances and Listeners
Use the following commands:
$ srvctl stop listener
$ srvctl stop database -d <database>
4.6.3 Update the Server Parameter File Settings
After the Oracle RAC conversion, you will have a central server parameter file (spfile
).
It is important to understand the Oracle RAC specific changes that were introduced by AutoConfig, and to ensure that the context file in $ORACLE_HOME/appsutil/<sid>_hostname.xml
is in sync with the database initialization parameters.
The affected parameters are listed in the Oracle RAC template under 12c_ORACLE_HOME/appsutil/template/afinit_db121RAC.ora
. They are also listed below. Many will have been set by the conversion, and others are likely to be set by customers for non-RAC related reasons.
service_names
- Oracle E-Business Suite customers may well have a variety of services already set. You must ensure thatservice_names
includes%s_dbService% [database name]
across all instances.local_listener
- If you are usingSRVCTL
to manage your database, the installation guides recommend leaving this unset as it is dynamically set during instance startup. However, using the AutoConfig<instance>_local
alias will also work. If you are using a non-default listener, then this parameter must be set to<instance>_local.
remote_listener
- If you are using AutoConfig to manage your connections, then theremote_listener
must be set to the<database>_remote
AutoConfig alias.
cluster_database
cluster_database_instances
undo_tablespace
instance_name
instance_number
thread
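The parameters listed above can be illustrated with a sketch of server parameter file entries for a hypothetical two-instance database named VIS with instance prefix vis. Every value here is an example only, not a recommendation for any real system:

```
*.cluster_database=TRUE
*.cluster_database_instances=2
*.service_names='VIS'
*.remote_listener='VIS_REMOTE'
vis1.instance_name='vis1'
vis1.instance_number=1
vis1.thread=1
vis1.undo_tablespace='UNDOTBS1'
vis1.local_listener='VIS1_LOCAL'
vis2.instance_name='vis2'
vis2.instance_number=2
vis2.thread=2
vis2.undo_tablespace='UNDOTBS2'
vis2.local_listener='VIS2_LOCAL'
```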
4.6.4 Update the New listener.ora
If you intend to manage an Oracle E-Business Suite database with SRVCTL
, you must perform the following additional steps:
TNS_ADMIN
cannot be shared as the directory path must be same on all nodes. Refer to Appendix F: Considerations When Using a Shared ORACLE_HOME for an example of how to use SRVCTL
to manage listeners in a shared Oracle Home configuration.- If you wish to use the port allocated to the default listener, stop and remove the default listener.
- Add an Oracle E-Business Suite listener using the following commands:
$ srvctl add listener -l listener_<name> -o <12c_ORACLE_HOME> -p <port>
$ srvctl setenv listener -l listener_<name> -T TNS_ADMIN=$ORACLE_HOME/network/adminNote: If registering the listener with Cluster Services fails with a CRS-0254 authorization failure error, refer to the Known Issues section. - On each node, add the AutoConfig
listener.ora
file as anifile
in$ORACLE_HOME/network/admin/listener.ora
.
- On each node, add the AutoConfig
tnsnames.ora
file as anifile
in$ORACLE_HOME/network/admin/tnsnames.ora
.
- On each node, add the AutoConfig
sqlnet.ora
file as anifile
in$ORACLE_HOME/network/admin/sqlnet.ora.
- Add
TNS_ADMIN
to the database using the following command:$ srvctl setenv database -d <database_name> -t TNS_ADMIN=$ORACLE_HOME/network/admin
- Add
ORA_NLS10
to the database using the following command:$ srvctl setenv database -d <database_name> -t ORA_NLS10=$ORACLE_HOME/nls/data/9idata
- Start up the database instances and listeners on all nodes. The database can now be managed via
SRVCTL
.
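For the ifile steps above, each file under $ORACLE_HOME/network/admin simply includes the AutoConfig-generated file from the context directory. For example (the Oracle Home path and context directory name are placeholders):

```
# $ORACLE_HOME/network/admin/tnsnames.ora
ifile=/u01/app/oracle/product/12.1.0/dbhome_1/network/admin/vis1_node1/tnsnames.ora
```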
4.7. Establish the Oracle E-Business Suite Environment for Oracle RAC
4.7.1 Preparatory Steps
If the patching cycle has not completed, run the following command as the owner of the source administration server to complete any in-progress adop session:
$ adop phase=prepare,cutover
- Source the Oracle E-Business Suite environment.
- On both the Run and Patch file systems, edit
INSTANCE_NAME=<RAC Instance Name>
andPORT=<New listener port>
(if changed) in$TNS_ADMIN/tnsnames.ora
to set up a connection to one of the instances in the Oracle RAC environment for all aliases, including the <sid>_patch alias.
- Confirm you are able to connect to one of the instances in the Oracle RAC environment from the RUN file system.
- On the Run and Patch file systems, edit the context variable
s_apps_jdbc_patch_connect_descriptor
: modify the PORT value, if it has changed, and set INSTANCE_NAME to one of the RAC instances in theCONNECT_DATA
parameter value in the context file. For example:<patch_jdbc_url oa_var="s_apps_jdbc_patch_connect_descriptor"> jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=<host name>)(PORT=<dbport>))(CONNECT_DATA=(SERVICE_NAME=ebs_patch)(INSTANCE_NAME=<RAC Instance Name>))) </patch_jdbc_url>
$ $AD_TOP/bin/adconfig.sh contextfile=$INST_TOP/appl/admin/<context_file>
Note: AutoConfig will fail because adgentns.pl requires a Patch Edition, which does not exist yet. Ignore this error.
For more information on AutoConfig, refer to the "Technical Configuration" section in the Oracle E-Business Suite Setup Guide Release 12.2.
- Edit all aliases ($TWO_TASK and SID_PATCH) in tnsnames.ora on the Patch file system to set up a connection to one of the RAC instances.
- Execute
adop phase=prepare,cutover
on the Run file system to sync the Run and Patch file systems.
- Check the
$INST_TOP/admin/log/<MMDDhhmm>
AutoConfig log file for errors.
- Source the environment by using the latest environment file generated.
- Verify the
tnsnames.ora
andlistener.ora
files in the$INST_TOP/ora/10.1.2/network/admin
directory (in both Run and Patch file systems), and the jdbc URL in$FMW_HOME/user_projects/domains/EBS_domain_<sid>/config/jdbc/EBSDataSource-<num>-jdbc.xml
. Ensure that the correct TNS aliases have been generated for load balancing and failover in each of these files, and that all the aliases are defined using the virtual host names.
- In the
dbc
file located at$FND_SECURE
, ensure that the parameterAPPS_JDBC_URL
is configured with all of the instances in the environment, and thatload_balance
is set toYES
.
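For illustration, a load-balanced APPS_JDBC_URL entry in the dbc file has the following general shape. The host names, port, and service name here are placeholders:

```
APPS_JDBC_URL=jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=node1-vip.example.com)(PORT=1551))(ADDRESS=(PROTOCOL=tcp)(HOST=node2-vip.example.com)(PORT=1551)))(CONNECT_DATA=(SERVICE_NAME=vis)))
```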
4.7.2 Configure Load Balancing
The steps in this section describe how to implement load balancing for the Oracle E-Business Suite database connections.
Implement load balancing across the Oracle E-Business Suite database connections:
- Using the Context Editor (via the Oracle Applications Manager interface), modify the variables as follows:
- To load balance the Forms-based application database connections, set the value of "Tools OH TWO_TASK" (
s_tools_twotask
) to point to the<database_name>_balance
alias generated in thetnsnames.ora
file. - To load balance the Self-Service (HTML-based) application database connections, set the value of "iAS OH TWO_TASK" (
s_weboh_twotask
) and "Apps JDBC Connect Alias" (s_apps_jdbc_connect_alias
) to point to the<database_name>_balance
alias generated in thetnsnames.ora
file.
- To load balance the Forms-based application database connections, set the value of "Tools OH TWO_TASK" (
- Execute AutoConfig on the Run file system by running the command:
$ $AD_TOP/bin/adconfig.sh contextfile=$INST_TOP/appl/admin/<context_file>
- Restart the Oracle E-Business Suite processes using the new scripts generated by AutoConfig.
- Ensure that the value of the profile option "Application Database ID" is set to the dbc file name generated in
$FND_SECURE
.
Note: If you are adding a new node to the application tier, repeat all the above steps to set up load balancing on the new application tier node.
4.7.3 Finalize the Skipped Steps from the Interoperability Note
The following steps were skipped in section 4.2.1.2. Complete the following steps from the Interoperability note (Document 1926201.1):
- Step-25: Validate Workflow ruleset
- Step-30: Apply post-upgrade WMS patch (conditional)
- Step-35: Synchronize Workflow views
Section 5: Using Rapid Install to Install Oracle E-Business Suite Release 12.2 on a RAC Database
The Rapid Install included with Oracle E-Business Suite Release 12.2 introduces the ability to specify an Oracle Real Application Clusters (Oracle RAC) database. There are several configuration options, including, for example, a shared Oracle Home, and ASM or shared file system database storage.
This section covers the following topics:
- 5.1: Configuration Prerequisites
- 5.2: Installing Oracle E-Business Suite Release 12.2 with Oracle Database 12cR1 on Cluster Nodes
- 5.3 Troubleshooting and Known Issues
5.1: Configuration Prerequisites
Oracle E-Business Suite Release 12.2 with an Oracle Real Application Clusters (RAC) database supports a number of different configurations. Each of the following sections contains the prerequisites for each of the options:
- 5.1.1 Cluster Prerequisites
- 5.1.2 Shared File System Prerequisites
- 5.1.3 Shared Oracle Home Prerequisites
- 5.1.4 Database Software Install Prerequisites
5.1.1 Cluster Prerequisites
- Ensure that you have already installed the Oracle Grid infrastructure, as per Section 3: Install Oracle Grid Infrastructure 12c Release 1 (12.1), Release 2 (12.2), or 19c.
- Ensure that cluster services are up and running using the following command. In particular, check for the SCAN_LISTENER and LISTENER.
$ $CRS_HOME/bin/crs_stat
5.1.2 Shared File System Prerequisites
The shared storage configuration has been discussed earlier in sections "3.3 Set Up Shared Storage" and "4.3: Configure Shared Storage".
For ASM, verify that the disk group is created and sized appropriately for the Oracle E-Business Suite database install. Ensure that the FREE_MB
value as shown in the following example is greater than or equal to the current size of your database, and allows for future expansion.
$CRS_HOME/bin/asmcmd
ASMCMD> lsdg
State | Type | Rebal | Sector | Block | AU | Total_MB | Free_MB | Req_mir_free_MB | Usable_file_MB | Offline_disks | Voting_files | Name |
---|---|---|---|---|---|---|---|---|---|---|---|---|
MOUNTED | EXTERN | N | 512 | 4096 | 1048576 | 244995 | 244547 | 0 | 244547 | 0 | Y | DATA |
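As a hedged illustration of the FREE_MB check above, the following shell fragment parses a sample lsdg row. In practice you would feed it the corresponding line of asmcmd lsdg output; the diskgroup name, required size, and sample figures are all assumptions.

```shell
# Check that the DATA diskgroup has enough free space for the database.
# The sample line below mimics one data row of "asmcmd lsdg" output;
# replace it with the real row from your ASM instance.
required_mb=200000
lsdg_sample="MOUNTED EXTERN N 512 4096 1048576 244995 244547 0 244547 0 Y DATA"
free_mb=$(echo "$lsdg_sample" | awk '$13 == "DATA" {print $8}')
if [ "$free_mb" -ge "$required_mb" ]; then
  echo "DATA diskgroup OK: ${free_mb} MB free"
else
  echo "DATA diskgroup too small: ${free_mb} MB free"
fi
```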
5.1.3 Shared Oracle Home Prerequisites
Rapid Install supports a shared Oracle Home install. The Oracle Home directory must be mounted on all of the RAC nodes in the cluster. Rapid Install checks the availability of that directory during the install.
If you plan to use Shared Oracle Home, ensure that the database directory is mounted with required mount options and shared across all of the cluster nodes, as detailed in My Oracle Support Knowledge Document 359515.1, Mount Options for Oracle files when used with NFS on NAS devices.
For example, the mount options for Linux x86-64 are as follows:
rw,bg,hard,nointr,rsize=32768, wsize=32768,tcp,vers=3, timeo=600, actimeo=0
5.1.4 Database Software Install Prerequisites
Note: If you are planning to install Oracle E-Business Suite with a user other than the grid
user, ensure that the asmadmin
and asmdba
groups are assigned to the grid
user. Refer to the "About Job Role Separation Operating System Privileges Groups and Users" section in Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux or Oracle Grid Installation and Upgrade Guide Release 2 (12.2) for Linux.
Refer to Appendix H: Job Role Separation for an example of creating role separated users and groups.
- If you are planning to use a different user (other than the
grid
user) for the Database software, refer to Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux or Oracle Grid Installation and Upgrade Guide Release 2 (12.2) for Linux, which contains additional mandatory steps.
- If you are using ASM, use the following command to add
asmdba
to theoracle
user's groups:$ /usr/bin/usermod -G dba,asmdba,asmadmin <oracle software user>
5.1.5 Other Considerations
- Manually run
cluvfy
before starting the installation from thegrid
home, using the following command:$ <12c Grid Software Stage>/runcluvfy.sh stage -pre dbinst -n <node_list>
Substitute<12c Grid Software Stage>
with the stage location on your system. Substitute<node_list>
with the names of the nodes in your cluster, separated by commas.
- Check that the SCAN LISTENERS and GRID LOCAL LISTENERS are still up and running as they are used by Rapid Install to convert the database into Oracle RAC.
- In order to avoid listener port conflicts, always select a different port pool for the EBS local listener and the Grid local listener. The Grid local listener uses port 1521 by default, so do not use that port for Oracle E-Business Suite unless you previously changed it during the grid installation.
- In a split-tier configuration, add the SCAN and VIP host details to the
/etc/hosts
file on the application tier server. Verify that the SCAN and VIP hosts can be pinged from the server where you are going to install the application tier. Note that this is only used during the installation process (and does not provide failover or load balancing). It should be removed once the installation is complete as the failover and load balancing are achieved by the SCAN as defined in DNS.
5.1.6 Prerequisite Patches
Deploying a role separated Oracle Grid Infrastructure on Oracle E-Business Suite requires additional patches. Refer to both of the following documents for details:
- Document 1320300.1, Oracle E-Business Suite Release Notes, Release 12.2
- Document 1594274.1, Oracle E-Business Suite Release 12.2: Consolidated List of Patches and Technology Bug Fixes, Appendix B: Additional Bug Fixes Needed When Installing Oracle E-Business Suite Release 12.2 For Use With Oracle Grid
5.2: Installing Oracle E-Business Suite Release 12.2 with Oracle Database 12cR1 on Cluster Nodes
Rapid Install allows several different configuration options, including, for example, a shared Oracle Home installed on either ASM or a shared file system.
5.2.1 The Rapid Install
This section describes the actions that the Oracle E-Business Suite Release 12.2 Rapid Install program performs.
5.2.1.1 Installation Phase
The installation phase performs the following tasks:
- Installs the database software on all of the selected nodes. If a shared Oracle Home is selected, the installer identifies the type of Oracle Home, performs the cluster verification, and performs the installation.
- Runs
rman
to restore the database on to the nominated shared storage (either ASM or shared file system). - Runs AutoConfig and configures the Oracle Database as a single instance for Oracle E-Business Suite.
- Runs
rconfig
to convert the single-node database to Oracle RAC. - Runs AutoConfig on all of the nodes in the cluster.
Figure 1 shows the Oracle E-Business Suite Release 12.2 Rapid Install Database Node screen, which has the following Oracle RAC options:
- RAC Enabled
- Storage Type
- Shared Oracle Home
- RAC Nodes
- Instance Prefix
When you select the Oracle RAC option, ASM storage is used by default. If you are not using ASM, you must specify the type of file system. Select the appropriate check box if you are using a shared Oracle Home. When you check the RAC Nodes box, the nodes list opens, enabling you to select the required nodes to be included in the installation. There is also an option to update the instance prefix.
Once you have made your selections, the installer checks to ensure that the prerequisites have been met. This may take a few minutes.
5.2.1.2 Validation Checks
Rapid Install performs a set of standard validation checks including, for example, available temporary and swap space. When performing a RAC installation, it also performs cluster verification. If there are any prerequisite failures, it will create an output file cluvfy_output_<number>.lst
in the temporary directory. It is essential that you review this file and resolve any problems. If there are any issues that can be fixed automatically, Rapid Install will create a fixup.sh
script, also in the temporary directory, which you will need to run.
While converting the Oracle Database to Oracle RAC, rconfig
uses the Oracle Grid local Listener, but once the AutoConfig configuration completes, it automatically switches to use the SCAN and Oracle E-Business Suite Database Listeners.
5.2.2 Post Install Steps
Update SRVCTL for the New listener.ora
Best practice is to use SRVCTL
to manage the Oracle E-Business Suite database. In order to use this, you need to perform the following additional steps:
TNS_ADMIN
cannot be shared as the directory path must be same on all nodes. Refer to Appendix F: Considerations When Using a Shared ORACLE_HOME for an example of the implementation required to use SRVCTL
to manage listeners in a shared Oracle Home configuration.- If you wish to use the port allocated to the default listener, you need to stop and remove the default listener. This will free up the port.
- Add the Oracle E-Business Suite listener as follows:
$ srvctl add listener -l <Listener_Name> -o <12cR1 ORACLE_HOME> -p <port>
$ srvctl setenv listener -l <Listener_name> -T TNS_ADMIN=$ORACLE_HOME/network/admin - On each node, add the AutoConfig
listener.ora
file as anifile
in$ORACLE_HOME/network/admin/listener.ora
. The contents of thelistener.ora
file, for example, would be as follows:ifile=<12c1 ORACLE_HOME>/network/admin/<context_directory>/listener.ora
- On each node, add the AutoConfig
tnsnames.ora
file as anifile
to$ORACLE_HOME/network/admin/tnsnames.ora
.
- On each node, add the AutoConfig
sqlnet.ora
file as anifile
to$ORACLE_HOME/network/admin/sqlnet.ora
. - Add
TNS_ADMIN
to the database as follows:$ srvctl setenv database -d <database_name> -t TNS_ADMIN=$ORACLE_HOME/network/admin
- Use
SRVCTL
to start the database and listeners.
5.2.3 Application Tier Install
Typically, the application tier is not hosted on the database server. In this case, you need to copy the configuration file $ORACLE_HOME/appsutil/config_<db_name>.txt
to the application tier. If you prefer to load the configuration from the database instead, use the Applications Database Listener port and
for the SID. For example: <server1.example.com>:<dbname>:<port>
Run Rapid Install, which will create the application tier with a patch, run, and non-editioned file system.
By default, all the database connections go through the SCAN Listener. Verify the SCAN host and port settings in the following files:
$TNS_ADMIN/tnsnames.ora
- Context file:
s_apps_jdbc_connect_descriptor
ands_apps_jdbc_patch_connect_descriptor
variables $FND_SECURE/<dbname>.dbc
- The port and Oracle WebLogic Server data source
<FMW_HOME>/user_projects/domains/EBS_domain_<dbname>/config/jdbc/EBSDataSource-<number>-jdbc.xml
5.2.4 Configure Load Balancing
Follow instructions given in Section 4.7.2.
5.3 Troubleshooting and Known Issues
- Ensure that all the prerequisites have been applied.
- If you encounter any errors, review the Rapid Install log for further information.
- On the Oracle Database tier:
ORACLE_HOME/appsutil/log/<context_name>/<Number>.log
- On the application tier(s):
$INST_TOP/apps/<CONTEXT_NAME>/logs/<mmddHHMI>.log
- Review the OUI inventory log
: <Global Inventory Location>/logs/install<mmddHHMI>.log
A commonly reported issue is error
PRVF-5640.
In this case, check that/etc/resolv.conf
does not have multiple domains defined.
- Depending on the installation, when Rapid Install completes, the
remote_listener
parameter will be set to<SCAN NAME>:<SCAN PORT>.
If this happens, update theremote_listener
parameter to<DBNAME>_remote
on all of the instances using the following command:SQL>alter system set remote_listener='<dbname>_remote' scope=both sid='<Instance_name>';
Also change thes_instRemoteListener
context variable in the context file, and then run AutoConfig.
Section 6: References
- Document 745759.1, Oracle E-Business Suite and Oracle Real Application Clusters Documentation Roadmap
- Document 384248.1, Sharing The Application Tier file system in Oracle E-Business Suite Release 12
- Document 387859.1, Using AutoConfig to Manage System Configurations with Oracle E-Business Suite Release 12
- Document 406982.1, Cloning Oracle Applications Release 12 with Rapid Clone
- Document 265633.1, Automatic Storage Management Technical Best Practices
- Oracle E-Business Suite Installation Guide: Using Rapid Install
- Oracle E-Business Suite Setup Guide
Appendices
The appendices are divided into two categories. The first category (consisting of Appendices A and B) is primarily intended to help with migrating non-Oracle RAC to Oracle RAC, and the second category (consisting of Appendices C, D, E, and F) is primarily for Rapid Install usage in setting up Oracle RAC.
Appendices: Useful for Migrating a Non-RAC System to Oracle RAC
Appendix A: A Sample Config XML file
This appendix shows example contents of an rconfig
XML input file. <!-- Comments like this -->
have been added to the code, and notes have been inserted between sections of code.
<n:RConfig xsi:schemaLocation="http://example.com/rconfig">
<n:ConvertToRAC>
<!-- Verify does a precheck to ensure all pre-requisites are met, before the conversion is attempted. Allowable values are: YES|NO|ONLY -->
<n:Convert verify="YES">
Note: The Convert verify
option in the ConvertToRAC.xml
file can take one of the three values (YES/NO/ONLY)
:
- YES:
rconfig
performs the prerequisites check and then starts the conversion. - NO:
rconfig
does not perform the prerequisites check prior to starting the conversion. - ONLY:
rconfig
only performs the prerequisites check and does not start the conversion.
In order to validate and test the settings specified for the conversion to Oracle RAC with rconfig
, it is advisable to execute rconfig
using Convert verify="ONLY"
prior to carrying out the actual conversion.
<!-- Specify current OracleHome of non-RAC database for SourceDBHome -->
<n:SourceDBHome> /oracle/product/12.1.0/db_1 </n:SourceDBHome> <!-- Specify OracleHome where the RAC database should be configured. It can be same as SourceDBHome -->
<n:TargetDBHome> /oracle/product/12.1.0/db_1 </n:TargetDBHome>
<!-- Specify SID of non-RAC database and credential. User with sysdba role is required to perform conversion -->
<n:SourceDBInfo SID="[SID]">
<n:Credentials>
<n:User>[user_name]</n:User>
<n:Password>[password]</n:Password>
<n:Role>sysdba</n:Role>
</n:Credentials>
</n:SourceDBInfo>
<!-- Specify the list of nodes that should have RAC instances running. LocalNode should be the first node in this nodelist. -->
<n:NodeList>
<n:Node name="node1"/>
<n:Node name="node2"/>
</n:NodeList>
<!-- Specify prefix for RAC instances. It can be same as the instance name for non-RAC database or different. The instance number will be attached to this prefix. Instance Prefix tag is optional starting with 11.2. If left empty, it is derived from db_unique_name-->
<n:InstancePrefix>[INSTANCE_PREFIX]</n:InstancePrefix>
<!-- Listener details are no longer needed starting 11.2. Database is registered with default listener and SCAN listener running from Oracle Grid Infrastructure home. -->
<!-- Specify the type of storage to be used by RAC database. Allowable values are CFS and ASM. The non-RAC database should have same storage type. -->
<n:SharedStorage type="ASM">
Note: rconfig
can also migrate the single instance database to ASM storage. If you want to use this option, specify the ASM parameters as per your environment in the above xml file.
The ASM instance name specified above is only the current node ASM instance. Ensure that ASM instances on all the nodes are running and the required disk groups are mounted on each of them.
The ASM disk groups can be identified by issuing the following statement when connected to the ASM instance:SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;
<!-- Specify Database Area Location to be configured for RAC database. If this field is left empty, current storage will be used for RAC database. For CFS, this field will have directory path. -->
<n:TargetDatabaseArea>+ASMDG</n:TargetDatabaseArea>
If you are using CFS for your current database files, specify "NULL" to use the same location, unless you want to switch to another CFS location. If you specify a path forTargetDatabaseArea
,rconfig
will convert the files to Oracle Managed Files nomenclature.<!-- Specify Flash Recovery Area to be configured for RAC database. If this field is left empty, current recovery area of non-RAC database will be configured for RAC database. If current database is not using Recovery Area, the resulting RAC database will not have a recovery area. -->
<n:TargetFlashRecoveryArea>
+ASMDG
</n:TargetFlashRecoveryArea>
</n:SharedStorage>
</n:Convert>
</n:ConvertToRAC>
</n:RConfig>
Appendix B: Database Upgrade Assistant Known Issues
This document does not use the Database Upgrade Assistant (DBUA); however, DBUA can be used to upgrade the database.
Database Upgrade Assistant (DBUA)
- If DBUA is used to upgrade an existing AutoConfig-enabled Oracle RAC database, you may encounter an error about a pre-12c listener existing in CRS. In such a case, copy the AutoConfig
listener.ora
to the<12c_ORACLE_HOME>/network/admin
directory, and merge the contents in with the existinglistener.ora
file.
Appendices: Useful for Migrating to, or Installing Oracle RAC
- Appendix C: An Example Grid Installation
- Appendix D: Enabling and Disabling SCAN Listener Support in AutoConfig
- Appendix E: Instance and Listener Interaction
- Appendix F: Considerations When Using a Shared ORACLE_HOME
- Appendix G: Configuring ASM Disks
- Appendix H: Job Role Separation
- Appendix I: Configuring Parallel Concurrent Processing
- Appendix J: Known Issues
Appendix C: An Example Grid Installation
The following assumes a fresh Grid install, and is intended for those who may be less experienced with Clusterware, or who may be doing a test install.
- Start the installer.
- Choose "Install and Configure Grid Infrastructure for a Cluster". Click "Next".
- Choose "Advanced Configuration". This is needed when specifying a SCAN name that is different from the cluster name. Click "Next".
- Choose Languages. Click "Next".
- Uncheck "Configure GNS" - this is for experienced users only.
- Enter the cluster name, SCAN name and SCAN port. Click "Next".
- Add Hostnames and Virtual IP names for nodes in the cluster.
- Click "SSH Connectivity" and click "Test". If SSH is not established, enter the OS user and password and let the installer set up passwordless connectivity. Click "Test" again, and if successful click "Next".
- Choose one interface as public, one as private. "eth0" should be public, while "eth1" is usually set up as private. Click "Next".
- Uncheck "Grid Infrastructure manager" in the "configuration repository" page.
- Choose "Shared File System". Click "Next".
- Choose the required level of redundancy, and enter location for the OCR disk. This must be located on shared storage. Click "Next".
- Choose the required level of redundancy, and enter location for the voting disk. This must be located on shared storage. Click "Next".
- Choose the default of "Do not use" for IPMI. Click "Next".
- Select an operating system group for the
operator
anddba
accounts. For the purposes of this example installation, choose the same group, such as "dba
", for both. Click "Yes" in the popup window that asks you to confirm that the same group should be used for both, then click "Next". - Enter the Oracle Base and Oracle Home. The Oracle Home should not be located under Oracle Base. Click "Next"
- Enter Create Inventory location. Click "Next".
- In the "Root Script Execution" page, either select or unselect "Automatically run configuration scripts" option (as you prefer).
- System checks are now performed. Fix any errors by clicking "Fix and Check Again", or check "Ignore All" and click "Next". If you are not familiar with the possible effects of ignoring errors, it is advisable to fix them.
- Save the response file for future use, then click "Finish" to start the install.
- You will be required to run various scripts as root during the installation. Follow the relevant on-screen instructions.
Appendix D: Enabling and Disabling SCAN Listener Support in AutoConfig
Managing the SCAN listener is handled on the database server. All that is required on the middle tier is to re-run AutoConfig to pick up the updated connection strings.
- Switching from SCAN to non-SCAN
  - Modify the database tier context variables: set s_scan_name=null, s_scan_port=null, and s_update_scan=TRUE.
  - Modify the database init.ora parameters: local_listener should be <sid>_local, and remote_listener should be <service>_remote (to allow failover aliases).
  - Run AutoConfig on the database tier to create non-SCAN aliases in tnsnames.ora.
  - Run AutoConfig on the middle tier to create non-SCAN aliases in tnsnames.ora.
- Re-enabling SCAN
  - Modify the database tier context variables: set s_scan_name=<scan_name>, s_scan_port=<scan_port>, and s_update_scan=TRUE.
  - Modify the database init.ora parameter remote_listener to <scan_name>:<scan_port> for all instances, using the SQL command alter system set remote_listener='...'.
  - Modify the database tier context variable s_instRemotelistener to the <service>_remote value.
  - Run AutoConfig on the database tier to create SCAN aliases in tnsnames.ora.
  - Run AutoConfig on the middle tier to create SCAN aliases in tnsnames.ora.
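As a sketch of the remote_listener change in the re-enabling steps above, assuming a hypothetical SCAN name of apps-scan.example.com on port 1521, the SQL might look like this:

```sql
-- Hypothetical SCAN name and port; SID='*' applies the change to all
-- instances, and SCOPE=BOTH updates both the spfile and the running instances.
ALTER SYSTEM SET remote_listener='apps-scan.example.com:1521' SCOPE=BOTH SID='*';
```

Substitute your actual SCAN name and port as recorded in the context variables s_scan_name and s_scan_port.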
Appendix E: Instance and Listener Interaction
Understanding how instances and listeners interact is best explained with an example. Consider a two-node RAC cluster, with nodes C1 and C2.
In this example, two local listeners are used, the default listener and an EBS listener. There is nothing special about the EBS listener - it could equally have been called the ABC listener.
Listener Configuration
Listener Type | Node | SCAN Name | Host Name | VIP Name | Listener Host | Listener Port | Listener Address |
---|---|---|---|---|---|---|---|
EBS listener | C1 | N/A | C1 | C1-VIP | C1 | 1531 | C1 and C1-VIP |
C2 | N/A | C2 | C2-VIP | C2 | 1531 | C2 and C2-VIP | |
Default listener | C1 | N/A | C1 | C1-VIP | C1 | 1521 | C1 and C1-VIP |
C2 | N/A | C2 | C2-VIP | C2 | 1521 | C2 and C2-VIP | |
SCAN | Either C1 or C2 | C-SCAN | N/A | N/A | Either C1 or C2 | 1521 | C-SCAN |
Note the following:
- The SCAN and local listeners can be on the same port as they listen on different addresses.
- The SCAN listener can run on either C1 or C2.
- Listeners have no built-in relationship with instances.
SRVCTL configuration
Listener Type | Listener Name | Listener Port | Listener Host | Listener Address |
---|---|---|---|---|
General [Local] | listener | 1521 | C1 | C1 and C1-VIP |
1521 | C2 | C2 and C2-VIP | ||
ebs_listener | 1531 | C1 | C1 and C1-VIP | |
1531 | C2 | C2 and C2-VIP | ||
SCAN | SCAN [ name doesn't matter and can be default ] | 1521 | Either C1 or C2 | C-SCAN |
Instance to Listener Assignment
The relationship between instances and listeners is established by the local_listener and remote_listener init.ora parameters.
Local_Listener
- The instance broadcasts to the address list, informing the listeners that the instance is now available. The local listener must be running on the same node as the instance, as the listener spawns the Oracle processes. The default value comes from the cluster.
Remote_Listener
- The instance broadcasts to the address list, informing the listeners that the instance is now available for accepting requests, and that the requests are to be handled by the local_listener address. The remote hosts can be on any machine. There is no default value for this parameter.
Database | Instance | Node | Local_Listener | Remote_Listener | Default Listener Status | EBS Listener Status | SCAN Listener Status |
---|---|---|---|---|---|---|---|
D1 | I1 | C1 | Set to C1 & C1-VIP on 1531 | C-SCAN/1521 | I1 is unavailable | I1 is available | I1 is available via redirect to EBS Listener for C1 |
Set to C1 & C1-VIP on 1531 | C1/C1-VIP on 1531, C2/C2-VIP on 1531 | I1& I2 are unavailable | I1 is available. I2 is available via redirect to EBS Listener for C2. | I1 not available | |||
Not set. Instance uses cluster default listener - i.e. C1 & C1-VIP on 1521 | C-SCAN/1521 | I1 is available | I1 is unavailable. | I1 is available via redirect to Default Listener for C1 | |||
I2 | C2 | Set to C2 & C2-VIP on 1531 | C-SCAN/1521 | I2 is unavailable | I2 is available | I2 is available via redirect to EBS Listener for C2 | |
Set to C2 & C2-VIP on 1531 | C1/C1-VIP on 1531, C2/C2-VIP on 1531 | I2 & I1 are unavailable | I2 is available. I1 is available via redirect to EBS Listener for C1. | I2 not available | |||
Not set. Instance uses cluster default listener - i.e. C2 & C2-VIP on 1521 | C-SCAN/1521 | I2 is available | I2 is unavailable | I2 is available via redirect to Default Listener for C2 |
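The first row of the table above (I1 registering locally with the EBS listener and remotely with the SCAN listener) might correspond to init.ora entries of the following general form; the exact syntax of the address strings here is illustrative:

```
# Instance I1 on node C1: register locally with the EBS listener on port 1531,
# and remotely with the SCAN listener on port 1521.
I1.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=C1-VIP)(PORT=1531))))'
I1.remote_listener='C-SCAN:1521'
```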
Appendix F: Considerations When Using a Shared ORACLE_HOME
In Oracle 12c, listeners are configured at the cluster level, and all nodes inherit the port and environment settings. This means that the TNS_ADMIN
directory path will be the same on all nodes. In a shared ORACLE_HOME
configuration, the TNS_ADMIN
directory must be a local, non-shared directory in order to be able to use AutoConfig generated network files. These network files will be included as ifiles.
The following is an example for setting up TNS_ADMIN
for a shared <ORACLE_HOME>
in a two-node cluster, C1 and C2, with respective instances I1 and I2.
- Modify the s_db_listener context parameter to a common listener name, such as <listener_ebs>. Repeat this for all instance context files.
- Run AutoConfig on both nodes. This will create listener.ora and tnsnames.ora under the node network directories, i.e. <ORACLE_HOME>/network/admin/<i1_c1> and <ORACLE_HOME>/network/admin/<i2_c2>.
- Edit the AutoConfig-generated listener.ora files and change LISTENER_<c1|c2> to the common listener name <listener_ebs>. Skip this step if AutoConfig was run after the s_db_listener change, as the files will already use the common name.
- Create a <local_network_dir>, such as /etc/local/network_admin.
- Create a listener.ora under <local_network_dir> on each node:
  On node <C1>: ifile=<ORACLE_HOME>/network/admin/<i1_c1>/listener.ora
  On node <C2>: ifile=<ORACLE_HOME>/network/admin/<i2_c2>/listener.ora
- Create a tnsnames.ora under the <local_network_dir> directory on each node:
  On node <C1>: ifile=<ORACLE_HOME>/network/admin/<i1_c1>/tnsnames.ora
  On node <C2>: ifile=<ORACLE_HOME>/network/admin/<i2_c2>/tnsnames.ora
- Add the common listener name to the cluster and set TNS_ADMIN to the non-shared directory:
  $ srvctl add listener -l <listener_ebs> -o <ORACLE_HOME> -p <port>
  $ srvctl setenv listener -l <listener_ebs> -t TNS_ADMIN=<local_network_dir>
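The ifile steps above for node C1 can be sketched in shell as follows. This is an illustrative sketch only: the ORACLE_HOME path and the instance/node directory name are assumptions, and a real deployment would use a root-owned local directory such as /etc/local/network_admin rather than /tmp.

```shell
#!/bin/sh
# Sketch: build a local (non-shared) TNS_ADMIN directory whose network files
# include, via ifile, the AutoConfig-generated files on the shared ORACLE_HOME.
# Paths below are illustrative assumptions, not values from the document.
ORACLE_HOME=/u01/app/oracle/product/12.1.0/db   # assumed shared home
NODE_NET_DIR=$ORACLE_HOME/network/admin/i1_c1   # AutoConfig dir for I1 on C1
LOCAL_NET_DIR=/tmp/network_admin                # stand-in for /etc/local/network_admin

mkdir -p "$LOCAL_NET_DIR"
echo "ifile=$NODE_NET_DIR/listener.ora" > "$LOCAL_NET_DIR/listener.ora"
echo "ifile=$NODE_NET_DIR/tnsnames.ora" > "$LOCAL_NET_DIR/tnsnames.ora"
```

On node C2 the same script would point at the i2_c2 directory; afterwards, srvctl setenv (as in the final step above) makes the cluster-managed listener pick up the local TNS_ADMIN.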
Appendix G: Configuring ASM Disks
Automatic Storage Management (ASM) simplifies the administration of Oracle database files by allowing the administrator to reference disk groups rather than individual disks and files, which ASM manages internally. This appendix explains how to create ASM disks required for configuring ASM. You need to run the following commands as the root
user.
You can create the ASM disks as follows:
- Configure Oracle ASMLib using the following command:
$ /etc/init.d/oracleasm configure
Check the configuration file /etc/sysconfig/oracleasm after running the above command to ensure that it is configured properly. You can see the following example:
$ /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
- Check Oracle ASMLib using the following command:
$ /etc/init.d/oracleasm status
You can see the following example:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
- Create disk partitions. ASMLib requires the candidate disks to be partitioned before they can be accessed:
$ fdisk <device_name>
You can see the following example:
$ fdisk /dev/xvdd
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
- Make disks available to Oracle ASMLib. Every disk that ASMLib is going to access needs to be made available. This is accomplished by creating an ASM disk:
$ oracleasm createdisk <asmdisk_name> <disk_partition>
You can see the following example:
$ oracleasm createdisk asmdisk1 /dev/xvdd1
Writing disk header: done
Instantiating disk: done
$ oracleasm createdisk asmdisk2 /dev/xvde1
Writing disk header: done
Instantiating disk: done
$ oracleasm createdisk asmdisk3 /dev/xvdf1
Writing disk header: done
Instantiating disk: done
$ oracleasm listdisks
ASMDISK1
ASMDISK2
ASMDISK3
When a disk is added on the first node of a cluster, the other nodes need to be notified about it. Run the createdisk command on one node, and run the scandisks command on all the other nodes of the cluster. Until you run the scandisks command on the other nodes, the disks created on the first node will not be visible on those nodes:
$ /etc/init.d/oracleasm scandisks
Scanning system for ASM disks [ OK ]
Appendix H: Job Role Separation
Create Job Role Separation Operating System Privileges Groups, Users, and Directories
This section provides the instructions on how to create the operating system users and groups to install the Oracle software using Job Role Separation. This configuration divides the administration privileges at the operating system level. In this section, the grid
user is the owner of the Grid infrastructure software and Oracle Automatic Storage Management binaries, and oracle
is the owner of the Oracle RAC Software binaries. Both users must have an Oracle Inventory group as their primary group (for example oinstall
).
For further information, refer to the "Oracle ASM Groups for Job Role Separation Installations" section in Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux or Oracle Grid Installation and Upgrade Guide Release 2 (12.2) for Linux.
Several operating system groups can be created in order to separate the various administration privileges. The following table shows a configuration that provides a high degree of separation; it needs to be set up correctly to ensure that there are no permission or other issues when running Rapid Install.
Description | OS Group Name | OS Users Assigned to this Group | Oracle Privilege | Oracle Group Name |
---|---|---|---|---|
Oracle Inventory and Software Owner | oinstall | grid, oracle | ||
Oracle Automatic Storage Management Group | asmadmin | grid | SYSASM | OSASM |
ASM Database Administrator Group | asmdba | grid, oracle | SYSDBA for ASM | OSDBA for ASM |
ASM Operator Group | asmoper | grid | SYSOPER for ASM | OSOPER for ASM |
Database Administrator | dba | oracle | SYSDBA | OSDBA |
Database Operator | oper | oracle | SYSOPER | OSOPER |
Create the Groups and Users for the Grid Software
Create the oinstall, asmadmin, asmdba, and asmoper groups using the following commands (with root privileges):
$ groupadd -g 9999 oinstall
$ groupadd -g 8888 asmadmin
$ groupadd -g 7777 asmdba
$ groupadd -g 6666 asmoper
The following command creates a user named grid
(who owns the Grid infrastructure) and assigns the necessary groups to that user.
$ useradd -g oinstall -G asmadmin,asmdba,asmoper -d <Home Directory> grid
Set the grid user password (for example, with the passwd command).

Create Groups for the Oracle Software
$ groupadd -g 1010 dba
$ groupadd -g 1020 oper
The following command creates a user named oracle (who will own the Oracle RAC software) and assigns the dba, oper, and asmdba groups; asmdba membership is necessary when using different users for Grid and Oracle:
$ useradd -g oinstall -G dba,oper,asmdba -d <Home Directory> oracle
Remember to set the resource limits for the Oracle software installation users as per the documentation.
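As an illustrative sketch of those resource limits, /etc/security/limits.conf entries for the two users often take a form like the following; the values shown are typical examples only, and the exact figures should be confirmed against the Grid Infrastructure installation guide for your platform and release:

```
# /etc/security/limits.conf - illustrative resource limits for the grid and
# oracle users; confirm exact values in the installation guide for your release
grid    soft    nofile   1024
grid    hard    nofile   65536
grid    soft    nproc    2047
grid    hard    nproc    16384
grid    soft    stack    10240
oracle  soft    nofile   1024
oracle  hard    nofile   65536
oracle  soft    nproc    2047
oracle  hard    nproc    16384
oracle  soft    stack    10240
```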
Appendix I: Configuring Parallel Concurrent Processing
Oracle E-Business Suite Release 12.2 configured with logical host names supports Parallel Concurrent Processing (PCP), which can be used to distribute concurrent managers across multiple nodes, in order to achieve high performance and a fault tolerant system.
If your environment only has a single application tier node, you can add another node by referring to 5.3 Adding a New Application Tier Node to an Existing System of My Oracle Support Knowledge Document 1383621.1, Cloning Oracle E-Business Suite Release 12.2 with Rapid Clone.
Set Up PCP
For further information on PCP and to understand recent developments, refer to the "Overview of Parallel Concurrent Processing" section in the Oracle E-Business Suite Setup Guide. The following steps are used to configure PCP.
- If it is not already set, use the Context Editor (via the Oracle Applications Manager interface) to set the value of the variable APPLDCP to ON. This specifies that distributed concurrent processing is to be used.
- Execute AutoConfig by running the following command on each of the concurrent processing nodes:
$ $INST_TOP/admin/scripts/adautocfg.sh
- Check the tnsnames.ora and listener.ora configuration files, located in $INST_TOP/ora/10.1.2/network/admin. Ensure that the required FNDSM and FNDFS entries are present for all other concurrent nodes.
- Restart the Applications Listener processes on each application tier node.
- Log on to Oracle E-Business Suite Release 12 using the SYSADMIN account, and choose the System Administrator responsibility. Navigate to the Install > Nodes screen and ensure that each node in the cluster is registered.
- Verify that the Internal Monitor for each node is defined properly, with the correct primary node specification and work shift details. For example, Internal Monitor: Host1 must have "host1" as its primary node. Also ensure that the Internal Monitor manager is activated; this can be done from Concurrent > Manager > Administrator.
Define the standard managers to run in parallel on other nodes. For example, on the two-node Concurrent Processing environment, create a standard manager with multiple work shifts and assign the primary node as the second node; then start that manager. Verify that the standard manager processes are running on the second node. An example Manager setup for a Two-Node PCP Configuration is given in the following table.
Manager | Primary Node | Secondary Node | Running Node |
---|---|---|---|
Internal Manager | Node1 | Node2 | Node1 |
Internal Monitor | Node1 | Node2 | Node1 |
Internal Monitor | Node2 | Node1 | Node2 |
Standard Manager1 | Node1 | Node2 | Node1 |
Standard Manager2 | Node2 | Node1 | Node2 |
- Set the $APPLCSF environment variable on all of the Concurrent Processing nodes to point to a log directory on a shared file system.
- Set the $APPLPTMP environment variable on all of the CP nodes to the value of the UTL_FILE_DIR entry in the init.ora on the database nodes. (This value should point to a directory on a shared file system.)
on the database nodes. (This value should point to a directory on a shared file system.) - Set the profile option 'Concurrent: PCP Instance Check' to OFF if database instance-sensitive failover is not required. By setting it to 'ON', a concurrent manager will fail over to a secondary application tier node if the database instance to which it is connected becomes unavailable for some reason.
Set Up Transaction Managers
- Shut down the application services (servers) on all nodes.
- Shut down all the database instances cleanly in the Oracle RAC environment, using the command:
SQL>shutdown immediate;
- Edit $ORACLE_HOME/dbs/<context_name>_ifile.ora. Add the following parameters:
- _lm_global_posts=TRUE
- _immediate_commit_propagation=TRUE
- Start the instances on all database nodes, one by one.
- Start up the application services (servers) on all nodes.
- Log on to Oracle E-Business Suite Release 12 using the SYSADMIN account, and choose the System Administrator responsibility. Navigate to Profile > System, change the profile option 'Concurrent: TM Transport Type' to 'QUEUE', and verify that the transaction manager works across the Oracle RAC instances.
- Navigate to Concurrent > Manager > Define screen, and set up the primary and secondary node names for transaction managers.
- Restart the concurrent managers.
- If any of the transaction managers are in deactivated status, activate them from Concurrent > Manager > Administrator.
Configure/Enable Load Balancing
Parallel Concurrent Processing can load balance connections across nodes in a RAC environment. If you are implementing PCP on the application tier, you will need to set up load balancing on the new application tier by executing following steps:
- Using the Context Editor (via the Oracle Applications Manager interface), modify the variables as follows:
- To load balance the database connections for the Oracle Forms-based applications, set the value of "Tools OH TWO_TASK" (s_tools_twotask) to point to the <database_name>_balance alias generated in the tnsnames.ora file.
- To load balance the database connections for the Self-Service (HTML-based) applications, set the value of "iAS OH TWO_TASK" (s_weboh_twotask) and "Apps JDBC Connect Alias" (s_apps_jdbc_connect_alias) to point to the <database_name>_balance alias generated in the tnsnames.ora file.
- To load balance concurrent processing across the Oracle E-Business Suite database connections, set the value of "Concurrent Manager TWO_TASK" (s_cp_twotask) to point to the <database_name>_balance alias generated in the tnsnames.ora file.
- Execute AutoConfig by running the command:
$ $AD_TOP/bin/adconfig.sh contextfile=$INST_TOP/appl/admin/<context_file>
- Restart the Oracle E-Business Suite processes using the new scripts that were generated by AutoConfig.
- Ensure that the value of the profile option "Application Database ID" is set to the dbc file name generated in $FND_SECURE.
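For reference, an AutoConfig-generated <database_name>_balance alias in tnsnames.ora typically has a load-balancing address list of the following general form; the alias, host, port, and service names here are illustrative only:

```
VIS_BALANCE=
        (DESCRIPTION=
            (ADDRESS_LIST=
                (LOAD_BALANCE=YES)
                (FAILOVER=YES)
                (ADDRESS=(PROTOCOL=tcp)(HOST=c-scan.example.com)(PORT=1521))
            )
            (CONNECT_DATA=
                (SERVICE_NAME=VIS)
            )
        )
```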
Node Affinity with Concurrent Managers
Node affinity with the concurrent manager is achieved by using the environment variable TWO_TASK with the concurrent manager. Prior to Oracle E-Business Suite Release 12.2.4, environment variables such as TWO_TASK would be the same for all the managers on a node.
From Oracle E-Business Suite Release 12.2.4 and onwards, it is possible to associate a node with a concurrent manager, thereby achieving node affinity at the concurrent manager level. You can achieve node affinity at the concurrent manager level by setting an environment variable TWO_TASK, which can be different for each of the managers.
After setting the TWO_TASK variable for a concurrent manager, you must restart that concurrent manager for the changes to take effect. FNDSM reads the table FND_CONC_QUEUE_ENVIRON
while starting the concurrent managers to get the concurrent manager-specific environment variables. The instance affinity settings at the concurrent program level override the environment variable settings at the concurrent manager level. This applies to all managers, with the exception of Java services, the Service managers, and the ICM.
The concurrent manager environment variables can be set as follows:
1. Log in to Oracle E-Business Suite.
2. Navigate to System Administrator responsibility.
3. Navigate to Define > Concurrent Manager.
4. Click on the Environment form.
5. Set the TWO_TASK environment variable.
Note: The concurrent manager environment variable settings are removed when FND_CONC_CLONE.SETUP_CLEAN is executed. You will have to set them again after executing this script.

Appendix J: Known Issues
- Before running rconfig to convert the database to RAC, you need to run the following command as the grid user on all nodes in the cluster:
$ $CRS_HOME/bin/setasmgidwrap -o $ORACLE_HOME/bin/oracle
- Whenever the Oracle Home is patched, the group of the oracle executable will be changed back to the install group. For example, prior to patching, the group is asmadmin:
-rwsr-s--x. 1 oracle asmadmin 324001921 Nov 1 01:22 /u01/NONCDB_DB122/12.1.0/bin/oracle
After patching, the group will be changed to oinstall:
-rwsr-s--x. 1 oracle oinstall 324001921 Nov 1 01:22 /u01/NONCDB_DB122/12.1.0/bin/oracle
To get around this problem, the recommended method is to always start the database using srvctl, which automatically runs setasmgidwrap and will reset the group to asmadmin. Alternatively, you can manually run setasmgidwrap as the grid user to revert the group to asmadmin:
$ $CRS_HOME/bin/setasmgidwrap -o $ORACLE_HOME/bin/oracle