
Tuesday, March 19, 2024

19c Database with Oracle E-Business Suite R12 Known Issues & Solutions

 The purpose of this document is to present known issues and solutions for the 19c Database Upgrade with Oracle E-Business Suite (EBS).

Important Information Regarding the Upgrade to Oracle Database 19c

  1. When upgrading your Oracle E-Business Suite to Oracle Database 19c, your database will be converted to a multitenant architecture, which will consist of a Container Database (CDB) with a single Pluggable Database (PDB). Only multitenant architecture databases are supported for Oracle E-Business Suite with Oracle Database 19c.
  2. During the upgrade, you will also perform steps to migrate directories defined for PL/SQL File I/O to database directory objects. This requirement is due to the desupport in Oracle Database 19c of the UTL_FILE_DIR database initialization parameter.
  3. Oracle 19c Database Release Update Information for Oracle E-Business Suite:
    • Oracle Database Release Update 19.3 as well as Release Update 19.5 and later are certified.
    • Oracle recommends that you upgrade to the latest Oracle Database Release Update that is certified with Oracle E-Business Suite. Applying the latest Release Update will ensure that you have the security-related fixes and high-priority non-security fixes. See My Oracle Support Knowledge Document 2285040.1, Release Update Introduction and FAQ, for more details.
    • You can upgrade directly from an earlier Oracle Database version to Oracle Database Release Update 19.3 or Release Update 19.6 or later.
    • You cannot upgrade directly to Oracle Database Release Update 19.5. You must first upgrade from your earlier Oracle Database version to Oracle Database 19.3, and then apply Oracle Database Release Update 19.5.

 E-Business Suite 19c Database Upgrade Analyzer (Doc ID 2704990.1)

The E-Business Suite 19c Database Upgrade Analyzer is a self-service health-check script that reviews E-Business Suite R12 19c database upgrade data, analyzes the current configurations and settings for the environment, and provides feedback and recommendations on the progress of your 19c database upgrade, including any potentially missing or failed upgrade steps. Analyzers may also be used as data collection tools by Support Engineers. Your application data is not altered in any way when you run this analyzer.

----------------------------------------
NOTE: In the images below and/or the attached document, user details / company name / address / email / telephone number represent a fictitious sample (based upon made up data used in the Oracle Demo Vision instance). Any similarity to actual persons, living or dead, is purely coincidental and not intended in any manner.
----------------------------------------
NOTE: Make sure to review Doc ID 2567105.1 which contains commonly asked questions and answers about using Oracle E-Business Suite with the multitenant architecture.

 

QUESTIONS AND ANSWERS

 

Questions

Question | Answer
Which listener.ora should be used to start the database listener?

$ORACLE_HOME/network/admin/listener.ora should be used. Start the listener with the $ORACLE_HOME/appsutil/scripts/<pdbname>_<hostname>/adcdblnctl.sh script, for example: adcdblnctl.sh start <CDBNAME>

How do you know which listener.ora is in use?

Run the lsnrctl status <CDBNAME> command and look for the value of "Listener Parameter File" in the output. Example:

Listener Parameter File   /u01/oracle/db/tech/19.3/network/admin/listener.ora

 

Problems

 

Database Issues

Issue | Solution
ORA-65064: system ts doesn't exist

An ORA-65064 might be raised when you attempt to plug a legacy database into a CDB, where the legacy database was created under a pre-V8 database release and then moved to later releases using the migrate utility.

Cause: When you plug that database into a Multitenant environment you will see an ORA-65064 and the tracefile will have a message that reads:

    kpdbcPlugVerifyFiles: ORA-65064: system ts doesn't exist

Solution:

The multitenant architecture has a fundamental assumption that any plugged-in database will have a file from the SYSTEM tablespace with a relative file number of 1.

You may be seeing this issue if the database was created under V7 or earlier, was upgraded to later releases using the migrate utility, an ORA-65064 is raised during the plug-in operation, and the tracefile indicates that the SYSTEM tablespace could not be found.

Checking the XML file generated using dbms_pdb.describe() should also show that there is no SYSTEM tablespace with a relative file number (<rfn>..</rfn>) of 1.

WORKAROUND:
Use exp/imp or datapump to rebuild the database first and then perform the
plug in operation.

ORA-7445 [qkacco] when running a Complex Query with Nested Joins

ORA-07445: exception encountered: core dump [qkacco()+91] [SIGSEGV] [ADDR:0x0] [PC:0x289204B] [Address not mapped to object] []

Solution:

ORA-7445 [qkacco] when Running a Complex Query with Nested Joins (Doc ID 2664281.1)


Do not set optimizer_adaptive_plans=false at the database level as a workaround. Instead, use a SQL profile, hints, "alter session" commands, or similar to change the setting at the session level for the problematic SQL until you can apply the patch for database Bug 30621255.

ORA-01034 while running txkPostPDBCreationTasks.pl script

Starting Context Generation Tool at Tue May 19 13:24:17 CDT 2020
Using GenerateContext version 120.20.12010000.16

Connecting to the cgdev2 database instance...

Connection parameters values:
Database server hostname ==> databasehostname.xx.yy.com
Database listener port ==> 1521
Database SID ==> PDBNAME
Database schema name ==> apps

Could not Connect to the Database : ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Additional information: 4376
Additional information: 226357863

 The database listener.ora should have the following setting:

USE_SID_AS_SERVICE_<CDBNAME>=ON

Without this setting, GenerateContext.class fails to connect to the database because it uses the JDBC connect string notation hostname:port:SID. This changed in 19c, which expects hostname:port/SID (note the / instead of the :). Setting USE_SID_AS_SERVICE_<CDBNAME>=ON resolves the issue.
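For illustration only, the two connect string forms look like this (host, port, and names below are placeholders, not values from the original note):

```
jdbc:oracle:thin:@dbhost:1521:PDBNAME    -- SID notation used by GenerateContext.class
jdbc:oracle:thin:@dbhost:1521/PDBNAME    -- service notation expected by 19c
```

With USE_SID_AS_SERVICE_<CDBNAME>=ON, the listener treats the SID in the first form as a service name, so the old-style connect string keeps working.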

Invalid APPS database user credentials errors when running txkPostPDBCreationTasks.pl

The post-PDB script fails with the following errors:

*******FATAL ERROR*******
PROGRAM : (.../appsutil/bin/txkPostPDBCreationTasks.pl)
TIME : Sat May 16 16:47:09 2020
FUNCTION: main::validateAppsSchemaCredentials [ Level 1 ]
ERRORMSG: Invalid APPS database user credentials.
*******FATAL ERROR*******
PROGRAM : (/.../appsutil/bin/txkPostPDBCreationTasks.pl)
TIME : Sat May 16 16:47:09 2020
FUNCTION: main::validateAppsSchemaCredentials [ Level 1 ]
ERRORMSG: Invalid APPS database user credentials.

The database parameter instance_name was set to the PDB name. It should be set to the CDB name.

PDB services are not registering with database listener 

 

Make sure the database parameter local_listener is not set at the PDB level. This value should be inherited from the CDB. After the change, run the "alter system register;" command so that the services will be registered with the listener.
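A sketch of one way to verify and fix this, assuming a privileged connection to the CDB (the v$system_parameter columns are standard; <PDBNAME> is a placeholder):

```sql
-- Show local_listener in every container; for the PDB rows,
-- ISDEFAULT='TRUE' indicates the value is inherited from the CDB.
select con_id, name, value, isdefault
from   v$system_parameter
where  name = 'local_listener';

-- If it was set explicitly in the PDB, reset it so the CDB value
-- is inherited, then re-register services with the listener.
alter session set container = <PDBNAME>;
alter system reset local_listener;
alter system register;
```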

 

High "row cache mutex" waits are seen in AWR reports due to SQLs against v$session/v$process

Example SQLs :

0u11juyazvjf5
SELECT PR.SPID , SE.SID , SE.PROGRAM , SE.MODULE FROM V$SESSION SE ,
V$PROCESS PR WHERE SE.SID = (SELECT SID FROM V$MYSTAT WHERE ROWNUM = 1)
AND SE.PADDR = PR.ADDR

0y6qmpuvm4w9u


select sid from v$session, v$process where audsid=
sys_context('userenv', 'sessionid') and addr=paddr

1wr32ytknh1u9

 

SELECT P.PID, S.PROCESS, P.SERIAL#, P.SPID FROM V$PROCESS P,
V$SESSION S WHERE S.AUDSID = USERENV('SESSIONID') AND S.PADDR = P.ADDR

0y7y17bcappxk

 

SELECT P.PID, P.SERIAL#, P.SPID FROM V$PROCESS P, V$SESSION S WHERE
S.AUDSID = USERENV('SESSIONID') AND S.PADDR = P.ADDR

 The solution is to apply database Patch 31903523. Until then, contention can be reduced as follows:

- Disable EBS auditing via the "Sign-On:Audit Level" profile option.

- If MWA is used, apply MWA Patch 17288009 to reduce the number of executions of SQL ID 0y6qmpuvm4w9u.

- Turn off SQL tracing.

Running ADZDDBCC.sql in a 19c database takes a long time for Section 26.

 Before running the report, set the parameter _px_cdb_view_enabled to false:

alter session set "_px_cdb_view_enabled"=false ;
Concurrent programs are failing with "ORA-20100: Temporary file creation for FND_FILE failed" errors.

This can be reproduced as follows in SQL-Plus :

SQL> exec FND_FILE.PUT_LINE(FND_FILE.LOG, 'THIS IS A TEST');
BEGIN FND_FILE.PUT_LINE(FND_FILE.LOG, 'THIS IS A TEST'); END;

*
ERROR at line 1:
ORA-20100: ORA-20100: Temporary file creation for FND_FILE failed.
Directory &FILE_DIR is invalid, contact your system administrator.
Action: Check that the directory is a valid directory with write permissions on
the database server node and is listed under utl_file_dir para
ORA-06512: at "APPS.FND_FILE", line 319
ORA-06512: at "APPS.FND_FILE", line 364
ORA-06512: at "APPS.FND_FILE", line 421
ORA-06512: at line 1

 There are three possible reasons for this failure:

1. The utl_file_dir parameter is not set. Run the following SQL to check:

select value from apps.v$parameter where name = 'utl_file_dir';

2. Event 10946 is not set as explained in Note 2525754.1:

alter system set events '10946 trace name context forever, level 8454144';
alter system set event='10946 trace name context forever, level 8454144' scope=spfile;

3. The first value returned by the SQL above does not match the value of the $APPLPTMP environment variable.

Materialized view refresh programs are running longer and locks are seen in the database

This is caused by RDBMS Bug 30402386 and explained in Note 2716307.1.

Apply the database patch. Until then, set the following hidden parameter as a temporary workaround:

ALTER SYSTEM SET "_mv_refresh_shrink_log"=FALSE;
Database alert.log reports parsing errors for FND_SESSION_MANAGEMENT package

2021-03-09T05:07:08.548176-08:00
mypdb(3):WARNING: too many parse errors, count=621 SQL hash=0x8a3bb1a5
mypdb(3):PARSE ERROR: ospid=24007, error=6550 for statement:
2021-03-09T05:07:08.549382-08:00
mypdb(3):declare PRAGMA AUTONOMOUS_TRANSACTION;begin
fnd_signon.audit_client_ip(:1,:2); commit;exception when others then
rollback;end;
mypdb(3):Additional information: hd=0x61fa9b68 phd=0x6daec960 flg=0x100476
cisid=65 sid=65 ciuid=65 uid=65 sqlid=bb9xkdf53rcd5
mypdb(3):...Current username=APPS
mypdb(3):...Application: e::bes:oracle.apps.icx.security.session.created
Action: /


mypdb(3):WARNING: too many parse errors, count=336 SQL hash=0xe2823bd1
mypdb(3):PARSE ERROR: ospid=16624, error=6550 for statement:
2021-03-08T04:16:36.045025-08:00
mypdb(3):BEGIN :1 := FND_SESSION_MANAGEMENT.getHttpOnly(:2); END;
mypdb(3):Additional information: hd=0xc12e4490 phd=0x84b82418 flg=0x100476
cisid=65 sid=65 ciuid=65 uid=65 sqlid=7kcmdtbj84fyj
mypdb(3):...Current username=APPS
mypdb(3):...Application: e::bes:oracle.apps.icx.security.session.created
Action: /

These errors are safe to ignore, since they will not be reported to end users. Apply E-Business Suite Patch 32429191:R12.FND.C and Patch 33429261:R12.FND.C.

 
txkCreatePDB.pl Completes with PDB Plugged in with "Open No Master Key"
When plugging an EBS PDB into an existing 19c container database using txkCreatePDB.pl, the wallet status for the newly created PDB shows "OPEN_NO_MASTER_KEY" and the PDB is in restricted mode:
===================
SQL> select * from v$encryption_wallet;

WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_OR FULLY_BAC CON_ID
-------------------------------------------------------------------------------------------------------------- -------------------- --------- ---------
FILE <Location> OPEN_NO_MASTER_KEY AUTOLOGIN SINGLE UNDEFINED 0

This applies to EBS 12.1.3 and all 12.2 versions that support the 19c database, including OCI, on-premises, Exadata, and any other platform.

1. This can be caused by passing a wrong value for the secret key while plugging in the PDB using txkCreatePDB.pl.

2. The script should stop and fail without plugging in the PDB when a wrong secret key argument is passed during the execution of txkCreatePDB.pl; enhancement request Enh 33566579 has been raised with Product Development for this.

3. To recover, apply the workaround given in the following note:

Manually Re-import the Secret Key in E-Business Suite Database On Premises & Cloud (Doc ID 2826133.1)

 

Bug 33405295 - DBMS_STATS.IMPORT_DATABASE_STATS FAILING WITH ERROR OPERATION NOT ALLOWED FROM WITHIN A PLUGGABLE DATABASE

Follow the action plan below to resolve the import stats error:

 

1. Gather schema stats on the source, as below:

sqlplus apps/<apps>
exec fnd_stats.gather_schema_statistics(schemaname=>'ALL',estimate_percent=>dbms_stats.auto_sample_size,options=>'GATHER AUTO');

2. Back up statistics into FND_STATTAB, following Doc ID 1581549.1, Section 7.10.2.1 (Exporting Statistics), using the FNDSTATS SQL provided within the note.

3. Then follow Doc ID: 2615617.1, as below:

Section 5.9 - Export the Applications database instance
Section 6.4 - Import the Applications database instance
Section 7.1 - Reset Advanced Queues
Section 7.2 - Run adgrants.sql
Section 7.3 - Gather statistics and fixed objects statistics
Section 7.4 - Run post PDB script

4. Upgrade the FND_STATTAB statistics table:

sqlplus system/<system_pw>@MOSCOW
exec dbms_stats.upgrade_stat_table('APPLSYS','FND_STATTAB');

5. Restore statistics from FND_STATTAB, following Doc ID 1581549.1, Section 7.10.2.2 (Importing Statistics), using the FNDSTATS SQL provided within the note.

 

Bug 33427856 - Gathering statistics for indexes on large tables in 19c may be time-consuming (Doc ID 33427856.8)

Apply Patch 33427856, or use the workarounds from the note until then.

Massive Performance Degradation On Statistics For Large Partitioned Tables After Upgrade To 19c (Doc ID 2611336.1)

 

OCI Database Issues

Issue | Solution
Failure in opening the PDB, after renaming it during refresh

Core Dump during DBMS_SERVICE.START_SERVICE using case sensitive name for PDB

DB System Platform: ExaCS

DB Version: 19.10

Solution:

Apply Patch 31143870

ORA-65092: system privilege granted with a different scope to 'OEM_MONITOR'

Failure in Datapatch with error:

ERROR at line 1:
ORA-65092: system privilege granted with a different scope to 'OEM_MONITOR'
ORA-06512: at line 6
ORA-06512: at line 2

DB System Platform: VM DB System

DB Version: 19.8

Solution:

Apply Patch 31796208 and rerun datapatch

 

Forms Issues

Issue | Solution
ORA-1017 Invalid username/password; logon denied and FRM-41352 Failed to create a new session when opening forms

 

 

Cause: Misconfiguration with the listener and registration of services:

Solution:

1. In a 19c database, the service_names parameter should be set to the container name.

     Connect to the CDB and run the following commands:

        alter system set SERVICE_NAMES='<CDBNAME>' scope=both;
        alter system register;

2. Verify that the PDB is registered with the listener.

      Connect to the PDB and check the value of the local_listener parameter. It should not be set at the PDB level and should be the same as the CDB value.

3. The PDB should have, at a minimum, the following services in 12.1 and 12.2:

  select name from v$active_services;

12.1:

  ebs_<PDB NAME>
 <PDB NAME>

12.2:

  ebs_<PDB NAME>
  <PDB NAME>
  <PDB_NAME>_ebs_patch

4. The listener should be started using the CDB environment file:

      cd $ORACLE_HOME

      . CDB_<HOSTNAME>.env

      lsnrctl stop <CDBNAME>

      lsnrctl start <CDBNAME>

5. You should see something similar to the following:

$ lsnrctl status

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 29-APR-2020 16:32:48

Copyright (c) 1991, 2019, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias VISCDB
Version TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date 22-APR-2020 21:27:31
Uptime 6 days 19 hr. 5 min. 17 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/oracle/db/tech/19.3/network/admin/listener.ora
Listener Log File /u01/oracle/db/tech/19.3/admin/PDB_HOSTNAME/diag/tnslsnr/HOSTNAME/cdb/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=HOSTNAME.<HOSTNAME>)(PORT=1521)))
Services Summary...
Service "2809223196ec2af8e053a740d20a4db6" has 1 instance(s).
Instance "CDB", status READY, has 1 handler(s) for this service...
Service "PDB" has 2 instance(s). -----> PDB service. Started when PDB is started
Instance "PDB", status UNKNOWN, has 1 handler(s) for this service...
Instance "CDB", status READY, has 1 handler(s) for this service...
Service "PDB_ebs_patch" has 1 instance(s). ----> Created during post PDB steps , needed for ADOP patching
Instance "CDB", status READY, has 1 handler(s) for this service...
Service "CDB" has 1 instance(s). ---> CDB service , not used by EBS but needed for connections to CDB
Instance "CDB", status READY, has 1 handler(s) for this service...
Service "CDBXDB" has 1 instance(s). --> optional XDB service
Instance "CDB", status READY, has 1 handler(s) for this service...
Service "ebs_PDB" has 1 instance(s). ---> Created during post PDB steps , needed for EBS
Instance "CDB", status READY, has 1 handler(s) for this service...
The command completed successfully

APP-FND-00706: Routine Detected a data error

 

 Cause: A cursor leak in the database. The following file version contains the buggy code:

$Header fdfdds.lc 120.23.12020000.20 2019/04/25 09:45:45 <USER> ship $

Solution: After Upgrading To 12.2.9 Forms Fail: "Error APP-FND-00706: Routine Detected A Data Error" (Doc ID 2629037.1)

ORA-04025: maximum allowed library object lock allocated for SELECT NLS_CHARSET_NAME

 The following errors are seen in the oacore log files during load testing:

oacore.default_group.2:292946577:-1:-1:GUEST(6):266001490:Thread[AJPRequestHandler-HTTPThreadGroup-6,5,HTTPThreadGroup]:1587728106:172.22.180.117:6160782:0:1341:ERROR:[fnd.common.ErrorStack]:java.sql.SQLException: ORA-20001: Oracle error -4025: ORA-04025: maximum allowed library object lock allocated for SELECT NLS_CHARSET_NAME(:B1 ) FROM SYS.DUAL
ORA-06512: at "SYS.STANDARD", line 270
ORA-06512: at "APPS.FND_GLOBAL", line 1219
 has been detected in fnd_global.set_nls.
ORA-06512: at "APPS.APP_EXCEPTION", line 72
ORA-06512: at "APPS.FND_GLOBAL", line 280
ORA-06512: at "APPS.FND_GLOBAL", line 1584
ORA-06512: at "APPS.FND_GLOBAL", line 2017
ORA-06512: at "APPS.FND_GLOBAL", line 2652
ORA-06512: at "APPS.FND_GLOBAL", line 2804
ORA-06512: at "APPS.FND_SESSION_MANAGEMENT", line 999
ORA-06512: at "APPS.FND_SESSION_MANAGEMENT", line 1304
ORA-06512: at "APPS.FND_AOLJ_UTIL", line 433
ORA-06512: at "APPS.FND_AOLJ_UTIL", line 329
ORA-06512: at line 1

    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:445)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
    at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:879)
    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:450)
    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:192)
    at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
    at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:204)

Apply database Patch 31820859: ORA-4025 DUE TO 65535 ACTIVE LOCKS LIMIT REACHED ON SELECT NLS_CHARSET_ID

 

FRM-40735: ON-ERROR Trigger raised unhandled exception ORA-01001

 

 

Solution: Using Oracle 19c RAC Multitenant (Single PDB) with Oracle E-Business Suite Release 12.1 (Doc ID 2530680.1)

 

ORA-65011 when opening forms after upgrading the EBS Apps database to 19c

 Similar to the ORA-1017 errors in 12.2.

Cause: The issue is related to service names. An E-Business Suite 12.1 database should have only one service name defined.

Solution:

The service_names parameter should point to the CDB only. Connect to the PDB and run the following SQL:

select name from v$active_services;

This should return the PDB name. If nothing is returned, shut down the EBS services, shut down the DB listener, close the PDB, bounce the CDB, start the DB listener, start the EBS services, and re-check. The service should be started automatically when the PDB is opened.

Login Issues

Issue | Solution
Intermittent login issues after 19c upgrade.

End users will see the following login issues:
- AppsLocalLogin.jsp shows a copyright date of 2014
- The login page renders without style sheets
- "The system has encountered an error"
- Uncaught exceptions

Apply database Patch 31820859

 

Upgrade Issues

Issue | Solution
Running noncdb_to_pdb.sql fails with ORA-01219 : DATABASE OR PLUGGABLE DATABASE NOT OPEN: QUERIES ALLOWED ON FIXED TABLES OR VIEWS ONLY.

When a non-CDB, having configured WALLET_ROOT and TDE_CONFIGURATION
is cloned or described/plugged as a PDB, opening the PDB (the PDB
created from non-CDB) fails with ORA-01219 : DATABASE OR PLUGGABLE
DATABASE NOT OPEN: QUERIES ALLOWED ON FIXED TABLES OR VIEWS ONLY

Cause:

The following bug has been identified in the EBS database TDE configuration. This issue occurs while performing the database upgrade to 19c.

BUG 31494095 - NONCDB_TO_PDB.SQL FAILS WITH ORA-01219: DATABASE OR PLUGGABLE DATABASE NOT OPEN: QUERIES ALLOWED ON FIXED TABLES OR VIEWS ONLY

Solution:

Patch 31494095

Autoconfig Run Fails With Error 'tackTrace: java.lang.NullPointerException at oracle.apps.ad.context.GenerateContext.getPDBCDB' Due To Missing 'SELECT_CATALOG_ROLE' In Adgrants.sql

When upgrading the database from 11.2.0.4 to 19c with EBS version R12 using Doc ID 2580629.1, Autoconfig failed with the following error:

"Autoconfig could not create context file."

 

R12 E-Business Suite Upgrade To 19c Autoconfig Run Fails With Error 'tackTrace: java.lang.NullPointerException at oracle.apps.ad.context.GenerateContext.getPDBCDB' Due To Missing 'SELECT_CATALOG_ROLE' In Adgrants.sql (Doc ID 2672303.1)

Autoconfig runs slower after the 19c upgrade.

The slow SQL was:
SELECT UO.object_name, aa.object_name procedure_name, aa.ARGUMENT_NAME
FROM user_objects uo, all_arguments aa
WHERE uo.object_name = 'FND_GSM_UTIL'
AND aa.object_id = uo.object_id
AND aa.object_name = 'UPLOAD_CONTEXT_FILE'
AND object_type = 'PACKAGE'
ORDER BY 1, 2, 3

The SQL was modified to select from user_arguments instead of all_arguments. The fix is in Patch 31138017:R12.TXK.C.

Refer to 12.2 E-Business Technology Stack Autoconfig Performance Issue After Upgrade to 19c Database (Doc ID 2684666.1) for more details.

Running Autoconfig errors out after the 19c upgrade on the application tier - Unable to read UTL_FILE_DIR parameter

ERROR DESCRIPTION:
*******FATAL ERROR*******
PROGRAM : (/fs2/EBSapps/appl/fnd/12.0.0/patch/115/bin/txkCfgUtlfileDir.pl)
TIME :
FUNCTION: main::getUtlFileDirParam [ Level 1 ]
ERRORMSG: Unable to read UTL_FILE_DIR parameter.

ERRORCODE = 1 ERRORCODE_END
.end std out.

The following select also fails to return results:

select value from v$parameter where name='utl_file_dir';

 

R12 E-Business Suite Technology Stack Post 19c Upgrade, Running Autoconfig Script txkCfgUtlfileDir.pl Fails With Fatal Error In FUNCTION: main::getUtlFileDirParam ERRORMSG: Unable to read UTL_FILE_DIR parameter (Doc ID 2685022.1)

Unable to call FND_LDAP_WRAPPER.CREATE USER  - ORA-06508: PL/SQL: could not find program unit being called

If you upgraded to 19c prior to June, you may need to apply Patch 31088182. While creating a user in EBS 12.2.4 after migrating to DB 19c, you can run into this problem:
Unable to call FND_LDAP_WRAPPER.CREATE USER due to the following reason:
   ORA-06508: PL/SQL: could not find program unit being called.
   (USER_NAME=<USER>) 


References:

NOTE:2567105.1 - FAQ: Oracle E-Business Suite and the Oracle Multitenant Architecture

NOTE:2826133.1 - Manually Re-import the Secret Key in E-Business Suite Database On Premises & Cloud
NOTE:2655839.2 - Information Center: Oracle Database 19c with Oracle E-Business Suite 12.2 and 12.1
NOTE:2580629.1 - Interoperability Notes: Oracle E-Business Suite Release 12.1 with Oracle Database 19c
NOTE:2683670.1 - Previous Oracle Database 19c Release Update Database Patch Lists for Oracle E-Business Suite
NOTE:2716307.1 - Refresh Collection Snapshots Running Long After DB Upgrade
NOTE:1594274.1 - Oracle E-Business Suite Release 12.2: Consolidated List of Oracle Database Patches and Bug Fixes
NOTE:2525754.1 - Using UTL_FILE_DIR or Database Directories for PL/SQL File I/O in Oracle E-Business Suite Releases 12.1 and 12.2

NOTE:2611336.1 - Massive Performance Degradation On Statistics For Large Partitioned Tables After Upgrade To 19c 

19c Database with Oracle E-Business Suite R12 Known Issues & Solutions (Doc ID 2662860.1)

Monday, March 18, 2024

Example Manual Migration of Oracle E-Business Suite Release 12.2 with Oracle Database 19c to Oracle Base Database Service DB Systems

 Example Manual Migration of Oracle E-Business Suite Release 12.2 with Oracle Database 19c to Oracle Base Database Service DB Systems (Doc ID 2743151.1)

Thursday, March 7, 2024

Loading data into Autonomous Data Warehouse using Datapump

 Oracle introduced Autonomous Data Warehouse over a year ago, and one of the most common questions customers ask me is how they can move their data/schemas to ADW (Autonomous Data Warehouse) with minimal effort. My answer is to use Data Pump, also known as expdp/impdp. ADW doesn't support traditional import and export, so you have to use Data Pump. Oracle suggests using the schemas and parallel parameters; set parallel according to the number of OCPUs in your ADW instance. Oracle also suggests excluding index, cluster, indextype, materialized_view, materialized_view_log, materialized_zonemap, and db_link objects, in order to save space and speed up the data load. At present you can only use DATA_PUMP_DIR as the directory option; this is the default directory created in ADW. You also don't have to worry about the performance of the database, since ADW uses technologies like storage indexes and machine learning to achieve optimal performance. You can use files stored in Oracle Object Storage, Amazon S3, or Azure Blob Storage as your dump file location. I will be using Oracle Object Storage in this article.

We will be using the steps below to load data:

1) Export the schema of your current database using expdp

2) Upload the .dmp file to object storage

3) Create Authentication Token 

4) Login into ADW using SQLPLUS

5) Create credentials in Autonomous Data Warehouse to connect to Object Storage

6) Run import using Datapump

7) Verify the import

Instead of writing more, let me show you how easy it is.
 

Step 1 : Export the schema of your current database using expdp

Use expdp against your current database to run the export, then copy the dump file to a location from which you can upload it to Object Storage.
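As a sketch only (the schema name, connect string, and file names below are placeholders, not values from the original post), a typical schema-mode export looks like:

```shell
# Export one schema from the source database. DATA_PUMP_DIR must be a valid
# directory object on the source; all names here are placeholders.
expdp system/<password>@SOURCEDB schemas=HR directory=DATA_PUMP_DIR \
      dumpfile=hrapps.dmp logfile=hrapps_exp.log
```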

Step 2 : Upload the .dmp file to object storage. 

In order to upload the .dmp file to Object Storage, log in to your cloud console and click Object Storage.

Once in Object Storage, select the compartment that you want to use and create a bucket. I am going to use the compartment "Liftandshift" and create the bucket "LiftandShiftADW".

Bucket in Object Storage

 

Next click on the bucket and click upload to upload the .dmp file.

Upload .dmp file

At this point you can use either the CLI (Command Line Interface) or the GUI (Graphical User Interface) to upload the .dmp file. If your .dmp file is larger than 2 GiB, then you have to use the CLI. I am going to use the GUI, since I have a small schema for demonstration purposes.

Select the .dmp file that you want to upload to Object Storage and then click Upload Object.

Once you're done, your .dmp file will show up under Objects in the Bucket Details section.
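For larger files, the CLI route mentioned above can be sketched as follows (bucket and object names are placeholders; this assumes the OCI CLI is installed and configured with your tenancy credentials):

```shell
# Upload the dump file to the bucket; the CLI switches to multipart upload
# automatically for large files. Names below are placeholders.
oci os object put --bucket-name LiftandShiftADW --file hrapps.dmp --name hrapps.dmp
```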

Step 3 : Create Authentication Token

The Authentication Token will let us access Object Storage from the Autonomous Database.

Under the Governance and Administration section, click the Identity tab and go to Users.

 

 

Click on the authorized user ID, and then click Auth Token under Resources on the left side to generate the Auth Token.

Click Generate Token, give it a description, and then click Generate Token again, and it will create the token for you.

You can click the Copy button and copy the token to a notepad. Remember to save the token, because once the dialog is closed you will not be able to retrieve it again. Once done, click the Close button on the screen.

Step 4 : Login into ADW using SQLPLUS

Go to the ADW homepage and click on the ADW database you have created.

Once on the database page, click DB Connection.

Click the Download button to download the wallet. Once the zip file is downloaded, click the Close button.

 

Download the latest version of instant-client from Oracle website : https://www.oracle.com/technetwork/database/database-technologies/instant-client/downloads/index.html

Unzip all the files into one location. I used "C:\instantclient\instantclient_18_3" on my system. Once unzipped, you will be able to use sqlplus.exe and impdp.exe at that location. Also move the compressed wallet file to that location and unzip it.

Next, update the entries in the sqlnet.ora file to point to the location of your wallet. I changed mine to "C:\instantclient\instantclient_18_3" as shown below.

Test the connectivity using sqlplus.exe and make sure you are able to connect using the user ID admin.
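As a sketch (the service name comes from the tnsnames.ora inside the downloaded wallet, and TNS_ADMIN must point at the directory containing the unzipped wallet files):

```shell
# Connect as the admin user over the wallet's high-priority service.
# "liftandshift_high" is one of the service aliases defined in the wallet.
sqlplus admin/<password>@liftandshift_high
```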

Step 5: Create credentials in Autonomous Data Warehouse to connect to Object Storage

Use the below script to create credentials in ADW, and use the Authentication token created earlier as the password.

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username => 'oracleidentitycloudservice/ankur.saini@oracle.com',
    password => '<auth_token>'  -- use the Authentication Token value here, not the account password
  );
END;
/

Step 6 : Run import using Datapump

Since my ADW instance is built with 1 OCPU, I won't be using the parallel option. I used the command below to run the import:

./impdp.exe admin/<Password>@liftandshift_high directory=data_pump_dir credential=def_cred_name dumpfile=https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/orasenatdoracledigital05/AnkurObject/hrapps.dmp exclude=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link

 

Step 7: Verify the import

Log in to the database using SQL*Plus or SQL Developer and verify the import.

You can see how easy it is to move the data to ADW, and that there is not a huge learning curve. Now you can be more productive and focus on your business.

Reference:

https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/load-data.html#GUID-297FE3E6-A823-4F98-AD50-959ED96E6969

 https://blogs.oracle.com/datawarehousing/post/loading-data-into-autonomous-data-warehouse-using-datapump

 
