Wednesday, August 17, 2022
How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS ? (Doc ID 336014.1)
SOLUTION
The jobs used in this example:
- Export job .EXPDP_20051121 is a schema level export that is running
- Export job .SYS_EXPORT_TABLE_01 is an orphaned table level export job
- Export job .SYS_EXPORT_TABLE_02 is a table level export job that was stopped
- Export job SYSTEM.SYS_EXPORT_FULL_01 is a full database export job that is temporarily stopped
Step 1. Determine in SQL*Plus which Data Pump jobs exist in the database:
%sqlplus /nolog
CONNECT / as sysdba
SET lines 200
COL owner_name FORMAT a10;
COL job_name FORMAT a20
COL state FORMAT a12
COL operation LIKE state
COL job_mode LIKE state
COL owner.object for a50
-- locate Data Pump jobs:
SELECT owner_name, job_name, rtrim(operation) "OPERATION",
rtrim(job_mode) "JOB_MODE", state, attached_sessions
FROM dba_datapump_jobs
WHERE job_name NOT LIKE 'BIN$%'
ORDER BY 1,2;
OWNER_NAME JOB_NAME OPERATION JOB_MODE STATE ATTACHED
---------- ------------------- --------- --------- ----------- --------
EXPDP_20051121 EXPORT SCHEMA EXECUTING 1
SYS_EXPORT_TABLE_01 EXPORT TABLE NOT RUNNING 0
SYS_EXPORT_TABLE_02 EXPORT TABLE NOT RUNNING 0
SYSTEM SYS_EXPORT_FULL_01 EXPORT FULL NOT RUNNING 0
Step 2. Ensure that the jobs listed in dba_datapump_jobs are not active export/import Data Pump jobs: their state should be 'NOT RUNNING'.
Step 3. Check with the job owner that a job with status 'NOT RUNNING' in dba_datapump_jobs is not an export/import Data Pump job that has been temporarily stopped, but is actually a job that failed. (E.g. the full database export job by SYSTEM is not a failed job; it was deliberately paused with STOP_JOB.)
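Once the listing is spooled to a file, the candidate orphaned jobs (state NOT RUNNING with zero attached sessions) can be picked out with a quick filter. A minimal sketch over sample data shaped like the output above — the state value is collapsed to a single token here for easy field splitting:

```shell
# Sketch: filter candidate orphaned jobs from a spooled dba_datapump_jobs
# listing. The sample data mirrors the example output above; the state
# "NOT RUNNING" is written as one token so awk can split on whitespace.
cat > /tmp/dp_jobs.txt <<'EOF'
EXPDP_20051121 EXECUTING 1
SYS_EXPORT_TABLE_01 NOT_RUNNING 0
SYS_EXPORT_TABLE_02 NOT_RUNNING 0
SYS_EXPORT_FULL_01 NOT_RUNNING 0
EOF
# Print only jobs that are not running and have no attached sessions.
awk '$2 == "NOT_RUNNING" && $3 == 0 { print $1 }' /tmp/dp_jobs.txt
```

Remember that a NOT RUNNING job may simply be stopped on purpose (step 3), so the filtered list is a list of candidates, not of jobs to delete blindly.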
Step 4. Identify orphaned DataPump external tables. To do this, run the following as SYSDBA in SQL*Plus:
set linesize 200 trimspool on
set pagesize 2000
col owner form a30
col created form a25
col last_ddl_time form a25
col object_name form a30
col object_type form a25
select OWNER,OBJECT_NAME,OBJECT_TYPE, status,
to_char(CREATED,'dd-mon-yyyy hh24:mi:ss') created ,to_char(LAST_DDL_TIME , 'dd-mon-yyyy hh24:mi:ss') last_ddl_time
from dba_objects
where object_name like 'ET$%'
/
select owner, TABLE_NAME, DEFAULT_DIRECTORY_NAME, ACCESS_TYPE
from dba_external_tables
order by 1,2
/
Correlate the information from DBA_OBJECTS and DBA_EXTERNAL_TABLES above to identify the temporary external tables that belong to the orphaned DataPump jobs.
Drop the temporary external tables that belong to the orphaned DataPump job, e.g.:
SQL> drop table system.&1 purge;
Enter value for 1: ET$00654E1E0001
old 1: drop table system.&1 purge
new 1: drop table system.ET$00654E1E0001 purge
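With several leftover ET$ tables, the DROP statements can be generated rather than typed one by one. A minimal sketch over a sample owner/name list — the values below are taken from this example; in practice the list would be spooled from the dba_objects query above:

```shell
# Sketch: emit DROP ... PURGE statements for a list of orphaned DataPump
# external tables. Owner/table values are sample data from this example.
cat > /tmp/et_tables.txt <<'EOF'
SYSTEM ET$00654E1E0001
SYSTEM ET$00654E1E0002
EOF
awk '{ printf "drop table %s.%s purge;\n", $1, $2 }' /tmp/et_tables.txt
```

The generated statements can then be reviewed and run in SQL*Plus as SYSDBA.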
Step 5. Determine in SQL*Plus the related master tables:
-- locate Data Pump master tables:
COL owner.object FORMAT a50
SELECT o.status, o.object_id, o.object_type,
o.owner||'.'||object_name "OWNER.OBJECT"
FROM dba_objects o, dba_datapump_jobs j
WHERE o.owner=j.owner_name AND o.object_name=j.job_name
AND j.job_name NOT LIKE 'BIN$%' ORDER BY 4,2;
STATUS OBJECT_ID OBJECT_TYPE OWNER.OBJECT
------- ---------- ------------ -------------------------
VALID 85283 TABLE .EXPDP_20051121
VALID 85215 TABLE .SYS_EXPORT_TABLE_02
VALID 85162 TABLE SYSTEM.SYS_EXPORT_FULL_01
select table_name, owner from dba_external_tables;
Step 6. For jobs that were stopped in the past and won't be restarted anymore, delete the master table. E.g.:
DROP TABLE .sys_export_table_02;
-- For systems with recycle bin additionally run:
purge dba_recyclebin;
Note:
=====
Following statement can be used to generate the drop table statement for the master table:
SELECT 'DROP TABLE '||o.owner||'.'||object_name||' PURGE;'
FROM dba_objects o, dba_datapump_jobs j
WHERE o.owner=j.owner_name AND o.object_name=j.job_name
AND j.job_name NOT LIKE 'BIN$%';
NOTE:
In case the table name is mixed case, you can get errors on the drop, e.g.:
SQL> drop table SYSTEM.impdp_schema_TEST_10202014_0;
drop table SYSTEM.impdp_schema_TEST_10202014_0
*
ERROR at line 1:
ORA-00942: table or view does not exist
Because the table name is mixed case, retry the statement with double quotes around the table name, for instance:
drop table SYSTEM."impdp_SCHEMA_TEST_04102015_1";
drop table SYSTEM."impdp_schema_TEST_10202014_0";
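The quoting can also be generated mechanically. A small sketch that wraps each master table name, exactly as stored in dba_objects, in double quotes — safe for names stored in upper case and required for mixed-case ones (the names are taken from the examples above):

```shell
# Sketch: always double-quote the exact object_name when generating DROP
# statements, so mixed-case master tables drop cleanly. Sample names are
# from the note above.
for t in 'impdp_SCHEMA_TEST_04102015_1' 'impdp_schema_TEST_10202014_0'; do
  printf 'drop table SYSTEM."%s";\n' "$t"
done
```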
Step 7. Re-run the query on dba_datapump_jobs and dba_objects (step 1 and 4). If there are still jobs listed in dba_datapump_jobs, and these jobs do not have a master table anymore, cleanup the job while connected as the job owner. E.g.:
CONNECT /
SET serveroutput on
SET lines 100
DECLARE
h1 NUMBER;
BEGIN
h1 := DBMS_DATAPUMP.ATTACH('SYS_EXPORT_TABLE_01','');
DBMS_DATAPUMP.STOP_JOB (h1);
END;
/
Note that after the call to the STOP_JOB procedure, it may take some time for the job to be removed. Query the view user_datapump_jobs to check whether the job has been removed:
CONNECT /
SELECT * FROM user_datapump_jobs;
Step 8. Confirm that the job has been removed:
CONNECT / as sysdba
SET lines 200
COL owner_name FORMAT a10;
COL job_name FORMAT a20
COL state FORMAT a12
COL operation LIKE state
COL job_mode LIKE state
COL owner.object for a50
-- locate Data Pump jobs:
SELECT owner_name, job_name, rtrim(operation) "OPERATION",
rtrim(job_mode) "JOB_MODE", state, attached_sessions
FROM dba_datapump_jobs
WHERE job_name NOT LIKE 'BIN$%'
ORDER BY 1,2;
OWNER_NAME JOB_NAME OPERATION JOB_MODE STATE ATTACHED
---------- ------------------- --------- --------- ----------- --------
EXPDP_20051121 EXPORT SCHEMA EXECUTING 1
SYSTEM SYS_EXPORT_FULL_01 EXPORT FULL NOT RUNNING 0
-- locate Data Pump master tables:
SELECT o.status, o.object_id, o.object_type,
o.owner||'.'||object_name "OWNER.OBJECT"
FROM dba_objects o, dba_datapump_jobs j
WHERE o.owner=j.owner_name AND o.object_name=j.job_name
AND j.job_name NOT LIKE 'BIN$%' ORDER BY 4,2;
STATUS OBJECT_ID OBJECT_TYPE OWNER.OBJECT
------- ---------- ------------ -------------------------
VALID 85283 TABLE .EXPDP_20051121
VALID 85162 TABLE SYSTEM.SYS_EXPORT_FULL_01
Remarks:
1. Orphaned Data Pump jobs do not have an impact on new Data Pump jobs. The view dba_datapump_jobs is based on gv$datapump_job, obj$, com$, and user$. It shows the Data Pump jobs that are still running, jobs for which the master table was deliberately kept in the database, and orphaned jobs left behind by an abnormal end of a Data Pump job. If a new Data Pump job is started, a new entry will be created, which has no relation to the old Data Pump jobs.
2. When starting a new Data Pump job with a system-generated name, the names of the existing Data Pump jobs in dba_datapump_jobs are checked in order to obtain a unique new system-generated job name. Naturally, there needs to be enough free space for the new master table to be created in the schema that started the new Data Pump job.
3. A Data Pump job is not the same as a job defined with DBMS_JOBS. Jobs created with DBMS_JOBS use their own processes. Data Pump jobs use a master process and worker process(es). In case a Data Pump job is temporarily stopped (STOP_JOB while in interactive command mode), the job still exists in the database (status: NOT RUNNING), while the master and worker process(es) are stopped and no longer exist. The client can attach to the job at a later time and continue the job execution (START_JOB).
4. The possibility of corruption when the master table of an active Data Pump job is deleted, depends on the Data Pump job.
4.a. If the job is an export job, corruption is unlikely as the drop of the master table will only cause the Data Pump master and worker processes to abort. This situation is similar to aborting an export of the original export client.
4.b. If the job is an import job, the situation is different. When the master table is dropped, the Data Pump worker and master processes will abort. This will probably lead to an incomplete import: e.g. not all table data was imported, a table was imported incompletely, and indexes, views, etc. are missing. This situation is similar to aborting an import of the original import client.
The drop of the master table itself does not lead to any data dictionary corruption. If you keep the master table after the job completes (using the undocumented parameter KEEP_MASTER=Y), then dropping the master table afterwards will not cause any corruption.
5. Instead of the status 'NOT RUNNING' the status of a failed job could also be 'DEFINING'. When trying to attach to such a job, this would fail with:
$ expdp system/ attach=system.sys_export_schema_01
Export: Release 11.2.0.4.0 - Production on Tue Jan 27 10:14:27 2015
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.KUPV$FT", line 405
ORA-31638: cannot attach to job SYS_EXPORT_SCHEMA_01 for user SYSTEM
ORA-31632: master table "SYSTEM.SYS_EXPORT_SCHEMA_01" not found, invalid, or inaccessible
ORA-00942: table or view does not exist
The steps to cleanup these failed/orphaned jobs are the same as mentioned above.
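The uniqueness check described in remark 2 can be sketched outside the database. A minimal illustration — the name list below is a made-up sample; the real check is done by Data Pump against dba_datapump_jobs:

```shell
# Sketch: derive the next free system-generated job name by scanning a list
# of existing names (sample data; Data Pump does this against the
# dba_datapump_jobs view).
existing="SYS_EXPORT_SCHEMA_01
SYS_EXPORT_SCHEMA_02"
n=1
while echo "$existing" | grep -q "$(printf 'SYS_EXPORT_SCHEMA_%02d' $n)"; do
  n=$((n + 1))
done
printf '%s\n' "$(printf 'SYS_EXPORT_SCHEMA_%02d' $n)"
```

This is why a leftover orphaned job only influences which number the next system-generated name gets; it does not block new jobs.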
Friday, August 12, 2022
RMAN DUPLICATE STANDBY
run
{
allocate channel tgt1 device type disk ;
allocate channel tgt2 device type disk ;
allocate channel tgt3 device type disk ;
allocate channel tgt4 device type disk ;
allocate channel tgt5 device type disk ;
allocate channel tgt6 device type disk ;
allocate channel tgt7 device type disk ;
allocate channel tgt8 device type disk ;
allocate channel tgt9 device type disk ;
allocate channel tgt10 device type disk ;
allocate channel tgt11 device type disk ;
allocate channel tgt12 device type disk ;
allocate channel tgt13 device type disk ;
allocate channel tgt14 device type disk ;
allocate channel tgt15 device type disk ;
allocate channel tgt16 device type disk ;
allocate auxiliary channel aux1 device type disk ;
allocate auxiliary channel aux2 device type disk ;
allocate auxiliary channel aux3 device type disk ;
allocate auxiliary channel aux4 device type disk ;
allocate auxiliary channel aux5 device type disk ;
allocate auxiliary channel aux6 device type disk ;
allocate auxiliary channel aux7 device type disk ;
allocate auxiliary channel aux8 device type disk ;
allocate auxiliary channel aux9 device type disk ;
allocate auxiliary channel aux10 device type disk ;
allocate auxiliary channel aux11 device type disk ;
allocate auxiliary channel aux12 device type disk ;
allocate auxiliary channel aux13 device type disk ;
allocate auxiliary channel aux14 device type disk ;
allocate auxiliary channel aux15 device type disk ;
allocate auxiliary channel aux16 device type disk ;
DUPLICATE TARGET DATABASE
FOR STANDBY
FROM ACTIVE DATABASE
DORECOVER
SPFILE
SET db_unique_name='cvs19qas_stby' COMMENT 'Is standby'
set DB_CREATE_FILE_DEST = '+DATAC1'
set DB_CREATE_ONLINE_LOG_DEST_1 = '+DATAC1'
set DB_RECOVERY_FILE_DEST = '+RECOC1'
set diagnostic_dest='/u02/app/oracle'
set DB_RECOVERY_FILE_DEST_SIZE='1500G'
set audit_file_dest='/u02/app/oracle/product/19.0.0.0/dbhome_1/admin/cvs19qas/adump'
SET job_queue_processes='0'
SET LOCAL_LISTENER=''
SET REMOTE_LISTENER=''
set cluster_database='false'
NOFILENAMECHECK;
}
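The 16 TARGET and 16 AUXILIARY channel allocations in the run block above are repetitive; a small shell sketch can generate them instead of typing each line (the channel names match the script above):

```shell
# Sketch: generate the repetitive ALLOCATE CHANNEL lines of the RMAN
# DUPLICATE script; the output can be pasted into the run block.
for i in $(seq 1 16); do
  echo "allocate channel tgt$i device type disk ;"
done
for i in $(seq 1 16); do
  echo "allocate auxiliary channel aux$i device type disk ;"
done
```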
Monday, August 1, 2022
How to drop other schema’s Database link using SYS user | Drop db_link from other user
SQL> select owner,db_link from dba_db_links;
OWNER DB_LINK
------------------------------ --------------------------------------
TEST TEST_DB_LINK
SQL> sho user
USER is "SYS"
SQL>
SQL> drop database link test.test_db_link;
drop database link test.test_db_link
*
ERROR at line 1:
ORA-02024: database link not found
[test@test.test.com ~]$ cat drop_schema_dblink.sh
username=$1
db_link=$2
sqlplus -s /nolog <<EOF
CONNECT / AS SYSDBA
prompt " DB Link Before Drop"
select owner, db_link from dba_db_links where owner = upper('$username');
create or replace procedure $username.drop_db_link as
begin
execute immediate 'drop database link $db_link';
end;
/
exec $username.drop_db_link
drop procedure $username.drop_db_link;
prompt " DB Link After Drop"
select owner, db_link from dba_db_links where owner = upper('$username');
exit
EOF
[test@test.test.com ~]$ sh drop_schema_dblink.sh TEST TEST_DB_LINK
SQL> Connected.
SQL> " DB Link Before Drop"
SQL> SQL> SQL>
OWNER DB_LINK
------------------------------ ------------------------------
TEST TEST_DB_LINK
SQL> 2 3 4 5
Procedure created.
SQL>
PL/SQL procedure successfully completed.
SQL>
Procedure dropped.
SQL> " DB Link After Drop"
SQL>
no rows selected
SQL>
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
Saturday, July 2, 2022
Critical Patch Update (CPU) Patch Advisor for Oracle Fusion Middleware - Updated for April 2022 (Doc ID 2806740.2)
This document provides a cumulative list of security patches for Oracle Fusion Middleware products that are under error correction support and released with the Critical Patch Update (CPU) program. The intent of this document is to complement the most current Patch Availability Document to outline all patches released for an Oracle Fusion Middleware home; typically more than a single product patch and spanning multiple CPU releases.
TIP: Bookmark or Favorite this document! This document will always be used to supply the latest CPU patching advice.
ADVISOR WEBCAST: Doc ID 2849657.2 MW - Introducing the New Middleware Critical Patch Update (CPU) Patch Advisor on March 31, 2022
Wednesday, May 25, 2022
EBS Asserter IDCS
IDCS EBS Asserter Cheat Sheet
Introduction
I worked with a number of customers to resolve issues with their E-Business Suite single sign-on setup using the IDCS asserter solution, and saw a pattern in the misconfigurations causing similar types of issues.
In this blog, I want to share the basic EBS asserter flows to understand the runtime better and the commonly faced issues along with their resolutions.
I hope this post will save you a lot of time and effort should you have any of these issues or will serve as a first-time guide to get this integration working.
The starting point for anyone trying to set up EBS SSO using the asserter can be found here - https://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/idcs/ebs_asserter_obe/ebs-asserter.html
Most customers are comfortable configuring the asserter and making the login/logout flows work as per the document, but when it comes to troubleshooting, a deeper understanding of the asserter flows helps.
So, here are the two important flows:
Login flow
In the login flow, the EBS asserter makes a backend call to the IDCS service and also to the EBS DB to establish the EBS session. So, make sure the asserter host can reach both IDCS and the EBS DB.
Logout Flow
This is a fairly simple flow. At the end of the logout flow, a user should land on the configured post logout URL.
Important Steps During Setup
Datasource connection
At setup time many customers may struggle a bit with the datasource configuration step.
Apart from the usual database connection parameters such as host, port and SID/service name, the following are the important things to verify when creating a datasource used by the asserter to connect to the EBS DB:
The driver class - If you use a non-XA datasource, type oracle.apps.fnd.ext.jdbc.datasource.AppsDataSource. If you are using an XA datasource, type oracle.apps.fnd.ext.jdbc.datasource.AppsXADataSource. Make sure that the driver jar file is copied to /lib and the admin server has been restarted.
The user - An EBS user with the UMX|APPS_SCHEMA_CONNECT role. For security reasons, do not use the "APPS" DB user; this user has the highest privileges, which are not required by the asserter.
DBC file - A customer reported getting a NullPointerException when creating the data source. The issue turned out to be a missing DBC file on the asserter machine: the DBC file was created on the EBS system but was never copied over to the asserter machine.
Asserter Bridge.properties
The document does a good job of explaining all the parameters configured.
Pay extra attention to -
whitelist.urls - Lists the URL E-Business Suite Asserter can accept as the requestUrl parameter value.
When a login session is initiated from a page other than the EBS home page, the asserter gets the "requestUrl" parameter in the request and knows where to redirect upon successful authentication. The asserter keeps a list of the allowed URLs. If the whitelist does not contain the incoming "requestUrl", the asserter will redirect the user to the configured home page.
post.logout.url - Mention the URL where a user should be redirected to upon EBS app logout. This must match the post redirect URL configured in IDCS for the asserter application.
Context root and cookie path
Customers may need to deploy the asserter WAR file multiple times on the same WebLogic server with different context roots (e.g. devsso, testsso, etc.), or a single WAR with a different context root than the default one.
Make sure to match the cookie path with the new context root in Weblogic.xml before you deploy the asserter.
<session-descriptor>
<cookie-http-only>true</cookie-http-only>
<cookie-name>ASSERTERSESSIONID</cookie-name>
<cookie-path>/ebs</cookie-path>
</session-descriptor>
Common issues
1) The number one issue reported by customers is with the WebADI templates. WebADI (Web Applications Desktop Integrator) provides the use of Microsoft Office applications such as Excel, Word, and Project to upload data to Oracle E-Business Suite. It initiates the login flow in an embedded browser. The authentication flow goes through as expected, but users don't land on the expected page: when WebADI users click the "upload document" button, they are taken to the EBS homepage rather than to the "user responsibilities" selection page.
This happens when the asserter whitelist URLs are not configured correctly, either the WebADI specific URLs (like "/OA_HTML/BneApplicationService") are missing or there is a typo in the configuration. Another reason could be that an older version of the asserter is being used.
Make sure you download the latest asserter from your IDCS tenancy and verify the whitelist URLs configured in bridge.properties.
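The whitelist behaviour can be sanity-checked offline. A minimal sketch that tests whether an incoming requestUrl appears in whitelist.urls — the bridge.properties content and URLs below are made-up samples for illustration:

```shell
# Sketch: check an incoming requestUrl against the asserter whitelist.
# The bridge.properties content below is a made-up sample.
cat > /tmp/bridge.properties <<'EOF'
whitelist.urls=https://ebs.example.com:8001/OA_HTML/OA.jsp,https://ebs.example.com:8001/OA_HTML/BneApplicationService
EOF
request_url="https://ebs.example.com:8001/OA_HTML/BneApplicationService"
whitelist=$(sed -n 's/^whitelist\.urls=//p' /tmp/bridge.properties)
case ",$whitelist," in
  *",$request_url,"*) echo "allowed" ;;
  *) echo "not in whitelist - user will be sent to the home page" ;;
esac
```

A typo in a whitelist entry fails this exact-match test the same way a missing entry does, which is why WebADI users end up on the home page.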
2) User does not land on the expected URL post logout
The user clicks on “logout” button in EBS application and lands on the IDCS login page instead of the one configured as a “post.logout.url” in the asserter configuration.
The issue is that the asserter does not find the session cookie, but initiates IDCS logout anyway, without the necessary request data. IDCS session logout happens and the user is taken to the IDCS global session logout URL instead of the asserter application-specific post logout URL.
Verify the asserter cookie path configured in "weblogic.xml" file within the asserter WAR.
The cookie path must match the context root used for the asserter deployment. For example, if you used “/dev/asserter” as the context root, make sure the cookie path is set to “/dev/asserter” too.
Summary
Verify the asserter version and configurations mentioned in the tutorial and in this blog and your EBS SSO setup should be a cakewalk.
https://www.ateam-oracle.com/post/idcs-ebs-asserter-cheat-sheet
https://docs.rackspace.com/blog/set-up-oracle-access-manager-for-ebusiness-suite/
Friday, April 8, 2022
Implications of latest Oracle WebLogic Server connection filters for Oracle EBS r12.2 customers
This error will look familiar to many Apps DBAs who have recently -
1. Applied the April 2019 CPU Patch
2. Upgraded to the latest AD and TXK level (C.11)
Recently I applied Oracle E-Business Suite Technology Stack Delta 11 on an EBS r12.2.7 implementation.
After applying the patch we ran sanity checks, and when trying to open the WebLogic Server console I got this -
The Server is not able to service this request: [Socket:000445]Connection rejected, filter blocked Socket, weblogic.security.net.FilterException: [Security:090220]rule 2
Below are the My Oracle Support documents Apps DBAs should ideally refer to for resolving this -
Alternative Methods to Allow Access to Oracle WebLogic Server Administration Console from Trusted Hosts for Oracle E-Business Suite Release 12.2 (Doc ID 2542826.1)
ORA-12547 While Client Connecting Via SSH Tunnel (Doc ID 454252.1)
The Oracle Community also has a few tips for this -
https://community.oracle.com/message/15413970#15413970
Troubleshooting Part -
cd $FMW_HOME/user_projects/domains/EBS_domain_/config/
cat config.xml | grep connection-filter
<connection-filter>oracle.apps.ad.tools.configuration.wls.filter.EBSConnectionFilterImpl</connection-filter>
<connection-filter-rule>appsnode * * allow</connection-filter-rule>
<connection-filter-rule>0.0.0.0/0 * * deny</connection-filter-rule>
We will be exploring all three options with real-time use cases.
Option 1: Adding Specific Trusted Hosts
1. This can be done by using the context variable s_wls_admin_console_access_nodes.
A comma-separated set of IPs/hostnames (FQDNs) can be used as follows to allow a set of system administrators/WebLogic administrators to access the console -
host1.domain.com,host2.domain.com
2. Execute autoconfig on the run filesystem.
3. Stop and start the Oracle WebLogic admin server:
adadminsrvctl.sh stop
adadminsrvctl.sh start
4. Perform fs_clone to synchronize the filesystems:
adop phase=fs_clone
Option 2: Allowing an IP Range
This option will be available after applying Patch 29781255:R12.TXK.C.
There will be requirements where you need to provide an IP range, and it is important to first understand how CIDR works.
A CIDR block with prefix length NN contains 2^(32-NN) addresses, so shortening the prefix by 2 bits multiplies the range by 4 (4, 16, 64, 256, ...).
Sample prefixes for narrowing down an IP range -
195.168.1.32/24 ---> 256 IP Hosts
195.168.1.32/26 ---> 64 IP Hosts
195.168.1.32/28 ---> 16 IP Hosts
195.168.1.32/30 ---> 04 IP Hosts
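The counts above follow from the prefix length: a /NN block contains 2^(32-NN) addresses. A small shell sketch, using the 195.168.1.32/30 example, computes the block size and its last address:

```shell
# Sketch: compute the size and range of a CIDR block with integer math.
# Values match the 195.168.1.32/30 example above.
ip="195.168.1.32"; prefix=30
hosts=$(( 1 << (32 - prefix) ))
# Convert the dotted quad to an integer, add hosts-1, convert back.
IFS=. read -r a b c d <<EOF
$ip
EOF
start=$(( (a << 24) + (b << 16) + (c << 8) + d ))
end=$(( start + hosts - 1 ))
printf '%s/%s -> %d addresses: %s - %d.%d.%d.%d\n' "$ip" "$prefix" "$hosts" "$ip" \
  $(( (end >> 24) & 255 )) $(( (end >> 16) & 255 )) $(( (end >> 8) & 255 )) $(( end & 255 ))
```

For a /30 this prints 4 addresses ending at 195.168.1.35, matching the range used later in the post.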
As a standard practice, I first checked whether the patch was already applied.
Query 1-
set lines 1000
col APPLIED_FILE_SYSTEM_BASE for a40
SELECT b.bug_number, asp.adop_session_id, asp.bug_number patch#,
asp.session_type, asp.applied_file_system_base,
asp.start_date, asp.end_date
FROM ad_bugs b, ad_patch_run_bugs prb, ad_patch_runs pr,
ad_patch_drivers pd, ad_adop_session_patches asp
WHERE b.bug_id = prb.bug_id
AND prb.patch_run_id = pr.patch_run_id
AND pr.patch_driver_id = pd.patch_driver_id
AND pr.patch_run_id = asp.patchrun_id
AND prb.applied_flag = 'Y'
AND b.bug_number = '&bug_num';
Enter value for bug_num: 29781255
old 11: AND b.bug_number = '&bug_num'
new 11: AND b.bug_number = '29781255'
no rows selected
Query 2-
SELECT adb.bug_number,ad_patch.is_patch_applied('122', 1045, adb.bug_number)
FROM ad_bugs adb
WHERE adb.bug_number in (29781255);
Query 3-
select ad_patch.is_patch_applied('R12',-1,29781255) from dual;
Once the patch was applied, we updated the CONTEXT file on the run fs as follows -
cat $CONTEXT_FILE | grep wls | grep nodes
<wls_admin_console_access_nodes oa_var="s_wls_admin_console_access_nodes">195.168.1.32/30</wls_admin_console_access_nodes>
This maps to the below set of 4 IPs -
195.168.1.32-to-195.168.1.35
Note: An easy way to calculate a range of IPs is to use an online calculator -
https://www.ipaddressguide.com/cidr
Execute autoconfig and restart the admin server for the changes to take effect.
Option 3: Accessing the Console Through an SSH Tunnel
SSH tunneling is a prerequisite here, and I achieved it using PuTTY.
For a lab environment using static IPs, this can simply be achieved as below:
1. Session -> Logging
Provide the destination IP address and keep the SSH port as 22.
Save the session with a name so it can be loaded later for future reference.
2. Connection -> SSH -> Tunnels
Provide the source port: a port on the client machine that is open and not blocked or used by any other application.
In my case it was 81.
Provide the destination as Hostname:Port and click on 'Add'.
3. Go back to Session and save it.
4. If you are not intending to log on to the server, you can use the relevant option under 'SSH'.
5. Log in to the saved session and monitor the event log for the PuTTY session.
6. A few more settings are required on your web browser; I used Firefox here, pointing it at 127.0.0.1 (localhost) and the tunnel port.
Clear the browser cache and try the console again -
https://maazdba.blogspot.com/2019/09/implications-of-latest-oracle-weblogic.html
Wednesday, March 2, 2022
Here’s how a 22-year-old made $1 million by selling NFT selfies
https://indianexpress.com/article/technology/crypto/heres-how-a-22-year-old-made-1-million-by-selling-nft-selfies-7728112/
https://photodoto.com/create-nft-from-photo/