Friday, July 21, 2023

Out of Place (OOP) Patching of Oracle Restart 21c

A previous post showed OOP patching for 19c Oracle Restart. Things are much simpler in 21c, and the same can be expected for 23c once released. The -switchGridHome option is supported for Oracle Restart in 21c (Oracle doc here). As such, OOP patching is simply a matter of installing a new GI home in a different location with the -switchGridHome option.
The current GI home is /opt/codegen/app/oracle/product/21.x.0/grid and its release patch level is as follows:
crsctl query has releasepatch
Oracle Clusterware release patch level is [3414221900] and the complete list of patches [35132583 35134934 35134943 35149778 35222143 35226235 ] have been applied on the local node. The release patch string is [21.10.0.0.0].


Run gridSetup.sh with the -switchGridHome option, applying any RU and one-off patches. This step doesn't result in any downtime.
./gridSetup.sh -silent -switchGridHome  -applyRU /opt/installs/patches/35427907
Preparing the home to patch...
Applying the patch /opt/installs/patches/35427907...
Successfully applied the patch.
The log can be found at: /opt/codegen/app/oraInventory/logs/GridSetupActions2023-07-21_01-55-36PM/installerPatchActions_2023-07-21_01-55-36PM.log
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the log of this install session at:
 /opt/codegen/app/oraInventory/logs/GridSetupActions2023-07-21_01-55-36PM/gridSetupActions2023-07-21_01-55-36PM.log

As a root user, execute the following script(s):
        1. /opt/codegen/app/oracle/product/21.11.0/grid/root.sh

Execute /opt/codegen/app/oracle/product/21.11.0/grid/root.sh on the following nodes:
[ip-172-31-10-193]
When prompted, run root.sh. This is where the grid home switching happens and where the downtime occurs: in the course of running root.sh, the HAS stack is brought down in the old GI home and started from the new GI home.
/opt/codegen/app/oracle/product/21.11.0/grid/root.sh
Check /opt/codegen/app/oracle/product/21.11.0/grid/install/root_ip-172-31-10-193.eu-west-1.compute.internal_2023-07-21_14-06-15-742998108.log for the output of root script
The output in the log file shows the prepatch and postpatch steps run against the new GI home.
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/codegen/app/oracle/product/21.11.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/codegen/app/oracle/product/21.11.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/codegen/app/oracle/crsdata/ip-172-31-10-193/crsconfig/hapatch_2023-07-21_02-06-16PM.log
2023/07/21 14:06:18 CLSRSC-347: Successfully unlock /opt/codegen/app/oracle/product/21.11.0/grid
2023/07/21 14:06:18 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.
Using configuration parameter file: /opt/codegen/app/oracle/product/21.11.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/codegen/app/oracle/crsdata/ip-172-31-10-193/crsconfig/hapatch_2023-07-21_02-06-18PM.log
2023/07/21 14:07:16 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2023/07/21 14:08:01 CLSRSC-672: Post-patch steps for patching GI home successfully completed.


Looking at the currently running processes shows the GI-related processes running out of the new GI home.
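One way to check is to filter a process listing on the new home path (the exact ps options are a matter of preference; this is just a sketch):
ps ax | grep /opt/codegen/app/oracle/product/21.11.0/grid | grep -v grep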
  32070 ?        Ssl    0:02 /opt/codegen/app/oracle/product/21.11.0/grid/bin/ohasd.bin reboot _ORA_BLOCKING_STACK_LOCALE=AMERICAN_AMERICA.AL32UTF8
  32281 ?        Ssl    0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/oraagent.bin
  32308 ?        Ssl    0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/evmd.bin
  32312 ?        Ss     0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
  32368 ?        Ssl    0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/evmlogger.bin -o /opt/codegen/app/oracle/product/21.11.0/grid/log/[HOSTNAME]/evmd/evmlogger.info -l /opt/codegen/app/oracle/product/21.11.0/grid/log/[HOSTNAME]
  32384 ?        Ssl    0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/cssdagent
  32423 ?        Ssl    0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/onmd.bin
  32425 ?        Ssl    0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/ocssd.bin
The release patch is now 21.11.
 crsctl query has releasepatch
Oracle Clusterware release patch level is [1435465441] and the complete list of patches [35428978 35442014 35442022 35442029 35550598 35589155 ] have been applied on the local node. The release patch string is [21.11.0.0.0].

Unlike in 19c, no manual work is needed to update the Oracle inventory. The new GI home is automatically added with CRS="true", and CRS="true" is removed from the old home during the GI home switch process.
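These entries can be viewed in the central inventory file (assuming the standard inventory.xml under the oraInventory location seen in the install logs above):
grep "HOME NAME" /opt/codegen/app/oraInventory/ContentsXML/inventory.xml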
<HOME NAME="OraGI21Home1" LOC="/opt/codegen/app/oracle/product/21.x.0/grid" TYPE="O" IDX="1"/>
<HOME NAME="OraGI21Home2" LOC="/opt/codegen/app/oracle/product/21.11.0/grid" TYPE="O" IDX="4" CRS="true"/>

Related Posts
Out of Place (OOP) Patching of Oracle Restart

Wednesday, June 14, 2023

Upgrading 11.2.0.4 to 19c Using AutoUpgrade

There is an earlier post which shows how to upgrade an 11.2 Oracle Restart to 19c. However, in that post the database was upgraded using DBUA and left as a non-CDB. AutoUpgrade (AU) can also be used to convert a non-CDB into a PDB of a CDB. This post shows how to do both the upgrade and the conversion in a single run of AU.
Below is the configuration file used in this case.
global.autoupg_log_dir=/home/oracle/upgr_log

# ----- NonCDB to PDB conversion -----
# To upgrade and convert an existing NonCDB database into a PDB of a target CDB,
# use the target_cdb parameter to specify the destination CDB.
# The target_pdb_name and target_pdb_copy_option parameters can be used
# to determine how each PDB is created on the target CDB.
#
# When neither of these options are used, a full upgrade of the source DB/CDB is performed.
#

upg1.sid=testupg
upg1.source_home=/opt/app/oracle/product/11.2.0/dbhome_1
upg1.target_home=/opt/app/oracle/product/19.x.0/dbhome_1
upg1.run_utlrp=yes
upg1.timezone_upg=yes

### NonCDB to PDB parameters ###
upg1.target_cdb=testcdb
upg1.target_pdb_name=devpdb
upg1.target_pdb_copy_option=file_name_convert=NONE
The 11.2 DB is called testupg and its Oracle home is the source_home. The configuration already has a 19c CDB whose Oracle home is the target_home. This CDB is the target_cdb and is called testcdb.
The 11.2 DB will be upgraded and plugged into the target CDB as a PDB named devpdb. The target_pdb_copy_option parameter is set to file_name_convert=NONE so that the upgraded PDB's data files are copied under the target CDB's data file structure.
The 11.2 DB has the final patch set update applied, and the 19c CDB has the 19.19 RU applied.
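The patch level of each home can be confirmed with OPatch before starting (a quick sanity check; home paths as in the config file above):
export ORACLE_HOME=/opt/app/oracle/product/11.2.0/dbhome_1
$ORACLE_HOME/OPatch/opatch lsinventory
export ORACLE_HOME=/opt/app/oracle/product/19.x.0/dbhome_1
$ORACLE_HOME/OPatch/opatch lspatches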
As the first step, run AU in analyze mode to identify any issues.
java -jar autoupgrade.jar -config noncdb11g2_cdb.cfg -mode analyze

Some precheck failures may require manual intervention to fix. For others, AU provides a fixup mode.
java -jar autoupgrade.jar -config noncdb11g2_cdb.cfg -mode fixups




When all the fixups are done and the prechecks have passed, run AU in deploy mode to begin the upgrade and the plug-in as a PDB.
java -jar autoupgrade.jar -config noncdb11g2_cdb.cfg -mode deploy

The 11.2 DB is now a PDB inside the target CDB. If there was a database service associated with it, this has to be manually re-created using srvctl.
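For example, a service that existed on the non-CDB could be re-created against the new PDB roughly as follows (a sketch; the service name app_svc and the options are illustrative and should match what was defined on the source):
# app_svc is a placeholder for the original service name
srvctl add service -db testcdb -pdb devpdb -service app_svc
srvctl start service -db testcdb -service app_svc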
One of the things noticed during this upgrade test relates to timezones. The 19c CDB had timezone file 32 as its current timezone file. The 19.19 RU ships newer timezone files, but they are simply copied into the Oracle home and not applied. This is mentioned in the RU readme.html:
Applying this Release Update (RU) does not change the DST version in any of your databases.
The DST patches are only copied into $ORACLE_HOME/oracore/zoneinfo and no DST upgrade is performed. 
You can decide to upgrade the DST version in your database to a newer DST version independently of 
applying the RU at any time.
During the upgrade, as timezone_upg is set to yes, the 11.2 DB's timezone file is upgraded to 41. When this is plugged into the CDB, the PDB ends up with a higher timezone file version than the CDB. To avoid this, either upgrade the CDB to the latest timezone before the upgrade, or set timezone_upg to no in the config file and then run the timezone upgrade in the PDB afterwards, taking it only to the version used by the CDB.
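To see where things stand before deciding, the timezone file in use and the recorded DST version can be checked in the CDB root and the PDB (a minimal sketch run as SYSDBA; PDB name as above):
sqlplus -s / as sysdba <<'SQL'
-- CDB root: timezone file in use and DST version recorded in database properties
select version from v$timezone_file;
select property_value from database_properties where property_name = 'DST_PRIMARY_TT_VERSION';
-- repeat inside the PDB
alter session set container = DEVPDB;
select version from v$timezone_file;
select property_value from database_properties where property_name = 'DST_PRIMARY_TT_VERSION';
SQL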

The copies of the 11.2 data files remain in their original location after the upgrade. This database could be opened as a 19c non-CDB from a 19c home. Alternatively, it could be mounted using the 11.2 home and flashed back to 11.2 using the restore point created by AU. In the latter case (flashback to 11.2), the following error may occur when archive log cleanup is initiated using RMAN:
ORA-19633: control file record 25 is out of sync with recovery catalog
This is due to a mismatch in letter case (uppercase vs. lowercase usage in 11.2 and 19c) in the archive log file paths. More on this is in MOS note 1105924.1. To fix this, recreate the control file from trace:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

Related Posts
Unplug-Plug-Upgrade from 19c to 21c Using Autoupgrade
Plugging non-CDB as a PDB - Manual vs Autoupgrade
Upgrading Oracle Single Instance with ASM (Oracle Restart) from 11.2.0.4 to 19c (19.6) on RHEL 7

Wednesday, May 31, 2023

Upgrade from 19c to 21c Using Autoupgrade

This post shows the steps for upgrading a 19c database (CDB + PDBs) with the AutoUpgrade tool. There's an earlier post which shows upgrading a PDB from 19c to 21c using the unplug-plug-upgrade method.
If any privileges were revoked from PUBLIC for security reasons (CIS standard recommendations), grant those privileges back before starting the upgrade process. Once the upgrade is completed, those privileges can be revoked again.
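For example, if execute privileges on some of the network and file access packages had been revoked from PUBLIC per CIS recommendations, they could be granted back temporarily (illustrative only; the actual list depends on what was revoked in your environment):
sqlplus -s / as sysdba <<'SQL'
-- temporary grants for the duration of the upgrade; revoke again afterwards
grant execute on UTL_FILE to public;
grant execute on UTL_TCP to public;
grant execute on UTL_SMTP to public;
grant execute on UTL_HTTP to public;
SQL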
Generate a sample upgrade config file with
java -jar autoupgrade.jar -create_sample_file config normalupgrade.cfg
and customize the "Full DB/CDB upgrade" section.
global.autoupg_log_dir=/home/oracle/autoupgrade

#
# Database number 1 - Full DB/CDB upgrade
#
upg1.log_dir=/home/oracle/autoupgrade/testcdb             # Path of the log directory for the upgrade job
upg1.sid=testcdb                                              # ORACLE_SID of the source DB/CDB
upg1.source_home=/opt/app/oracle/product/19.x.0/dbhome_2  # Path of the source ORACLE_HOME
upg1.target_home=/opt/app/oracle/product/21.x.0/dbhome_1  # Path of the target ORACLE_HOME
upg1.start_time=NOW                                       # Optional. [NOW | +XhYm (X hours, Y minutes after launch) | dd/mm/yyyy hh:mm:ss]
upg1.upgrade_node=ip-172-31-10-91.eu-west-1.compute.internal                                # Optional. To find out the name of your node, run the hostname utility. Default is 'localhost'
upg1.run_utlrp=yes                                  # Optional. Whether or not to run utlrp after upgrade
upg1.timezone_upg=yes                               # Optional. Whether or not to run the timezone upgrade
upg1.target_version=21                      # Oracle version of the target ORACLE_HOME.  Only required when the target Oracle database version is 12.2
The database being upgraded is called testcdb. The source home is the 19c home and the target home is the 21c home. The DB timezone is also upgraded at the same time.
Run autoupgrade in analyze mode to check the upgrade readiness of the database.
java -jar autoupgrade.jar -config normalupgrade.cfg -mode analyze
This will generate a status report (status.html) at the end. Check it for any prerequisite work that needs to be carried out manually before the upgrade.

Run autoupgrade in fixups mode to execute the preupgrade fixes.
java -jar autoupgrade.jar -config normalupgrade.cfg -mode fixups

Finally run the autoupgrade in deploy mode to commence the upgrade.
java -jar autoupgrade.jar -config normalupgrade.cfg -mode deploy

If the upgrade is successful, drop the guaranteed restore point (GRP) associated with it. It is important that the GRP is dropped before increasing the compatible setting.
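The restore point created by AutoUpgrade can be listed and dropped as below (a sketch; the restore point name is whatever AutoUpgrade created and is shown here as a placeholder):
sqlplus -s / as sysdba <<'SQL'
select name, guarantee_flashback_database from v$restore_point;
-- drop the GRP reported above; AUTOUPGRADE_GRP_NAME is a placeholder
drop restore point AUTOUPGRADE_GRP_NAME;
SQL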
To change the compatibility, update the compatible parameter and restart the database.
SQL> alter system set compatible='21.0.0' scope=spfile;
The following will be shown in the alert log:
ALERT: Compatibility of the database is changed from 19.0.0.0.0 to 21.0.0.0.0.
Increased the record size of controlfile section 15 to 104 bytes
Control file expanded from 1156 blocks to 1158 blocks
One of the postupgrade tasks is to run the $ORACLE_HOME/rdbms/admin/auditpostupgrade.sql script (mentioned in MOS note 2659172.1 as well). However, this script is not available in the 21c home (checked on 21.7 and 21.8, the latest RU at the time of this post). After raising an SR, Oracle support confirmed that the script is there in 23c but at the moment is not available in 21c. So for the time being, ignore the execution of the auditpostupgrade.sql script.

Related Posts
Upgrading Oracle Restart from 19c to 21c on RHEL 7
Unplug-Plug-Upgrade from 19c to 21c Using Autoupgrade

Thursday, May 11, 2023

Out of Place (OOP) Patching of Oracle Restart

Oracle Grid Infrastructure deployed in a RAC configuration has the -switchGridHome option for out of place patching. However, this option doesn't work with Oracle Restart in 19c.
./gridSetup.sh -silent -switchGridHome -applyRU /opt/app/oracle/installs/19.19/35037840
Preparing the home to patch...
Preparing the home to apply the patch failed. For details look at the logs from /opt/app/oraInventory/logs.
The log can be found at: /opt/app/oraInventory/logs/GridSetupActions2023-05-11_10-17-15AM/installerPatchActions_2023-05-11_10-17-15AM.log
Launching Oracle Grid Infrastructure Setup Wizard...

[FATAL] [INS-45101] Clusterware is not running on the local node.
   ACTION: Ensure that the Clusterware is configured and is running on local node before proceeding.
MOS Doc 2764906.1 states that the switchGridHome option is not supported for Oracle Restart.
However, there is a way to do OOP patching on Oracle Restart, explained here.
This post is based on OOP patching of an Oracle Restart using the steps mentioned in the link above. The current configuration consists of the following resources:
Resource Name             Type                      Target             State              Host
-------------             ------                    -------            --------           ----------
ora.DATA.dg               ora.diskgroup.type        ONLINE             ONLINE             ip-172-31-2-77
ora.FRA.dg                ora.diskgroup.type        ONLINE             ONLINE             ip-172-31-2-77
ora.LISTENER.lsnr         ora.listener.type         ONLINE             ONLINE             ip-172-31-2-77
ora.asm                   ora.asm.type              ONLINE             ONLINE             ip-172-31-2-77
ora.cssd                  ora.cssd.type             ONLINE             ONLINE             ip-172-31-2-77
ora.diskmon               ora.diskmon.type          OFFLINE            OFFLINE
ora.evmd                  ora.evm.type              ONLINE             ONLINE             ip-172-31-2-77
ora.ons                   ora.ons.type              ONLINE             ONLINE             ip-172-31-2-77
ora.testcdb.db            ora.database.type         ONLINE             ONLINE             ip-172-31-2-77
ora.testcdb.dbxrw.svc     ora.service.type          ONLINE             ONLINE             ip-172-31-2-77
ora.testcdb.testsrv.svc   ora.service.type          ONLINE             ONLINE             ip-172-31-2-77
The current GI Home is /opt/app/oracle/product/19.x.0/grid
The new GI home will be /opt/app/oracle/product/19.19.0/grid
1. The first step is to install the new GI home with the required patches using the software-only option. How to do a software-only Oracle Restart installation was shown in a previous post. In this instance a response file is used and the 19.19 RU is applied at install time.
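Before running gridSetup.sh out of the new home, the base grid software needs to be unzipped into it (a sketch; the zip file name and staging path are illustrative):
mkdir -p /opt/app/oracle/product/19.19.0/grid
cd /opt/app/oracle/product/19.19.0/grid
unzip -q /opt/app/oracle/installs/LINUX.X64_193000_grid_home.zip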
$ ./gridSetup.sh -silent -responseFile /opt/app/oracle/installs/19.19/grid_sw_only.rsp -applyRU /opt/app/oracle/installs/19.19/35037840
Preparing the home to patch...
Applying the patch /opt/app/oracle/installs/19.19/35037840...
Successfully applied the patch.
The log can be found at: /opt/app/oraInventory/logs/GridSetupActions2023-05-11_11-37-14AM/installerPatchActions_2023-05-11_11-37-14AM.log
Launching Oracle Grid Infrastructure Setup Wizard...

[WARNING] [INS-32022] Grid infrastructure software for a cluster installation must not be under an Oracle base directory.
   CAUSE: Grid infrastructure for a cluster installation assigns root ownership to all parent directories of the Grid home location. As a result, ownership of all named directories in the software location path is changed to root, creating permissions errors for all subsequent installations into the same Oracle base.
   ACTION: Specify software location outside of an Oracle base directory for grid infrastructure for a cluster installation.
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /opt/app/oraInventory/logs/GridSetupActions2023-05-11_11-37-14AM/gridSetupActions2023-05-11_11-37-14AM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /opt/app/oraInventory/logs/GridSetupActions2023-05-11_11-37-14AM/gridSetupActions2023-05-11_11-37-14AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
 /opt/app/oracle/product/19.19.0/grid/install/response/grid_2023-05-11_11-37-14AM.rsp

You can find the log of this install session at:
 /opt/app/oraInventory/logs/GridSetupActions2023-05-11_11-37-14AM/gridSetupActions2023-05-11_11-37-14AM.log

As a root user, execute the following script(s):
        1. /opt/app/oracle/product/19.19.0/grid/root.sh

Execute /opt/app/oracle/product/19.19.0/grid/root.sh on the following nodes:
[ip-172-31-2-77]

Successfully Setup Software with warning(s).
Run the root.sh
/opt/app/oracle/product/19.19.0/grid/root.sh
Check /opt/app/oracle/product/19.19.0/grid/install/root_ip-172-31-2-77.eu-west-1.compute.internal_2023-05-11_11-52-11-416203678.log for the output of root script

# more /opt/app/oracle/product/19.19.0/grid/install/root_ip-172-31-12-6.eu-west-1.compute.internal_2023-05-11_11-52-11-416203678.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/app/oracle/product/19.19.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

To configure Grid Infrastructure for a Cluster or Grid Infrastructure for a Stand-Alone Server execute the following command as oracle user:
/opt/app/oracle/product/19.19.0/grid/gridSetup.sh
This command launches the Grid Infrastructure Setup Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.
2. Verify the new GI home has the patches applied.
export ORACLE_HOME=/opt/app/oracle/product/19.19.0/grid
$ORACLE_HOME/OPatch/opatch lspatches
35107512;TOMCAT RELEASE UPDATE 19.0.0.0.0 (35107512)
35050331;OCW RELEASE UPDATE 19.19.0.0.0 (35050331)
35050325;ACFS RELEASE UPDATE 19.19.0.0.0 (35050325)
35042068;Database Release Update : 19.19.0.0.230418 (35042068)
33575402;DBWLM RELEASE UPDATE 19.0.0.0.0 (33575402)

OPatch succeeded.


3. As root run the prepatch steps on the new GI home. This does not bring any of the running services down.
# /opt/app/oracle/product/19.19.0/grid/crs/install/roothas.sh -prepatch -dstcrshome /opt/app/oracle/product/19.19.0/grid
Using configuration parameter file: /opt/app/oracle/product/19.19.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/app/oracle/crsdata/ip-172-31-2-77/crsconfig/hapatch_2023-05-11_11-48-38AM.log
2023/05/11 11:48:59 CLSRSC-347: Successfully unlock /opt/app/oracle/product/19.19.0/grid
2023/05/11 11:48:59 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.
4. As root, run the postpatch step on the new GI home. This step results in the currently running services being brought down and restarted using the new GI home.
# /opt/app/oracle/product/19.19.0/grid/crs/install/roothas.sh -postpatch -dstcrshome /opt/app/oracle/product/19.19.0/grid
Using configuration parameter file: /opt/app/oracle/product/19.19.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/app/oracle/crsdata/ip-172-31-2-77/crsconfig/hapatch_2023-05-11_11-49-46AM.log
Redirecting to /bin/systemctl restart rsyslog.service
2023/05/11 11:50:15 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2023/05/11 11:51:37 CLSRSC-672: Post-patch steps for patching GI home successfully completed.
On Linux, looking at the currently running processes will show the GI-related processes running out of the new GI home.
 9397 ?        Ssl    0:06 /opt/app/oracle/product/19.19.0/grid/bin/ohasd.bin reboot
 9685 ?        Ssl    0:06 /opt/app/oracle/product/19.19.0/grid/bin/oraagent.bin
 9713 ?        Ssl    0:02 /opt/app/oracle/product/19.19.0/grid/bin/evmd.bin
 9716 ?        Ssl    0:00 /opt/app/oracle/product/19.19.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
 9733 ?        Ss     0:00 /opt/app/oracle/product/19.19.0/grid/opmn/bin/ons -d
 9789 ?        Ssl    0:02 /opt/app/oracle/product/19.19.0/grid/bin/evmlogger.bin -o /opt/app/oracle/product/19.19.0/grid/log/[HOSTNAME]/evmd/evmlogger.info -l /opt/app/oracle/product/19.19.0/grid/log/[HOSTNAME]/evmd/evmlogger.log
 9805 ?        Ssl    0:02 /opt/app/oracle/product/19.19.0/grid/bin/cssdagent
 9845 ?        Ssl    0:02 /opt/app/oracle/product/19.19.0/grid/bin/ocssd.bin
 9929 ?        Sl     0:00 /opt/app/oracle/product/19.19.0/grid/opmn/bin/ons -d
The resources will be online as before:
Resource Name             Type                      Target             State              Host
-------------             ------                    -------            --------           ----------
ora.DATA.dg               ora.diskgroup.type        ONLINE             ONLINE             ip-172-31-2-77
ora.FRA.dg                ora.diskgroup.type        ONLINE             ONLINE             ip-172-31-2-77
ora.LISTENER.lsnr         ora.listener.type         ONLINE             ONLINE             ip-172-31-2-77
ora.asm                   ora.asm.type              ONLINE             ONLINE             ip-172-31-2-77
ora.cssd                  ora.cssd.type             ONLINE             ONLINE             ip-172-31-2-77
ora.diskmon               ora.diskmon.type          OFFLINE            OFFLINE
ora.evmd                  ora.evm.type              ONLINE             ONLINE             ip-172-31-2-77
ora.ons                   ora.ons.type              ONLINE             ONLINE             ip-172-31-2-77
ora.testcdb.db            ora.database.type         ONLINE             ONLINE             ip-172-31-2-77
ora.testcdb.dbxrw.svc     ora.service.type          ONLINE             ONLINE             ip-172-31-2-77
ora.testcdb.testsrv.svc   ora.service.type          ONLINE             ONLINE             ip-172-31-2-77
5. Update the inventory by setting CRS=TRUE for the new GI home and CRS=FALSE for the old GI home.
$ /opt/app/oracle/product/19.19.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/app/oracle/product/19.19.0/grid CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 599 MB    Passed
The inventory pointer is located at /etc/oraInst.loc

$ /opt/app/oracle/product/19.x.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/app/oracle/product/19.x.0/grid CRS=FALSE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 599 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
If happy with the results, the old GI home can be deinstalled.
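The deinstall tool that ships within the old home can be used for that (a sketch; review the prompts carefully before confirming anything):
/opt/app/oracle/product/19.x.0/grid/deinstall/deinstall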



If for whatever reason a rollback to the old GI home is needed after successfully moving to the new GI home, run the same prepatch and postpatch steps against the old GI home.
# /opt/app/oracle/product/19.x.0/grid/crs/install/roothas.sh -prepatch -dstcrshome /opt/app/oracle/product/19.x.0/grid
Using configuration parameter file: /opt/app/oracle/product/19.x.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/app/oracle/crsdata/ip-172-31-2-77/crsconfig/hapatch_2023-05-11_10-45-53AM.log
Using configuration parameter file: /opt/app/oracle/product/19.x.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/app/oracle/crsdata/ip-172-31-2-77/crsconfig/hapatch_2023-05-11_10-45-54AM.log
2023/05/11 10:45:55 CLSRSC-347: Successfully unlock /opt/app/oracle/product/19.x.0/grid
2023/05/11 10:45:55 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.

# /opt/app/oracle/product/19.x.0/grid/crs/install/roothas.sh -postpatch -dstcrshome /opt/app/oracle/product/19.x.0/grid
Using configuration parameter file: /opt/app/oracle/product/19.x.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/app/oracle/crsdata/ip-172-31-2-77/crsconfig/hapatch_2023-05-11_10-46-08AM.log
2023/05/11 10:46:21 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2023/05/11 10:47:47 CLSRSC-672: Post-patch steps for patching GI home successfully completed.


The listener config shows the new GI home.
srvctl config listener
Name: LISTENER
Type: Database Listener
Home: /opt/app/oracle/product/19.19.0/grid
End points: TCP:1521
Listener is enabled.
However, if the ASM SPfile is on a file system instead of in ASM, this location is not updated. The file must be moved manually and the SPfile location updated afterwards; a sketch of the fix follows the output below.
srvctl config asm
ASM home: <CRS home>
Password file:
Backup of Password file:
ASM listener: LISTENER
Spfile: /opt/app/oracle/product/19.x.0/grid/dbs/spfile+ASM.ora
ASM diskgroup discovery string: /dev/oracleasm/*
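One way to remedy this, assuming the SPfile is to stay on the file system (paths as in the output above; a sketch, not a tested procedure):
cp /opt/app/oracle/product/19.x.0/grid/dbs/spfile+ASM.ora /opt/app/oracle/product/19.19.0/grid/dbs/
# point the ASM configuration at the copy in the new home; takes effect on the next ASM restart
srvctl modify asm -spfile /opt/app/oracle/product/19.19.0/grid/dbs/spfile+ASM.ora
srvctl config asm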

Related Posts
Out of Place (OOP) Patching of Oracle Restart 21c

Tuesday, March 14, 2023

Unplug-Plug-Upgrade from 19c to 21c Using Autoupgrade

This post shows the steps for upgrading a 19c PDB with the AutoUpgrade tool using the unplug-plug-upgrade method. The 21c CDB is called cdb21c and the 19c CDB is called testcdb. The PDB that will be unplugged from 19c, plugged into 21c and upgraded is called testpdb2. Both 19c and 21c CDBs reside in the same Oracle Restart configuration and the PDB has a service dbxrw associated with it.
Resource Name             Type                      Target             State              Host
-------------             ------                    -------            --------           ----------
ora.cdb21c.db             ora.database.type         ONLINE             ONLINE             ip-172-31-10-91
ora.testcdb.db            ora.database.type         ONLINE             ONLINE             ip-172-31-10-91
ora.testcdb.dbxrw.svc     ora.service.type          ONLINE             ONLINE             ip-172-31-10-91

If any privileges were revoked from PUBLIC for security reasons (CIS standard recommendations), grant those privileges back before starting the upgrade process. Once the upgrade is completed, those privileges can be revoked again.
Generate a sample upgrade config file with
java -jar autoupgrade.jar -create_sample_file config sample_config.cfg
and customize the "Unplug/Plug upgrade" section.
global.autoupg_log_dir=/home/oracle/upgradelogs
global.keystore=/home/oracle/upgtde

#
# Database number 1 - Unplug/Plug upgrade
#
upg1.log_dir=/home/oracle/autoupgrade/cdb21c
upg1.sid=testcdb
upg1.source_home=/opt/app/oracle/product/19.x.0/dbhome_2
upg1.target_cdb=cdb21c
upg1.target_home=/opt/app/oracle/product/21.x.0/dbhome_1
upg1.pdbs=testpdb2                    # Comma delimited list of pdb names that will be upgraded and moved to the target CDB
upg1.run_utlrp=yes                   # Optional. Whether or not to run utlrp after upgrade
upg1.timezone_upg=yes                # Optional. Whether or not to run the timezone upgrade
upg1.target_version=21       # Oracle version of the target ORACLE_HOME.  Only required when the target Oracle database version is 12.2


# copies the pdb datafiles inside the target cdb directory structure. default is nocopy, where datafiles remain in the 19c cdb directory structure in ASM.
upg1.target_pdb_copy_option=file_name_convert=NONE
global.keystore is used to give the location of the AutoUpgrade keystore. Since both the target and source CDBs use TDE, it helps to create an AutoUpgrade keystore and save the respective CDBs' keystore passwords in it.
Secondly, "upg1.target_pdb_copy_option=file_name_convert=NONE" copies the data files inside the target CDB's datafile directory structure. Without it, the PDB data files will remain where they are (under the source CDB's directory structure).
To create the AutoUpgrade keystore, run autoupgrade with the load_password option.
java -jar autoupgrade.jar -config unplug_plug.cfg -load_password
Processing config file ...

Starting AutoUpgrade Password Loader - Type help for available options
Creating new AutoUpgrade keystore - Password required
Enter password:
Enter password again:
AutoUpgrade keystore was successfully created

TDE> list
+----------+----------------+------------------+-----------+------------------+
|ORACLE_SID| Action Required|      TDE Password|SEPS Status|Active Wallet Type|
+----------+----------------+------------------+-----------+------------------+
|    cdb21c|Add TDE password|No password loaded|   Inactive|               Any|
|   testcdb|Add TDE password|No password loaded|   Inactive|               Any|
+----------+----------------+------------------+-----------+------------------+
It detects the databases that are up and running on the host, and these can be listed with the list command. Next, add each database's keystore password to the AutoUpgrade keystore.
TDE> add testcdb
Enter your secret/Password:
Re-enter your secret/Password:
TDE> add cdb21c
Enter your secret/Password:
Re-enter your secret/Password:
TDE> list
+----------+---------------+------------+-----------+------------------+
|ORACLE_SID|Action Required|TDE Password|SEPS Status|Active Wallet Type|
+----------+---------------+------------+-----------+------------------+
|    cdb21c|               |    Verified|   Inactive|               Any|
|   testcdb|               |    Verified|   Inactive|               Any|
+----------+---------------+------------+-----------+------------------+
TDE> save
Convert the AutoUpgrade keystore to auto-login [YES|NO] ? YES
It helps to have an auto-login keystore so the upgrade can run smoothly without waiting for passwords to be entered.
Next, run the preupgrade checks. This is done using analyze mode.
java -jar autoupgrade.jar -config unplug_plug.cfg -mode analyze
AutoUpgrade 22.4.220712 launched with default internal options
Processing config file ...
Loading AutoUpgrade keystore
AutoUpgrade keystore was successfully loaded
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
1 PDB(s) will be analyzed
Type 'help' to list console commands
upg> lsj -a 10
upg> +----+-------+---------+---------+-------+----------+-------+----------------------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|START_TIME|UPDATED|                     MESSAGE|
+----+-------+---------+---------+-------+----------+-------+----------------------------+
| 100|testcdb|PRECHECKS|EXECUTING|RUNNING|  13:58:20| 7s ago|Loading database information|
+----+-------+---------+---------+-------+----------+-------+----------------------------+
Total jobs 1

The command lsj is running every 10 seconds. PRESS ENTER TO EXIT
Job 100 completed
------------------- Final Summary --------------------
Number of databases            [ 1 ]

Jobs finished                  [1]
Jobs failed                    [0]

Please check the summary report at:
/home/oracle/upgradelogs/cfgtoollogs/upgrade/auto/status/status.html
/home/oracle/upgradelogs/cfgtoollogs/upgrade/auto/status/status.log
Check the status.html for any failed prechecks. In this case there were no failures.

If any fixups are needed, run autoupgrade in fixups mode.
java -jar autoupgrade.jar -config unplug_plug.cfg -mode fixups
This will generate a fixup report.



Finally, run autoupgrade in deploy mode to begin the upgrade.
java -jar autoupgrade.jar -config unplug_plug.cfg -mode deploy
When this runs, the PDB unplug can be seen in the source CDB's alert log:
2023-03-09T14:13:16.820993+00:00
Completed: alter pluggable database TESTPDB2 unplug into '/home/oracle/upgtde/testcdb-TESTPDB2.xml' encrypt using *
drop pluggable database TESTPDB2 keep datafiles
2023-03-09T14:13:17.607648+00:00
Deleted Oracle managed file +DATA/TESTCDB/9CBA2DF91A8C7012E053F4071FAC36E9/TEMPFILE/temp.329.1131016147
2023-03-09T14:13:17.635197+00:00
Stopped service testpdb2
Completed: drop pluggable database TESTPDB2 keep datafiles
In the target CDB's alert log, the PDB plug-in can be seen:
2023-03-09T14:13:17.933191+00:00
create pluggable database "TESTPDB2"   using '/home/oracle/upgtde/testcdb-TESTPDB2.xml' COPY file_name_convert=NONE tempfile reuse keystore identified by * decrypt using *
Monitor the upgrade process and, once it is completed, check the status.html to confirm all tasks ran without any issues.

The service associated with the PDB is not migrated as part of the upgrade. It will remain in an offline state.
Resource Name             Type                      Target             State              Host
-------------             ------                    -------            --------           ----------
ora.cdb21c.testpdb2.pdb   ora.pdb.type              ONLINE             ONLINE             ip-172-31-10-91
ora.testcdb.db            ora.database.type         ONLINE             ONLINE             ip-172-31-10-91
ora.testcdb.dbxrw.svc     ora.service.type          OFFLINE            OFFLINE
Oracle confirmed via SR that migration of services is not available with AutoUpgrade, so this has to be done manually. Remove the old service using the 19c Oracle home's srvctl:
srvctl remove service -db $ORACLE_SID -service dbxrw
Add the service again using the 21c Oracle home's srvctl:
srvctl add service -db $ORACLE_SID -pdb testpdb2 -service dbxrw -role PRIMARY -notification TRUE -failovertype NONE -failovermethod NONE -failoverdelay 0 -failoverretry 0 -drain_timeout 5 -stopoption IMMEDIATE
With this, both the PDB and the service are now part of the target CDB:
Resource Name             Type                      Target             State              Host
-------------             ------                    -------            --------           ----------
ora.cdb21c.db             ora.database.type         ONLINE             ONLINE             ip-172-31-10-91
ora.cdb21c.dbxrw.svc      ora.service.type          ONLINE             ONLINE             ip-172-31-10-91
ora.cdb21c.testpdb2.pdb   ora.pdb.type              ONLINE             ONLINE             ip-172-31-10-91
Check that the PDB data files now reside inside the target CDB's directory path.
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 TESTPDB2                       READ WRITE NO

SQL>  select name from v$datafile where con_id=3;

NAME
----------------------------------------------------------------------------------------------------
+DATA/CDB21C/9CBA2DF91A8C7012E053F4071FAC36E9/DATAFILE/system.332.1131027199
+DATA/CDB21C/9CBA2DF91A8C7012E053F4071FAC36E9/DATAFILE/sysaux.326.1131027199
+DATA/CDB21C/9CBA2DF91A8C7012E053F4071FAC36E9/DATAFILE/undotbs1.280.1131027199
This concludes the upgrade of the PDB using the unplug-plug-upgrade method.