Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 1 (Standby site upgrade)
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 2 (Primary site upgrade)
Upgrading from 11.2.0.2 to 11.2.0.3 is not much different from upgrading from 11.2.0.1 to 11.2.0.2 (part 2). As before (when upgrading to 11.2.0.2) there are some prerequisites that need to be done. Continuing the tradition started with 11gR2, Oracle doesn't provide a single readme file with all the information necessary for the upgrade. One has to read several user guides and metalink notes to get the full picture. Below are some extracts taken from the upgrade guide, ASM guide and grid infrastructure installation guide.
Oracle Clusterware Upgrade Configuration Force Feature
If nodes become unreachable in the middle of an upgrade, starting with release 11.2.0.3, you can run the rootupgrade.sh script with the -force flag to force an upgrade to complete.
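A minimal sketch of how that would look, using the new GI home path from this environment; the exact procedure for a forced completion is in the upgrade guide and should be checked before using it:

# force the upgrade to complete when some nodes could not run the root script (11.2.0.3 onwards)
/flash/11.2.0/grid11.2.0.3/rootupgrade.sh -force

# confirm the cluster active version afterwards
/flash/11.2.0/grid11.2.0.3/bin/crsctl query crs activeversion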
Starting with Oracle Grid Infrastructure 11g release 2 (11.2.0.3) and later, you can use the CVU healthcheck command option to check your Oracle Clusterware and Oracle Database installations for their compliance with mandatory requirements and best practices guidelines, and to check to ensure that they are functioning properly.
Known Issue with the Deinstallation Tool for This Release
Cause: After upgrading from 11.2.0.1 or 11.2.0.2 to 11.2.0.3, deinstallation of the Oracle home in the earlier release of Oracle Database may result in the deletion of the old Oracle base that was associated with it. This may also result in the deletion of data files, audit files, etc., which are stored under the old Oracle base.
Action: Before deinstalling the Oracle home in the earlier release, edit the orabase_cleanup.lst file found in the $ORACLE_HOME/utl directory and remove the "oradata" and "admin" entries. Then, deinstall the Oracle home using the 11.2.0.3 deinstallation tool.
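A hedged sketch of that action, assuming the oradata and admin entries appear as plain lines in orabase_cleanup.lst (take a copy first and verify the file's contents before editing anything):

# in the pre-upgrade (old) Oracle home
cd $ORACLE_HOME/utl
cp orabase_cleanup.lst orabase_cleanup.lst.orig

# drop the "oradata" and "admin" entries so data files and audit files are left alone
sed -i -e '/oradata/d' -e '/admin/d' orabase_cleanup.lst

# then deinstall the old home using the 11.2.0.3 deinstallation tool
# (DEINSTALL_11203_DIR is a placeholder for wherever the 11.2.0.3 deinstall tool lives)
$DEINSTALL_11203_DIR/deinstall -home $ORACLE_HOME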
To upgrade existing Oracle Grid Infrastructure installations from 11.2.0.2 to a later release, you must apply patch 11.2.0.2.1 (11.2.0.2 PSU 1) or later.
To upgrade existing 11.2.0.1 Oracle Grid Infrastructure installations to any later version (11.2.0.2 or 11.2.0.3), you must patch the release 11.2.0.1 Oracle Grid Infrastructure home (11.2.0.1.0) with the 9706490 patch.
To upgrade existing 11.1 Oracle Clusterware installations to Oracle Grid Infrastructure 11.2.0.3 or later, you must patch the release 11.1 Oracle Clusterware home with the patch for bug 7308467.
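Before starting it is worth checking what is already installed in the existing homes, so a missing prerequisite patch surfaces early; a simple listing from the 11.2.0.2 GI home used in this environment:

# list interim patches currently applied to the existing GI home
/flash/11.2.0/grid11.2.0.2/OPatch/opatch lsinventory -oh /flash/11.2.0/grid11.2.0.2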
Oracle recommends that you leave Oracle RAC instances running. When you start the root script on each node, that node's instances are shut down and then started up again by the rootupgrade.sh script. If you upgrade from release 11.2.0.1 to any later version (11.2.0.2 or 11.2.0.3), then all nodes are selected by default. You cannot select or de-select the nodes. For single instance Oracle Databases on the cluster, only those that use Oracle ASM need to be shut down. Listeners do not need to be shut down.
From metalink notes
Actions For DST Updates When Upgrading To Or Applying The 11.2.0.3 Patchset [ID 1358166.1]
If your current RDBMS timezone version is 14, install 11.2.0.3 in a new home and update the 11.2.0.2 or 11.2.0.1 database to 11.2.0.3.
You can skip any DST related sections in the patchset documentation; there is no need to apply DST patches or check for DST issues for the update to 11.2.0.3.
If your current RDBMS timezone version is lower than 14, install 11.2.0.3 in a new home and update the 11.2.0.2 or 11.2.0.1 database to 11.2.0.3.
Again, the DST related sections in the patchset documentation can be skipped; there is no need to apply DST patches or check for DST issues for the update to 11.2.0.3.
If your current RDBMS timezone version is higher than 14, then there is some additional work to be done; read the above metalink note.
In this case the database was 11.2.0.2.
SQL> SELECT version FROM v$timezone_file;

   VERSION
----------
        14

Before starting the upgrade, it's now possible to validate the readiness to upgrade using the cluster verification tool.
./runcluvfy.sh stage -pre crsinst -upgrade -n hpc3 -rolling -src_crshome /flash/11.2.0/grid11.2.0.2 -dest_crshome /flash/11.2.0/grid11.2.0.3 -dest_version 11.2.0.3.0 -fixup -fixupdir /home/oracle/fixupscript -verbose

Prior to upgrading to 11.2.0.3 a patch must be applied to 11.2.0.2 (just as a patch had to be applied before upgrading to 11.2.0.2). The only difference is that this patch number wasn't mentioned in any of the documents referenced (which means some more reading is required to get the complete picture), but luckily the pre-req check flags it.
Applying patch 12539000
The patch itself has some issues when it comes to applying it.
1. Applying with opatch auto, which is supposed to patch both the GI home and the Oracle home, only applied to the Oracle home.
2. The readme says to run /12539000/custom/server/12539000/custom/scripts/prepatch.sh, but this directory structure and the file are missing.
3. So finally the patch had to be applied with opatch auto, but one home at a time (opatch auto PATH_TO_PATCH_DIRECTORY -oh GI_HOME or ORACLE_HOME), as sketched below.
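A sketch of the per-home invocations, run as root; the patch staging directory is hypothetical and the GI home path is the one used in this environment (the 11.2.0.2 database home is patched the same way with its own -oh value):

# GI home only
opatch auto /home/oracle/patches/12539000 -oh /flash/11.2.0/grid11.2.0.2

# then the 11.2.0.2 database (Oracle) home only
opatch auto /home/oracle/patches/12539000 -oh $ORACLE_HOME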
Another issue was that shmmni was flagged as not configured (this system had already been upgraded to 11.2.0.2 and the value had been set from the beginning). Running the fixup script gives the output below, which doesn't modify the value but makes the warning disappear.
# /tmp/CVU_11.2.0.3.0_oracle/runfixup.sh
Response file being used is :/tmp/CVU_11.2.0.3.0_oracle/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.3.0_oracle/fixup.enable
Log file location: /tmp/CVU_11.2.0.3.0_oracle/orarun.log
Setting Kernel Parameters...
The value for shmmni in response file is not greater than value of shmmni for current session. Hence not changing it.
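The current kernel setting can also be confirmed directly, which shows the warning is cosmetic (generic Linux commands, not part of the fixup output above):

# current value of kernel.shmmni
/sbin/sysctl -n kernel.shmmni
cat /proc/sys/kernel/shmmni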
Apart from these issues the upgrade is straightforward.
By default all the nodes will be selected for upgrade (this cluster only has one node). Oracle recommends upgrading both GI and ASM at the same time, and this option has been selected for this upgrade.
The SCAN IP is set in /etc/hosts, but the pre-req check tests whether this IP can be resolved using nslookup and flags a warning when it cannot. This can be ignored and the upgrade continued. Metalink note 1212703.1 mentions the multicast requirements. In this upgrade all of these were ignored (on production environments these issues should be resolved before proceeding).
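What the check effectively does can be reproduced by hand; the SCAN name below is a placeholder for whatever is configured in /etc/hosts:

# only succeeds if the SCAN name resolves through DNS, not just /etc/hosts
nslookup cluster-scan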
When prompted, run the rootupgrade.sh script.
# /flash/11.2.0/grid11.2.0.3/rootupgrade.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /flash/11.2.0/grid11.2.0.3

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /flash/11.2.0/grid11.2.0.3/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
ASM upgrade has started on first node.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'hpc3'
CRS-2673: Attempting to stop 'ora.crsd' on 'hpc3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'hpc3'
CRS-2673: Attempting to stop 'ora.CLUSTERDG.dg' on 'hpc3'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'hpc3'
CRS-2673: Attempting to stop 'ora.clusdb.db' on 'hpc3'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'hpc3'
CRS-2673: Attempting to stop 'ora.oc4j' on 'hpc3'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'hpc3'
CRS-2673: Attempting to stop 'ora.cvu' on 'hpc3'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.hpc3.vip' on 'hpc3'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'hpc3'
CRS-2677: Stop of 'ora.scan1.vip' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.hpc3.vip' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.clusdb.db' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'hpc3'
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'hpc3'
CRS-2677: Stop of 'ora.DATA.dg' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.FLASH.dg' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.cvu' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.CLUSTERDG.dg' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'hpc3'
CRS-2677: Stop of 'ora.asm' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'hpc3'
CRS-2677: Stop of 'ora.ons' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'hpc3'
CRS-2677: Stop of 'ora.net1.network' on 'hpc3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'hpc3' has completed
CRS-2677: Stop of 'ora.crsd' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'hpc3'
CRS-2673: Attempting to stop 'ora.evmd' on 'hpc3'
CRS-2673: Attempting to stop 'ora.asm' on 'hpc3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'hpc3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'hpc3'
CRS-2677: Stop of 'ora.asm' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'hpc3'
CRS-2677: Stop of 'ora.drivers.acfs' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'hpc3'
CRS-2677: Stop of 'ora.cssd' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'hpc3'
CRS-2673: Attempting to stop 'ora.crf' on 'hpc3'
CRS-2677: Stop of 'ora.diskmon' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.crf' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'hpc3'
CRS-2677: Stop of 'ora.gipcd' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'hpc3'
CRS-2677: Stop of 'ora.gpnpd' on 'hpc3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'hpc3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
OLR initialization - successful
Replacing Clusterware entries in inittab
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Oracle Clusterware operating version was successfully set to 11.2.0.3.0
ASM upgrade has finished on last node.
PRKO-2116 : OC4J is already enabled
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Check the versions
crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]

crsctl query crs softwareversion
Oracle Clusterware version on node [hpc3] is [11.2.0.3.0]

crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.3.0]

As mentioned in the beginning of this blog, cluvfy can be used to verify the compliance of the clusterware and the database.
./cluvfy comp healthcheck -collect cluster -bestpractice -deviations -html -save -savedir /home/oracle

This command will create an html file in /home/oracle listing the findings of the checks. There are several other options related to this command, and it can be used to verify the database as well. To verify the database, the $GI_HOME/cv/admin/cvusys.sql script must first be run to create the necessary user.
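A hedged sketch of the database-level check, using this environment's database name (clusdb) and assuming the option names as documented for 11.2.0.3 cluvfy; verify against the cluvfy help output before running:

# run once to create the user that the healthcheck queries the database with
sqlplus / as sysdba @/flash/11.2.0/grid11.2.0.3/cv/admin/cvusys.sql

# then from the new GI home
./cluvfy comp healthcheck -collect database -db clusdb -bestpractice -html -save -savedir /home/oracle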
After rootupgrade.sh completes, continue with the rest of the configuration.
This concludes the GI upgrade. Next is to upgrade the Oracle home and the database. As with the 11.2.0.2 upgrade this is done as an out of place upgrade. Both the software and the database are upgraded at the same time.
When prompted, run root.sh.
/opt/app/oracle/product/11.2.0/clusdb_3/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /opt/app/oracle/product/11.2.0/clusdb_3

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

Summary will be shown once the upgrade is completed.
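The component summary below is the kind of output produced by querying the registry of the upgraded database, for example:

-- component status after the upgrade
SELECT comp_name, status, version FROM dba_registry;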
COMP_NAME                           STATUS   VERSION
----------------------------------- -------- ------------
OWB                                 VALID    11.2.0.1.0
Oracle Application Express          VALID    3.2.1.00.10
Oracle Enterprise Manager           VALID    11.2.0.3.0
OLAP Catalog                        VALID    11.2.0.3.0
Spatial                             VALID    11.2.0.3.0
Oracle Multimedia                   VALID    11.2.0.3.0
Oracle XML Database                 VALID    11.2.0.3.0
Oracle Text                         VALID    11.2.0.3.0
Oracle Expression Filter            VALID    11.2.0.3.0
Oracle Rules Manager                VALID    11.2.0.3.0
Oracle Workspace Manager            VALID    11.2.0.3.0
Oracle Database Catalog Views       VALID    11.2.0.3.0
Oracle Database Packages and Types  VALID    11.2.0.3.0
JServer JAVA Virtual Machine        VALID    11.2.0.3.0
Oracle XDK                          VALID    11.2.0.3.0
Oracle Database Java Packages       VALID    11.2.0.3.0
OLAP Analytic Workspace             VALID    11.2.0.3.0
Oracle OLAP API                     VALID    11.2.0.3.0
Oracle Real Application Clusters    VALID    11.2.0.3.0

From patch history (using ADMon)
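Apart from a GUI tool such as ADMon, the upgrade and patch history recorded inside the database can also be listed with a simple query:

-- actions recorded in the database registry history
SELECT action_time, action, version, comments
FROM dba_registry_history
ORDER BY action_time;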
This concludes the upgrade to 11.2.0.3.
If automatic shared memory management (ASMM) is used then read the post Multiple Shared Memory Segments Created by Default on 11.2.0.3
Useful metalink notes
Things to Consider Before Upgrading to 11.2.0.3 Grid Infrastructure/ASM [ID 1363369.1]
Oracle Clusterware (CRS or GI) Rolling Upgrades [ID 338706.1]