Wednesday, March 9, 2011

Upgrading from 11.2.0.1 to 11.2.0.2

Updated 14th October 2011: More metalink notes

Unlike patchsets of previous releases, 11.2 patchsets are full releases, which means they can be used for a fresh installation (no need to go from 11.2.0.1 to 11.2.0.2).
The following text is from various Oracle documents.

"Starting with the 11.2.0.2 patch set, Oracle Database patch sets are full installations of the Oracle Database software. This means that you do not need to install Oracle Database 11g Release 2 (11.2.0.1) before installing Oracle Database 11g Release 2 (11.2.0.2).

Note the following changes with the new patch set packaging:

New installations consist of installing the most recent patch set, rather than installing a base release and then upgrading to a patch release.

Direct upgrades from previous releases to the most recent patch set are supported.

Out-of-place patch set upgrades, in which you install the patch set into a new, separate Oracle home, are recommended. In-place upgrades are supported, but not recommended.

Oracle Clusterware and Oracle ASM upgrades are always out-of-place upgrades. With 11g release 2 (11.2), you cannot perform an in-place upgrade of Oracle Clusterware and Oracle ASM to existing homes.

Please note that 11.2 Patch Sets 11.2.0.2 and higher are supplied as full releases. See Note:1189783.1 for details. Also note that 11.2.0.2 software was reissued on 17th Nov 2010 as described in Note:1266978.1 (1179474.1)
"

The patchset comes in several bundles and you don't have to download all of them unless required.

Oracle Database (includes Oracle Database and Oracle RAC). Note: you must download both zip files to install Oracle Database.
p10098816_112020_platform_1of7.zip
p10098816_112020_platform_2of7.zip

Oracle Grid Infrastructure (includes Oracle ASM, Oracle Clusterware, and Oracle Restart)
p10098816_112020_platform_3of7.zip

Oracle Database Client
p10098816_112020_platform_4of7.zip

Oracle Gateways
p10098816_112020_platform_5of7.zip

Oracle Examples
p10098816_112020_platform_6of7.zip

Deinstall
p10098816_112020_platform_7of7.zip
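
Only the bundles needed for the components being installed have to be downloaded. As a minor aside, the downloaded zips from the list above can be integrity-checked before staging; a minimal sketch:

# verify the downloaded archives before extracting them (file names from the list above)
unzip -t p10098816_112020_platform_1of7.zip
unzip -t p10098816_112020_platform_3of7.zip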


Another change is that patchsets no longer ship with a readme.html or similar document describing how to apply the patchset (which was available with previous releases). For upgrade information the Oracle (GI / Database) Administrator Guides must be referred to. The Admin Guide states "To upgrade existing 11.2.0.1 Oracle Grid Infrastructure installations to Oracle Grid Infrastructure 11.2.0.2 using a rolling upgrade, you must first do one of the following, depending on your platform:

Patch the release 11.2.0.1 Oracle Grid Infrastructure home with the 9413827 patch, and install Oracle Grid Infrastructure Patchset Update 1 (GI PSU1). When you apply patch 9413827, it shows up in the inventory as GIPSU2 bug 9655006.

Install Oracle Grid Infrastructure Patchset Update 2 (GI PSU2), which includes the 9413827 patch.
"

Failure to apply these patches could result in a number of issues, which are listed in the following metalink notes.

Bug 9413827 - CRS rolling upgrade from 11.2.0.1 to 11.2.0.2 fails with OCR on ASM [ID 9413827.8]

Bug 10036834 - Linux Platforms: Patches not found upgrading Grid Infrastructure from 11.2.0.1 to 11.2.0.2 [ID 10036834.8]

Pre-requisite for 11.2.0.1 to 11.2.0.2 ASM Rolling Upgrade [ID 1274629.1]
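
A quick way to see which of these patches are already present in the 11.2.0.1 GI home before starting is opatch lsinventory; a minimal sketch, assuming the pre-upgrade GI home path used later in this post:

# list installed patches in the 11.2.0.1 GI home and look for the relevant patch numbers
export ORACLE_HOME=/flash/11.2.0/grid11.2.0.1
$ORACLE_HOME/OPatch/opatch lsinventory | grep -iE "9655006|9413827|9706490"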



If "Patch 9655006 - 11.2.0.1.2 for Grid Infrastructure (GI) Patch Set Update" is applied on the GI home then applying 9413827 or 9706490 (as per ID 1274629.1) will always try to rollback the 9655006.With 9413827
opatch napply -local -oh $CRS_HOME -id 9413827
..
..
Conflicts/Supersets for each patch are:

Patch : 9413827

Bug Superset of 9655006
Super set bugs are:
9655006,  9778840,  9343627,  9783609,  9262748,  9262722

Patches [   9655006 ] will be rolled back.
This conflicts with the information given in 10036834.8. With 9706490:
opatch napply -local -oh  $CRS_HOME -id 9706490
..
..
Conflicts/Supersets for each patch are:

Patch : 9706490

Bug Superset of 9655006
Super set bugs are:
9655006,  9778840,  9343627,  9783609,  9262748,  9262722

Patches [   9655006 ] will be rolled back.
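Conflicts like the one above can also be checked before applying a patch with OPatch's prereq option; a minimal sketch, run from the directory into which the patch was unzipped:

# check for conflicts against the GI home before applying the patch
# (-ph is the unzipped patch directory, -oh the home being patched)
$CRS_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./ -oh $CRS_HOME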
For this upgrade, the base 11.2.0.1 installation had GI PSU 2 (Patch 9655006 - 11.2.0.1.2 for Grid Infrastructure (GI) Patch Set Update) applied, then 9706490, and finally the 11.2.0.1.4 Patch Set Update was applied to the Oracle Home (not the GI Home). The steps below are from this point onwards.

Unzip the grid infrastructure patchset and execute runInstaller. Since this is an out-of-place upgrade, the current cluster can remain up and running.
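
A rough sketch of this step (the /stage staging directory is an assumption; the grid infrastructure zip extracts into a grid sub-directory):

# stage and launch the 11.2.0.2 grid infrastructure installer as the GI software owner
unzip p10098816_112020_platform_3of7.zip -d /stage
cd /stage/grid
./runInstaller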

On the second step, select the option to upgrade GI and ASM.
Select the nodes for upgrade; for this test it's a single-node cluster.

Select a new location to install the patchset.

Fix any pre-req warnings and start the installation.

After the installation is complete the rootupgrade script needs to be run on the nodes involved. Up until this point the cluster was running; during rootupgrade it will be brought down (by the rootupgrade script itself, no manual shutdown is required) and brought back up again once the upgrade is complete.

/flash/11.2.0/grid11.2.0.2/rootupgrade.sh
Running Oracle 11g root script...

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=  /flash/11.2.0/grid11.2.0.2

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /flash/11.2.0/grid11.2.0.2/crs/install/crsconfig_params
Creating trace directory
Failed to add (property/value):('OLD_OCR_ID/'-1') for checkpoint:ROOTCRS_OLDHOMEINFO.Error code is 256

ASM upgrade has started on first node.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'hpc3'
CRS-2673: Attempting to stop 'ora.crsd' on 'hpc3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'hpc3'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'hpc3'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'hpc3'
CRS-2673: Attempting to stop 'ora.CLUSTERDG.dg' on 'hpc3'
CRS-2673: Attempting to stop 'ora.clusdb.db' on 'hpc3'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'hpc3'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.hpc3.vip' on 'hpc3'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'hpc3'
CRS-2677: Stop of 'ora.hpc3.vip' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.CLUSTERDG.dg' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.clusdb.db' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'hpc3'
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'hpc3'
CRS-2677: Stop of 'ora.DATA.dg' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.FLASH.dg' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'hpc3'
CRS-2677: Stop of 'ora.asm' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'hpc3'
CRS-2673: Attempting to stop 'ora.eons' on 'hpc3'
CRS-2677: Stop of 'ora.ons' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'hpc3'
CRS-2677: Stop of 'ora.net1.network' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.eons' on 'hpc3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'hpc3' has completed
CRS-2677: Stop of 'ora.crsd' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'hpc3'
CRS-2673: Attempting to stop 'ora.ctssd' on 'hpc3'
CRS-2673: Attempting to stop 'ora.evmd' on 'hpc3'
CRS-2673: Attempting to stop 'ora.asm' on 'hpc3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'hpc3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'hpc3'
CRS-2677: Stop of 'ora.cssdmonitor' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.asm' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'hpc3'
CRS-2677: Stop of 'ora.cssd' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'hpc3'
CRS-2673: Attempting to stop 'ora.diskmon' on 'hpc3'
CRS-2677: Stop of 'ora.gpnpd' on 'hpc3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'hpc3'
CRS-2677: Stop of 'ora.gipcd' on 'hpc3' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'hpc3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'hpc3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deleted 1 keys from OCR.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9312: Existing ADVM/ACFS installation detected.
ACFS-9314: Removing previous ADVM/ACFS installation.
ACFS-9315: Previous ADVM/ACFS components successfully removed.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Oracle Clusterware operating version was successfully set to 11.2.0.2.0
ASM upgrade has finished on last node.

Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
The error line
Failed to add (property/value):('OLD_OCR_ID/'-1') for checkpoint:ROOTCRS_OLDHOMEINFO.Error code is 256
is similar to part of the error message given in 10036834.8, but after an SR was raised Oracle confirmed it is due to "Bug 10056593: FAIL TO ADD OLD_OCR_ID PROPERTY FOR ROOTCRS_OLDHOMEINFO". As per this bug the message can be ignored and the installation can continue.

The following commands show that GI has been upgraded to 11.2.0.2.
crsctl query crs softwareversion
Oracle Clusterware version on node [hpc3] is [11.2.0.2.0]

crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.2.0]

crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.2.0]
It is important, if the GI Home has been set in environment variables and in the PATH variable, to change these to point to the new home; otherwise commands would be running out of the "old" GI Home.
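For example, assuming a bash shell, the environment can be pointed at the new GI home like this (paths are the ones used in this post):

# point ORACLE_HOME and PATH at the new 11.2.0.2 GI home
export ORACLE_HOME=/flash/11.2.0/grid11.2.0.2
export PATH=$ORACLE_HOME/bin:$PATH
which crsctl    # should now resolve to the new GI home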

Check the inventory.xml to see if the new GI Home has been set properly. Before upgrade
<HOME NAME="Ora11g_gridinfrahome1" LOC="/flash/11.2.0/grid11.2.0.1" TYPE="O" IDX="1" CRS="true">
<NODE_LIST>
<NODE NAME="hpc3"/>
</NODE_LIST>
</HOME>
After upgrade
<HOME NAME="Ora11g_gridinfrahome1" LOC="/flash/11.2.0/grid11.2.0.2" TYPE="O" IDX="1" CRS="true">
<NODE_LIST>
<NODE NAME="hpc3"/>
</NODE_LIST>
</HOME>
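
The central inventory location can be found from oraInst.loc; a minimal sketch, assuming the inventory path seen later during deinstall:

# locate the central inventory and inspect inventory.xml
cat /etc/oraInst.loc                      # shows inventory_loc, e.g. /opt/app/oraInventory
cat /opt/app/oraInventory/ContentsXML/inventory.xml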



Next upgrade the Oracle Home and the database. With the 11.2 patchset this can be done in one step: the software is installed and at the end dbua is run from the same session (no need to manually run dbua). As with the GI Home, it is better to do an out-of-place upgrade for the Oracle Home; trying to install into the same location would result in the following.

After running runInstaller from the database patchset home, select the upgrade existing database option.
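
A minimal sketch of staging and launching the database patchset (the /stage directory is an assumption; both database zips must be extracted into the same location, and they extract into a database sub-directory):

# stage the two database zips together and launch the installer as the oracle user
unzip p10098816_112020_platform_1of7.zip -d /stage
unzip p10098816_112020_platform_2of7.zip -d /stage
cd /stage/database
./runInstaller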

Select the nodes involved and start the install.

After the software has been installed (roughly around the 86% mark on the progress bar) dbua is run, prompting to upgrade the database. In addition to the usual options, this includes an option to upgrade the time zone data, which upgrades the time zone file version from 11 to 14.
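After dbua finishes, the new time zone file version can be confirmed from the database; a minimal sketch, assuming a sysdba connection from the new Oracle home:

# confirm the time zone file version after the upgrade (expected: 14)
sqlplus -s / as sysdba <<'EOF'
select * from v$timezone_file;
EOF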
Below is the summary before and after the upgrade.



Once the database upgrade is complete, runInstaller finishes, concluding the upgrade process. As with the GI Home, set any environment variables to point to the new Oracle Home.
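A quick post-upgrade sanity check, assuming sqlplus from the new Oracle home and a sysdba connection:

# confirm the registered components are VALID and the database reports 11.2.0.2
sqlplus -s / as sysdba <<'EOF'
select comp_name, version, status from dba_registry;
select version from v$instance;
EOF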

Now it is possible to deinstall the "old" GI Home and Oracle Home. Either the deinstall tool within those homes or a standalone deinstall installation (unzipped from p10098816_112020_platform_7of7.zip) could be used for this.
cd /opt/app/oracle/product/11.2.0/clusdb_1/deinstall
unset ORACLE_HOME
./deinstall -local
..
Checking for existence of the Oracle home location /opt/app/oracle/product/11.2.0/clusdb_1
Oracle Home type selected for de-install is: RACDB
Oracle Base selected for de-install is: /opt/app/oracle
Checking for existence of central inventory location /opt/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/app/oracle/product/11.2.0/clusdb_1
The following nodes are part of this cluster: hpc3
..
Specify the list of database names that are configured in this Oracle home []:
Database Check Configuration END
..
Oracle Grid Infrastructure Home is: /opt/app/oracle/product/11.2.0/clusdb_1
The cluster node(s) on which the Oracle home exists are: (Please input nodes seperated by ",", eg: node1,node2,...)hpc3
Since -local option has been specified, the Oracle home will be de-installed only on the local node, 'hpc3', and the global configuration will be removed.
Oracle Home selected for de-install is: /opt/app/oracle/product/11.2.0/clusdb_1
It is important to note that no database should be running out of this Oracle Home. Similarly, the GI Home can also be deinstalled. In some cases during deinstall the directories inside the GI Home may not get deleted and a "directory in use" error message could be seen. However, all the files inside the directories will be deleted (except in the bin directory) and the GI Home will be listed in inventory.xml as removed. In this case the "old" GI Home can be deleted with an OS command, as sketched below.
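
For example, once the home is marked as removed in inventory.xml, the leftover directory can be cleaned up manually (the path is the old GI home used in this post):

# remove whatever is left of the old 11.2.0.1 GI home after deinstall
rm -rf /flash/11.2.0/grid11.2.0.1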