Tuesday, July 6, 2010

Upgrading 11.1.0.7 Clusterware and ASM to 11.2.0.1

This blog gives the highlights of upgrading an 11gR1 clusterware to 11gR2 grid infrastructure. The system used for the upgrade is one that was initially created as a 10gR2 cluster and then upgraded to 11gR1, where raw devices were moved to block devices and finally block device ASM was moved to ASMLib.

1. Leave the existing RAC running; as of 11gR2 all upgrades are rolling upgrades.

2. Unset ORACLE_BASE, ORACLE_HOME, ORACLE_SID, ORA_CRS_HOME, ORA_NLS10 and TNS_ADMIN. Remove any reference to the existing system from the PATH and LD_LIBRARY_PATH variables, as in the sketch below.
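
A minimal sketch in bash (the /opt/app/oracle pattern for the old home is illustrative; use the actual 11.1 home path):

unset ORACLE_BASE ORACLE_HOME ORACLE_SID ORA_CRS_HOME ORA_NLS10 TNS_ADMIN
# rebuild PATH and LD_LIBRARY_PATH without entries that point at the old home
export PATH=$(echo "$PATH" | tr ':' '\n' | grep -v '/opt/app/oracle' | paste -s -d:)
export LD_LIBRARY_PATH=$(echo "$LD_LIBRARY_PATH" | tr ':' '\n' | grep -v '/opt/app/oracle' | paste -s -d:)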

3. Prepare the new locations for the grid infrastructure home, the SCAN IP and the operating system user groups, for example as below.
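
A sketch of the preparation run as root (the asm* group names are the 11gR2 defaults, not taken from this system; the grid home path matches the one used later in this upgrade):

# OS groups for ASM role separation
/usr/sbin/groupadd asmadmin
/usr/sbin/groupadd asmdba
/usr/sbin/groupadd asmoper
/usr/sbin/usermod -a -G asmadmin,asmdba,asmoper oracle
# out of place grid infrastructure home
mkdir -p /opt/app/11.2.0/grid
chown -R oracle:oinstall /opt/app/11.2.0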

4. Run runInstaller from the grid infrastructure software location and select the upgrade option.
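
Assuming the software was unzipped to /tmp/grid (an illustrative staging location), as the oracle user:

cd /tmp/grid
./runInstaller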

5. Although Oracle says "Oracle recommends that you leave Oracle RAC instances running. When you start the root script on each node, that node's instances are shut down and then started up again by the rootupgrade.sh script", the installer complains that instances are running; this can be ignored to proceed to the next step.

6. This test system only had a single node, but if there were multiple nodes then Oracle "recommends that you select all cluster member nodes for the upgrade, and then shut down database instances on each node before you run the upgrade root script, starting the database instance up again on each node after the upgrade is complete" (sketched below).
Oracle also recommends that you "upgrade Oracle ASM at the same time that you upgrade the Oracle Clusterware binaries. Until ASM is upgraded, Oracle databases that use ASM can't be created. Until ASM is upgraded, the 11g release 2 (11.2) ASM management tools in the Grid home (for example, srvctl) will not work."
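
Per node, that would look something like the following with the existing 11gR1 srvctl (clusdb and clusdb1 are this cluster's database and instance names, visible in the resource listing further down):

# before running rootupgrade.sh on this node
srvctl stop instance -d clusdb -i clusdb1
# after rootupgrade.sh completes on this node
srvctl start instance -d clusdb -i clusdb1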

7. Give the new SCAN IP; the SCAN name must already resolve, which can be checked as below.
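
A quick resolution check (hpc-scan is a hypothetical SCAN name):

nslookup hpc-scan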

8. Oracle uses a less privileged user, ASMSNMP, to monitor ASM. Give a password to be associated with this user.

9. Provide the new locations for the installation; the 11gR2 grid upgrade is an out-of-place upgrade.

10. Summary


11. When prompted, run the rootupgrade.sh script as the root user.

rootupgrade.sh output
/opt/app/11.2.0/grid/rootupgrade.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=  /opt/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-07-06 10:14:05: Parsing the host name
2010-07-06 10:14:05: Checking for super user privileges
2010-07-06 10:14:05: User has super user privileges
Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
Cluster Synchronization Services appears healthy
Event Manager appears healthy
Cluster Ready Services appears healthy
Shutting down Oracle Cluster Ready Services (CRS):
Jul 06 10:14:21.900 | INF | daemon shutting down
Stopping resources.
This could take several minutes.
Successfully stopped Oracle Clusterware resources
Stopping Cluster Synchronization Services.
Shutting down the Cluster Synchronization Services daemon.
Shutdown request successfully issued.
Shutdown has begun. The daemons should exit soon.
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'.. 
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.mdnsd' on 'hpc1'
CRS-2676: Start of 'ora.mdnsd' on 'hpc1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'hpc1'
CRS-2676: Start of 'ora.gipcd' on 'hpc1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'hpc1'
CRS-2676: Start of 'ora.gpnpd' on 'hpc1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'hpc1'
CRS-2676: Start of 'ora.cssdmonitor' on 'hpc1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'hpc1'
CRS-2672: Attempting to start 'ora.diskmon' on 'hpc1'
CRS-2676: Start of 'ora.diskmon' on 'hpc1' succeeded
CRS-2676: Start of 'ora.cssd' on 'hpc1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'hpc1'
CRS-2676: Start of 'ora.ctssd' on 'hpc1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'hpc1'
CRS-2676: Start of 'ora.crsd' on 'hpc1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'hpc1'
CRS-2676: Start of 'ora.evmd' on 'hpc1' succeeded
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.

hpc1     2010/07/06 10:17:18     /opt/app/11.2.0/grid/cdata/crs_10g_cluster/backup_20100706_101718.ocr
Start upgrade invoked..
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade the OCR.
The OCR was successfully upgraded.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 11.2.0.1.0

hpc1     2010/07/06 10:19:49     /opt/app/11.2.0/grid/cdata/hpc1/backup_20100706_101949.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 12001 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /opt/app/oraInventory
'UpdateNodeList' was successful.                     
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 12001 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /opt/app/oraInventory
'UpdateNodeList' was successful.
12. Click OK on the OUI window, which will then run ASMCA to upgrade ASM, followed by the DB Control upgrade. Once these upgrades are done, the upgrade of the clusterware is complete.

13. crs_stat is deprecated in 11gR2 but still works; crsctl should be used instead (its equivalent is shown after the listing).
crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    hpc1
ora.FLASH.dg   ora....up.type ONLINE    ONLINE    hpc1
ora....ER.lsnr ora....er.type ONLINE    ONLINE    hpc1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    hpc1
ora.asm        ora.asm.type   ONLINE    ONLINE    hpc1
ora....b1.inst application    ONLINE    ONLINE    hpc1
ora.clusdb.db  application    ONLINE    ONLINE    hpc1
ora.eons       ora.eons.type  ONLINE    ONLINE    hpc1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora.hpc1.vip   ora....t1.type ONLINE    ONLINE    hpc1
ora....network ora....rk.type ONLINE    ONLINE    hpc1
ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE
ora.ons        ora.ons.type   ONLINE    ONLINE    hpc1
ora....ry.acfs ora....fs.type ONLINE    ONLINE    hpc1
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    hpc1
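
The 11gR2 crsctl equivalent of crs_stat -t is:

crsctl status resource -t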

crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
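
crsctl can also check all nodes in one go (11.2 syntax):

crsctl check cluster -all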
14. The vote disk is still on the block device; though not supported for new installations, it is still a valid location for upgrades.
./crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   755a7dc1e2cfcf76bf5b79632d35b4a9 (/dev/sdc6) []
Located 1 voting disk(s).
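
If desired, the vote disk can later be moved off the block device into ASM (DATA is one of the disk groups in the resource listing above):

crsctl replace votedisk +DATA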
15. The OCR is also on a block device, same as the vote disk; not supported as a location for new installations but valid for upgrades.
ocrcheck
Status of Oracle Cluster Registry is as follows :
Version                  :          3
Total space (kbytes)     :     148348
Used space (kbytes)      :       4308
Available space (kbytes) :     144040
ID                       :  552644455
Device/File Name         :  /dev/sdc5
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check bypassed due to non-privileged user
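
Similarly, the OCR can later be relocated into ASM with ocrconfig, run as root:

# add an OCR copy in the DATA disk group, then drop the block device copy
ocrconfig -add +DATA
ocrconfig -delete /dev/sdc5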
16. /etc/oratab is automatically updated with the new location of the ASM home; with 11gR2 it is the same as the grid infrastructure home.
+ASM1:/opt/app/11.2.0/grid:N
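
With this entry in place, oraenv (installed to /usr/local/bin by root.sh earlier) can set the environment for the upgraded ASM instance:

# source oraenv and enter +ASM1 at the prompt
. oraenv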
17. /etc/inittab now has the Oracle High Availability Services daemon entry instead of the three clusterware entries as before.
tail /etc/inittab
h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
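
Automatic startup through this entry can be toggled with crsctl, run as root (11.2 syntax):

# disable or re-enable clusterware startup at boot
crsctl disable crs
crsctl enable crs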
18. Finally, to confirm the active, release and software versions:
crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.1.0]

crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.1.0]

crsctl query crs softwareversion
Oracle Clusterware version on node [hpc1] is [11.2.0.1.0]