Relevant sections for the 11.1 to 11.2 upgrade, from the Grid Infrastructure Installation Guide.
After you have completed the Oracle Clusterware 11g release 2 (11.2) upgrade, if you did not choose to upgrade Oracle ASM when you upgraded Oracle Clusterware, then you can do it separately using the Oracle Automatic Storage Management Configuration Assistant (asmca) to perform rolling upgrades. You can use asmca to complete the upgrade separately, but you should do it soon after you upgrade Oracle Clusterware, as Oracle ASM management tools such as srvctl will not work until Oracle ASM is upgraded.
ASMCA performs a rolling upgrade only if the earlier version of Oracle ASM is either 11.1.0.6 or 11.1.0.7. Otherwise, ASMCA performs a normal upgrade, in which ASMCA brings down all Oracle ASM instances on all nodes of the cluster, and then brings them all up in the new Grid home.
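As a sketch of driving the separate ASM upgrade from the new Grid home (the `-silent -upgradeASM` options are assumed from the 11.2 asmca tooling; verify against `asmca -help` in your environment before use):

```shell
# Run as the Grid Infrastructure owner from the new 11.2 Grid home.
# Flags are assumptions; confirm with 'asmca -help' on your release.
export ORACLE_HOME=/opt/app/11.2.0/grid
$ORACLE_HOME/bin/asmca -silent -upgradeASM
```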
You can use Oracle Database release 9.2, release 10.x and release 11.1 with Oracle Clusterware 11g release 2 (11.2). However, placing Oracle Database homes on Oracle ACFS that are prior to Oracle Database release 11.2 is not supported, because earlier releases are not designed to use Oracle ACFS.
If you upgrade an existing version of Oracle Clusterware and Oracle ASM to Oracle Grid Infrastructure 11g release 2 (11.2), which includes Oracle Clusterware and Oracle ASM, and you also plan to upgrade your Oracle RAC database to 11.2, then the required configuration of existing databases is completed automatically when you complete the Oracle RAC upgrade, and this section does not concern you.
However, if you upgrade to Oracle Grid Infrastructure 11g release 2 (11.2), and you have existing Oracle RAC installations you do not plan to upgrade, or if you install older versions of Oracle RAC (9.2, 10.2 or 11.1) on a release 11.2 Oracle Grid Infrastructure cluster, then you must complete additional configuration tasks or apply patches, or both, before the older databases will work correctly with Oracle Grid Infrastructure.
Before you start an Oracle RAC or Oracle Database installation on an Oracle Clusterware 11g release 2 (11.2) installation, if you are upgrading from release 11.1.0.7, 11.1.0.6, or 10.2.0.4, then Oracle recommends that you check for the latest recommended patches for the release you are upgrading from, and install those patches as needed on your existing database installations before upgrading.
During an upgrade, all cluster member nodes are pinned automatically, and no manual pinning is required for existing databases. This procedure is required only if you install older database versions after installing Oracle Grid Infrastructure release 11.2 software.
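If an older database does have to be added after the GI upgrade, nodes can be pinned manually with crsctl from the new Grid home. A sketch using this cluster's node names and paths:

```shell
# Run as root from the 11.2 Grid home; pinning rac1 and rac2 allows
# pre-11.2 databases to register with the cluster.
/opt/app/11.2.0/grid/bin/crsctl pin css -n rac1 rac2

# olsnodes -t shows whether each node is Pinned or Unpinned.
/opt/app/11.2.0/grid/bin/olsnodes -t -n
```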
To upgrade existing 11.1 Oracle Clusterware installations to Oracle Grid Infrastructure 11.2.0.3 or later, you must patch the release 11.1 Oracle Clusterware home with the patch for bug 7308467 (included in the 11.1.0.7.6 CRS PSU). This cluster already has the 11.1.0.7.7 CRS PSU installed, and the January 2012 PSU installed on the RAC homes.
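The presence of the fix can be verified against the old CRS home with OPatch before starting. A minimal check, using this cluster's CRS home path:

```shell
# List patches applied to the 11.1 CRS home and look for bug 7308467.
export ORACLE_HOME=/opt/crs/oracle/product/11.1.0/crs
$ORACLE_HOME/OPatch/opatch lsinventory | grep 7308467
```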
With Oracle Clusterware 11g release 1 and later releases, the same user that owned the Oracle Clusterware 10g software must perform the Oracle Clusterware 11g upgrade. Before Oracle Database 11g, either all Oracle software installations were owned by the Oracle user, typically oracle, or Oracle Database software was owned by oracle, and Oracle Clusterware software was owned by a separate user, typically crs.
During a major version upgrade to 11g release 2 (11.2), the software in the 11g release 2 (11.2) Oracle Grid Infrastructure home is not fully functional until the upgrade is completed. Running srvctl, crsctl, and other commands from the 11g release 2 (11.2) home is not supported until the final rootupgrade.sh script is run and the upgrade is complete across all nodes. To manage databases in the existing earlier version (release 10.x or 11.1) database homes during the Oracle Grid Infrastructure upgrade, use the srvctl from the existing database homes.
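For example, to check a pre-upgrade database during the GI upgrade window (the database home path below is a placeholder, not taken from this cluster; the database name rac11g1 matches this cluster's database resource):

```shell
# Use the srvctl shipped with the database's own (pre-11.2) home,
# not the one in the new Grid home.
export ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1   # placeholder path
$ORACLE_HOME/bin/srvctl status database -d rac11g1
```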
From the Upgrade Guide:
A subset of nodes cannot be selected when upgrading from an earlier release to 11.2.0.3. Before the new database release 11.2.0.3 software can be installed on the system, the root script for upgrading Oracle Grid Infrastructure invokes ASMCA to upgrade Oracle ASM to release 11.2.0.3.
The cluster verification utility (cluvfy) allows a pre-upgrade check. Some noteworthy checks that failed are listed below.
Check: Kernel parameter for "shmmni"
Node Name        Current      Configured   Required     Status       Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2             4096         unknown      4096         failed       Configured value too low.
rac1             4096         unknown      4096         failed       Configured value too low.
Result: Kernel parameter check failed for "shmmni"

Even though this is an existing 11gR1 system and this value is set, it is reported as unknown. Running the fix script generated by cluvfy fixed this.
11gR2 requires a different ephemeral port range than 11gR1:
Check: Kernel parameter for "ip_local_port_range"

Also install cvuqdisk-1.0.9-1.rpm for the OCR block device sharedness check; otherwise this check will fail.
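If the port range check fails, the 11gR2 range can be set with sysctl (standard Linux kernel parameters; the values are the 11gR2 requirement):

```shell
# Set the ephemeral port range required by 11gR2.
sysctl -w net.ipv4.ip_local_port_range="9000 65500"
# Persist across reboots.
echo 'net.ipv4.ip_local_port_range = 9000 65500' >> /etc/sysctl.conf
```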
If the NTP service is not used for time synchronization, remove its configuration so that the NTP check passes (Oracle CTSS is used instead).
NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed
No NTP Daemons or Services were found to be running
PRVF-5507 : NTP daemon or service is not running on any node but NTP configuration file exists on the following node(s): rac2,rac1
Result: Clock synchronization check using Network Time Protocol(NTP) failed

To fix (if NTP is not used):
mv /etc/ntp.conf /etc/ntp.conf.orig
/sbin/chkconfig ntp off
/sbin/chkconfig --list | grep ntp
ntpd            0:off   1:off   2:off   3:off   4:off   5:off   6:off

cluvfy also checks for patch 11724953. This is the April 2011 CRS PSU, so if it is applied no additional patches are needed.
The full output from the cluvfy check is given below.
./runcluvfy.sh stage -pre crsinst -upgrade -n rac1,rac2 -rolling -src_crshome /opt/crs/oracle/product/11.1.0/crs -dest_crshome /opt/app/11.2.0/grid -dest_version 11.2.0.3.0 -fixup -fixupdir /home/oracle/fixupscript -verbose Performing pre-checks for cluster services setup Checking node reachability... Check: Node reachability from node "rac1" Destination Node Reachable? ------------------------------------ ------------------------ rac2 yes rac1 yes Result: Node reachability check passed from node "rac1" Checking user equivalence... Check: User equivalence for user "oracle" Node Name Status ------------------------------------ ------------------------ rac2 passed rac1 passed Result: User equivalence check passed for user "oracle" Checking CRS user consistency Result: CRS user consistency check successful Checking node connectivity... Checking hosts config file... Node Name Status ------------------------------------ ------------------------ rac2 passed rac1 passed Verification of the hosts config file successful Interface information for node "rac2" Name IP Address Subnet Gateway Def. Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- ------ eth0 192.168.0.86 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:DD:3E:76 1500 eth0 192.168.0.90 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:DD:3E:76 1500 eth1 192.168.0.88 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:F4:97:4E 1500 Interface information for node "rac1" Name IP Address Subnet Gateway Def. Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- ------ eth0 192.168.0.85 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:AA:B9:2B 1500 eth0 192.168.0.89 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:AA:B9:2B 1500 eth1 192.168.0.87 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:37:7D:FC 1500 Check: Node connectivity for interface "eth0" Source Destination Connected? 
------------------------------ ------------------------------ ---------------- rac2[192.168.0.86] rac2[192.168.0.90] yes rac2[192.168.0.86] rac2[192.168.0.88] yes rac2[192.168.0.86] rac1[192.168.0.85] yes rac2[192.168.0.86] rac1[192.168.0.89] yes rac2[192.168.0.86] rac1[192.168.0.87] yes rac2[192.168.0.90] rac2[192.168.0.88] yes rac2[192.168.0.90] rac1[192.168.0.85] yes rac2[192.168.0.90] rac1[192.168.0.89] yes rac2[192.168.0.90] rac1[192.168.0.87] yes rac2[192.168.0.88] rac1[192.168.0.85] yes rac2[192.168.0.88] rac1[192.168.0.89] yes rac2[192.168.0.88] rac1[192.168.0.87] yes rac1[192.168.0.85] rac1[192.168.0.89] yes rac1[192.168.0.85] rac1[192.168.0.87] yes rac1[192.168.0.89] rac1[192.168.0.87] yes Result: Node connectivity passed for interface "eth0" Check: TCP connectivity of subnet "192.168.0.0" Source Destination Connected? ------------------------------ ------------------------------ ---------------- rac1:192.168.0.85 rac2:192.168.0.86 passed rac1:192.168.0.85 rac2:192.168.0.90 passed rac1:192.168.0.85 rac2:192.168.0.88 passed rac1:192.168.0.85 rac1:192.168.0.89 passed rac1:192.168.0.85 rac1:192.168.0.87 passed Result: TCP connectivity check passed for subnet "192.168.0.0" Check: Node connectivity for interface "eth1" Checking subnet mask consistency... Subnet mask consistency check passed for subnet "192.168.0.0". Subnet mask consistency check passed. Result: Node connectivity check passed Checking multicast communication... Checking subnet "192.168.0.0" for multicast communication with multicast group "230.0.1.0"... Check of subnet "192.168.0.0" for multicast communication with multicast group "230.0.1.0" passed. Check of multicast communication passed. Checking OCR integrity... Check for compatible storage device for OCR location "/dev/sdb1"... Checking OCR device "/dev/sdb1" for sharedness... OCR device "/dev/sdb1" is shared... Checking size of the OCR location "/dev/sdb1" ... Size check for OCR location "/dev/sdb1" successful... 
OCR integrity check passed Checking ASMLib configuration. Node Name Status ------------------------------------ ------------------------ rac2 passed rac1 passed Result: Check for ASMLib configuration passed. Check: Total memory Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 1.9641GB (2059516.0KB) 1.5GB (1572864.0KB) passed rac1 1.9641GB (2059516.0KB) 1.5GB (1572864.0KB) passed Result: Total memory check passed Check: Available memory Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 1.2643GB (1325668.0KB) 50MB (51200.0KB) passed rac1 1.0908GB (1143820.0KB) 50MB (51200.0KB) passed Result: Available memory check passed Check: Swap space Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 3.9987GB (4192956.0KB) 2.9462GB (3089274.0KB) passed rac1 3.9987GB (4192956.0KB) 2.9462GB (3089274.0KB) passed Result: Swap space check passed Check: Free disk space for "rac2:/opt/app/11.2.0/grid,rac2:/tmp" Path Node Name Mount point Available Required Status ---------------- ------------ ------------ ------------ ------------ ------------ /opt/app/11.2.0/grid rac2 / 21.5547GB 7.5GB passed /tmp rac2 / 21.5547GB 7.5GB passed Result: Free disk space check passed for "rac2:/opt/app/11.2.0/grid,rac2:/tmp" Check: Free disk space for "rac1:/opt/app/11.2.0/grid,rac1:/tmp" Path Node Name Mount point Available Required Status ---------------- ------------ ------------ ------------ ------------ ------------ /opt/app/11.2.0/grid rac1 / 21.2863GB 7.5GB passed /tmp rac1 / 21.2863GB 7.5GB passed Result: Free disk space check passed for "rac1:/opt/app/11.2.0/grid,rac1:/tmp" Check: User existence for "oracle" Node Name Status Comment ------------ ------------------------ ------------------------ rac2 passed exists(501) rac1 passed exists(501) Checking for multiple users with UID value 501 Result: Check for 
multiple users with UID value 501 passed Result: User existence check passed for "oracle" Check: Group existence for "oinstall" Node Name Status Comment ------------ ------------------------ ------------------------ rac2 passed exists rac1 passed exists Result: Group existence check passed for "oinstall" Check: Membership of user "oracle" in group "oinstall" [as Primary] Node Name User Exists Group Exists User in Group Primary Status ---------------- ------------ ------------ ------------ ------------ ------------ rac2 yes yes yes yes passed rac1 yes yes yes yes passed Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed Check: Run level Node Name run level Required Status ------------ ------------------------ ------------------------ ---------- rac2 3 3,5 passed rac1 3 3,5 passed Result: Run level check passed Check: Hard limits for "maximum open file descriptors" Node Name Type Available Required Status ---------------- ------------ ------------ ------------ ---------------- rac2 hard 65536 65536 passed rac1 hard 65536 65536 passed Result: Hard limits check passed for "maximum open file descriptors" Check: Soft limits for "maximum open file descriptors" Node Name Type Available Required Status ---------------- ------------ ------------ ------------ ---------------- rac2 soft 1024 1024 passed rac1 soft 1024 1024 passed Result: Soft limits check passed for "maximum open file descriptors" Check: Hard limits for "maximum user processes" Node Name Type Available Required Status ---------------- ------------ ------------ ------------ ---------------- rac2 hard 16384 16384 passed rac1 hard 16384 16384 passed Result: Hard limits check passed for "maximum user processes" Check: Soft limits for "maximum user processes" Node Name Type Available Required Status ---------------- ------------ ------------ ------------ ---------------- rac2 soft 2047 2047 passed rac1 soft 2047 2047 passed Result: Soft limits check passed for "maximum user 
processes" Checking for Oracle patch "11724953" in home "/opt/crs/oracle/product/11.1.0/crs". Node Name Applied Required Comment ------------ ------------------------ ------------------------ ---------- rac2 11724953 11724953 passed rac1 11724953 11724953 passed Result: Check for Oracle patch "11724953" in home "/opt/crs/oracle/product/11.1.0/crs" passed There are no oracle patches required for home "/opt/app/11.2.0/grid". Check: System architecture Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 x86_64 x86_64 passed rac1 x86_64 x86_64 passed Result: System architecture check passed Check: Kernel version Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 2.6.18-194.el5 2.6.18 passed rac1 2.6.18-194.el5 2.6.18 passed Result: Kernel version check passed Check: Kernel parameter for "semmsl" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac2 3010 3010 250 passed rac1 3010 3010 250 passed Result: Kernel parameter check passed for "semmsl" Check: Kernel parameter for "semmns" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac2 385280 385280 32000 passed rac1 385280 385280 32000 passed Result: Kernel parameter check passed for "semmns" Check: Kernel parameter for "semopm" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac2 3010 3010 100 passed rac1 3010 3010 100 passed Result: Kernel parameter check passed for "semopm" Check: Kernel parameter for "semmni" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac2 128 128 128 passed rac1 128 128 128 passed Result: Kernel parameter check passed for 
"semmni" Check: Kernel parameter for "shmmax" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac2 68719476736 68719476736 1054472192 passed rac1 68719476736 68719476736 1054472192 passed Result: Kernel parameter check passed for "shmmax" Check: Kernel parameter for "shmmni" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac2 4096 4096 4096 passed rac1 4096 4096 4096 passed Result: Kernel parameter check passed for "shmmni" Check: Kernel parameter for "shmall" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac2 4294967296 4294967296 2097152 passed rac1 4294967296 4294967296 2097152 passed Result: Kernel parameter check passed for "shmall" Check: Kernel parameter for "file-max" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac2 6815744 6815744 6815744 passed rac1 6815744 6815744 6815744 passed Result: Kernel parameter check passed for "file-max" Check: Kernel parameter for "ip_local_port_range" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac2 between 9000.0 & 65500.0 between 9000.0 & 65500.0 between 9000.0 & 65500.0 passed rac1 between 9000.0 & 65500.0 between 9000.0 & 65500.0 between 9000.0 & 65500.0 passed Result: Kernel parameter check passed for "ip_local_port_range" Check: Kernel parameter for "rmem_default" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac2 4194304 4194304 262144 passed rac1 4194304 4194304 262144 passed Result: Kernel parameter check passed for "rmem_default" Check: Kernel parameter for "rmem_max" Node Name Current 
Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac2 4194304 4194304 4194304 passed rac1 4194304 4194304 4194304 passed Result: Kernel parameter check passed for "rmem_max" Check: Kernel parameter for "wmem_default" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac2 1048576 1048576 262144 passed rac1 1048576 1048576 262144 passed Result: Kernel parameter check passed for "wmem_default" Check: Kernel parameter for "wmem_max" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac2 2097152 2097152 1048576 passed rac1 2097152 2097152 1048576 passed Result: Kernel parameter check passed for "wmem_max" Check: Kernel parameter for "aio-max-nr" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac2 3145728 3145728 1048576 passed rac1 3145728 3145728 1048576 passed Result: Kernel parameter check passed for "aio-max-nr" Check: Package existence for "make" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 make-3.81-3.el5 make-3.81 passed rac1 make-3.81-3.el5 make-3.81 passed Result: Package existence check passed for "make" Check: Package existence for "binutils" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 binutils-2.17.50.0.6-14.el5 binutils-2.17.50.0.6 passed rac1 binutils-2.17.50.0.6-14.el5 binutils-2.17.50.0.6 passed Result: Package existence check passed for "binutils" Check: Package existence for "gcc(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 gcc(x86_64)-4.1.2-48.el5 gcc(x86_64)-4.1.2 passed rac1 gcc(x86_64)-4.1.2-48.el5 
gcc(x86_64)-4.1.2 passed Result: Package existence check passed for "gcc(x86_64)" Check: Package existence for "libaio(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 libaio(x86_64)-0.3.106-5 libaio(x86_64)-0.3.106 passed rac1 libaio(x86_64)-0.3.106-5 libaio(x86_64)-0.3.106 passed Result: Package existence check passed for "libaio(x86_64)" Check: Package existence for "glibc(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 glibc(x86_64)-2.5-49 glibc(x86_64)-2.5-24 passed rac1 glibc(x86_64)-2.5-49 glibc(x86_64)-2.5-24 passed Result: Package existence check passed for "glibc(x86_64)" Check: Package existence for "compat-libstdc++-33(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 compat-libstdc++-33(x86_64)-3.2.3-61 compat-libstdc++-33(x86_64)-3.2.3 passed rac1 compat-libstdc++-33(x86_64)-3.2.3-61 compat-libstdc++-33(x86_64)-3.2.3 passed Result: Package existence check passed for "compat-libstdc++-33(x86_64)" Check: Package existence for "elfutils-libelf(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 elfutils-libelf(x86_64)-0.137-3.el5 elfutils-libelf(x86_64)-0.125 passed rac1 elfutils-libelf(x86_64)-0.137-3.el5 elfutils-libelf(x86_64)-0.125 passed Result: Package existence check passed for "elfutils-libelf(x86_64)" Check: Package existence for "elfutils-libelf-devel" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 elfutils-libelf-devel-0.137-3.el5 elfutils-libelf-devel-0.125 passed rac1 elfutils-libelf-devel-0.137-3.el5 elfutils-libelf-devel-0.125 passed Result: Package existence check passed for "elfutils-libelf-devel" Check: Package existence for "glibc-common" Node Name Available Required Status 
------------ ------------------------ ------------------------ ---------- rac2 glibc-common-2.5-49 glibc-common-2.5 passed rac1 glibc-common-2.5-49 glibc-common-2.5 passed Result: Package existence check passed for "glibc-common" Check: Package existence for "glibc-devel(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 glibc-devel(x86_64)-2.5-49 glibc-devel(x86_64)-2.5 passed rac1 glibc-devel(x86_64)-2.5-49 glibc-devel(x86_64)-2.5 passed Result: Package existence check passed for "glibc-devel(x86_64)" Check: Package existence for "glibc-headers" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 glibc-headers-2.5-49 glibc-headers-2.5 passed rac1 glibc-headers-2.5-49 glibc-headers-2.5 passed Result: Package existence check passed for "glibc-headers" Check: Package existence for "gcc-c++(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 gcc-c++(x86_64)-4.1.2-48.el5 gcc-c++(x86_64)-4.1.2 passed rac1 gcc-c++(x86_64)-4.1.2-48.el5 gcc-c++(x86_64)-4.1.2 passed Result: Package existence check passed for "gcc-c++(x86_64)" Check: Package existence for "libaio-devel(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 libaio-devel(x86_64)-0.3.106-5 libaio-devel(x86_64)-0.3.106 passed rac1 libaio-devel(x86_64)-0.3.106-5 libaio-devel(x86_64)-0.3.106 passed Result: Package existence check passed for "libaio-devel(x86_64)" Check: Package existence for "libgcc(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 libgcc(x86_64)-4.1.2-48.el5 libgcc(x86_64)-4.1.2 passed rac1 libgcc(x86_64)-4.1.2-48.el5 libgcc(x86_64)-4.1.2 passed Result: Package existence check passed for "libgcc(x86_64)" Check: Package existence for 
"libstdc++(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 libstdc++(x86_64)-4.1.2-48.el5 libstdc++(x86_64)-4.1.2 passed rac1 libstdc++(x86_64)-4.1.2-48.el5 libstdc++(x86_64)-4.1.2 passed Result: Package existence check passed for "libstdc++(x86_64)" Check: Package existence for "libstdc++-devel(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 libstdc++-devel(x86_64)-4.1.2-48.el5 libstdc++-devel(x86_64)-4.1.2 passed rac1 libstdc++-devel(x86_64)-4.1.2-48.el5 libstdc++-devel(x86_64)-4.1.2 passed Result: Package existence check passed for "libstdc++-devel(x86_64)" Check: Package existence for "sysstat" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 sysstat-7.0.2-3.el5 sysstat-7.0.2 passed rac1 sysstat-7.0.2-3.el5 sysstat-7.0.2 passed Result: Package existence check passed for "sysstat" Check: Package existence for "ksh" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 ksh-20100202-1.el5 ksh-20060214 passed rac1 ksh-20100202-1.el5 ksh-20060214 passed Result: Package existence check passed for "ksh" Checking for multiple users with UID value 0 Result: Check for multiple users with UID value 0 passed Check: Current group ID Result: Current group ID check passed Starting check for consistency of primary group of root user Node Name Status ------------------------------------ ------------------------ rac2 passed rac1 passed Check for consistency of root user's primary group passed Check: Package existence for "cvuqdisk" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac2 cvuqdisk-1.0.9-1 cvuqdisk-1.0.9-1 passed rac1 cvuqdisk-1.0.9-1 cvuqdisk-1.0.9-1 passed Result: Package existence check passed for "cvuqdisk" Starting Clock 
synchronization checks using Network Time Protocol(NTP)... NTP Configuration file check started... Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes No NTP Daemons or Services were found to be running Result: Clock synchronization check using Network Time Protocol(NTP) passed Checking Core file name pattern consistency... Core file name pattern consistency check passed. Checking to make sure user "oracle" is not in "root" group Node Name Status Comment ------------ ------------------------ ------------------------ rac2 passed does not exist rac1 passed does not exist Result: User "oracle" is not part of "root" group. Check passed Check default user file creation mask Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- rac2 0022 0022 passed rac1 0022 0022 passed Result: Default user file creation mask check passed Checking consistency of file "/etc/resolv.conf" across nodes Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined File "/etc/resolv.conf" does not have both domain and search entries defined Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes... domain entry in file "/etc/resolv.conf" is consistent across nodes Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes... 
search entry in file "/etc/resolv.conf" is consistent across nodes Checking file "/etc/resolv.conf" to make sure that only one search entry is defined All nodes have one search entry defined in file "/etc/resolv.conf" Checking all nodes to make sure that search entry is "domain.net" as found on node "rac2" All nodes of the cluster have same value for 'search' Checking DNS response time for an unreachable node Node Name Status ------------------------------------ ------------------------ rac2 passed rac1 passed The DNS response time for an unreachable node is within acceptable limit on all nodes File "/etc/resolv.conf" is consistent across nodes UDev attributes check for OCR locations started... Checking udev settings for device "/dev/sdb1" Device Owner Group Permissions Result ---------------- ------------ ------------ ------------ ---------------- sdb1 root oinstall 640 passed sdb1 root oinstall 640 passed Result: UDev attributes check passed for OCR locations UDev attributes check for Voting Disk locations started... Checking udev settings for device "/dev/sdb2" Device Owner Group Permissions Result ---------------- ------------ ------------ ------------ ---------------- sdb2 oracle oinstall 640 passed sdb2 oracle oinstall 640 passed Result: UDev attributes check passed for Voting Disk locations Check: Time zone consistency Result: Time zone consistency check passed Checking VIP configuration. Checking VIP Subnet configuration. Check for VIP Subnet configuration passed. Checking VIP reachability Check for VIP reachability passed. Checking Oracle Cluster Voting Disk configuration... Oracle Cluster Voting Disk configuration check passed Clusterware version consistency passed Pre-check for cluster services setup was successful.

11gR2 GI uses a SCAN IP, which is a prerequisite for the upgrade.
Even though the upgrade must be done as the same user, new OS groups can be created for ASM administration. Using the same DBA and OPER groups as the oracle user would give a warning.

Create the new ASM admin groups:
groupadd asmadmin
groupadd asmdba
groupadd asmoper

Modify the oracle user from

id oracle
uid=501(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),503(oper)

to

usermod -g oinstall -G dba,oper,asmdba,asmoper,asmadmin oracle
id oracle
uid=501(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),503(oper),504(asmadmin),505(asmdba),506(asmoper)

Start the Clusterware upgrade while the cluster is up:
crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora.rac11g1.db application    ONLINE    ONLINE    rac1
ora....11.inst application    ONLINE    ONLINE    rac1
ora....12.inst application    ONLINE    ONLINE    rac2
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2

Execute runInstaller and follow the wizard (only key steps are shown).
Select both (or all) nodes.
Specify the SCAN IP.
Password for the less-privileged user (ASMSNMP) used to monitor ASM.
Use the newly created ASM admin OS groups.
New GI home location for the out-of-place upgrade.
Pre-req check.
Summary
Until rootupgrade.sh is run, the cluster stack is up on all nodes. When rootupgrade.sh is run on one node, the cluster stack on that node is brought down while the other nodes' clusterware stacks remain up and open for use; this way, upgrades are rolling by default. Once rootupgrade.sh finishes, the cluster stack on the node where it ran is brought back up and ready for use, and the next node's cluster stack is brought down when rootupgrade.sh is run there.
Run rootupgrade.sh on rac1
# /opt/app/11.2.0/grid/rootupgrade.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /opt/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Replacing Clusterware entries in inittab
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ...
succeeded crs_stat -t Name Type Target State Host ------------------------------------------------------------ ora....SM1.asm application ONLINE ONLINE rac1 ora....C1.lsnr application ONLINE ONLINE rac1 ora.rac1.gsd application ONLINE OFFLINE ora.rac1.ons application ONLINE ONLINE rac1 ora.rac1.vip application ONLINE ONLINE rac1 ora.rac11g1.db application ONLINE ONLINE rac2 ora....11.inst application ONLINE ONLINE rac1 ora....12.inst application ONLINE ONLINE rac2 ora....SM2.asm application ONLINE ONLINE rac2 ora....C2.lsnr application ONLINE ONLINE rac2 ora.rac2.gsd application ONLINE ONLINE rac2 ora.rac2.ons application ONLINE ONLINE rac2 ora.rac2.vip application ONLINE ONLINE rac2gsd is not up on 11gR2 this is normal.
Until all the nodes are upgraded, the active version remains the lower version (in this case 11.1.0.7), but the software version will be the new version on each upgraded node.
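This check can be scripted: the upgrade is fully rolled out only when the active version matches the software version on every node. A minimal sketch, with the crsctl output hard-coded as sample text so it runs anywhere (on a cluster, feed it the real crsctl output):

```shell
#!/bin/sh
# Sketch: compare Clusterware active version vs software version.
# Sample strings mirror the crsctl output seen mid-upgrade in this post.
active="Oracle Clusterware active version on the cluster is [11.1.0.7.0]"
software="Oracle Clusterware version on node [rac1] is [11.2.0.3.0]"
# Extract the version between the square brackets.
a=$(printf '%s\n' "$active"   | sed 's/.*\[\(.*\)\].*/\1/')
s=$(printf '%s\n' "$software" | sed 's/.*\[\(.*\)\].*/\1/')
echo "active=$a software=$s"
if [ "$a" = "$s" ]; then echo "upgrade complete"; else echo "upgrade in progress"; fi
```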
crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.1.0.7.0]

crsctl query crs softwareversion
Oracle Clusterware version on node [rac1] is [11.2.0.3.0]

Upgrade rac2
# /opt/app/11.2.0/grid/rootupgrade.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
OLR initialization - successful
Replacing Clusterware entries in inittab
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Start upgrade invoked..
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the OCR.
Started to upgrade the CSS.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Oracle Clusterware operating version was successfully set to 11.2.0.3.0
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....N1.lsnr ora....er.type ONLINE    ONLINE    rac1
ora.cvu        ora.cvu.type   ONLINE    ONLINE    rac2
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    rac1
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    rac2
ora.ons        ora.ons.type   ONLINE    ONLINE    rac1
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    OFFLINE   OFFLINE
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   ora....t1.type ONLINE    ONLINE    rac1
ora.rac11g1.db application    ONLINE    ONLINE    rac1
ora....11.inst application    ONLINE    ONLINE    rac1
ora....12.inst application    ONLINE    ONLINE    rac2
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    OFFLINE   OFFLINE
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   ora....t1.type ONLINE    ONLINE    rac2
ora....ry.acfs ora....fs.type OFFLINE   OFFLINE
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac1

After all the nodes are upgraded, the active version is updated.

crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]

crsctl query crs softwareversion
Oracle Clusterware version on node [rac2] is [11.2.0.3.0]

After rootupgrade.sh has finished on the last node, click the OK button on the configuration script dialog. This starts the configuration steps, which also include the ASM upgrade.
ASM is upgraded in a rolling fashion. Each database instance using the ASM instance being upgraded is brought down automatically before the ASM upgrade and started again once ASM is upgraded. The following can be observed in the ASM alert log:
Reconfiguration complete
Thu Mar 08 19:13:39 2012
ALTER SYSTEM START ROLLING MIGRATION TO 11.2.0.3.0

The 11gR1 srvctl must be used to administer the database. Using the 11gR2 srvctl from the GI home would throw the following error:
which srvctl
/opt/app/11.2.0/grid/bin/srvctl

srvctl stop database -d rac11g1
PRCD-1027 : Failed to retrieve database rac11g1
PRCD-1027 : Failed to retrieve database rac11g1
PRKP-1088 : Failed to retrieve configuration of cluster database rac11g1
PRKR-1078 : Database rac11g1 of version 11.0.0.0.0 cannot be administered using current version of srvctl. Instead run srvctl from /opt/app/oracle/product/11.1.0/db_1
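A small sketch of the rule behind this error: until the database itself is upgraded, pick srvctl from the home matching the database's version, not from the GI home. The paths are this post's homes; DB_VERSION is an assumed input:

```shell
#!/bin/sh
# Sketch: choose the right srvctl for the database version being managed.
# Pre-11.2 databases registered in the 11.2 cluster must be managed with the
# srvctl from their own home (otherwise PRKR-1078, as seen above).
DB_VERSION="11.1.0.7"                                # assumed: target database version
GRID_HOME="/opt/app/11.2.0/grid"
DB_HOME="/opt/app/oracle/product/11.1.0/db_1"
case "$DB_VERSION" in
  11.2*) SRVCTL="$GRID_HOME/bin/srvctl" ;;           # 11.2 databases: GI home srvctl
  *)     SRVCTL="$DB_HOME/bin/srvctl" ;;             # older databases: their own home
esac
echo "use: $SRVCTL"
```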
The next step is to upgrade RAC, and this will be done as an out-of-place upgrade.
It is possible to upgrade the software and database at the same time.
Actions For DST Updates When Upgrading To Or Applying The 11.2.0.3 Patchset [ID 1358166.1] gives details on time zone upgrade information for 11.2.0.3. In this upgrade path (11.1.0.7 -> 11.2.0.3) there is no need to apply any patches beforehand, but after the upgrade it is advised to upgrade the time zone file. This can be done during DBUA execution by selecting the upgrade timezone option. The current time zone version is
SQL> SELECT version FROM v$timezone_file;

   VERSION
----------
         4

If there are a lot of EM console related files in $ORACLE_HOME/hostname_instance/sysman/emd/upload/, file copying could take a long time.
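It may be worth counting those upload files beforehand to gauge how slow the copy will be. A self-contained sketch, using a throwaway directory with dummy files in place of the real $ORACLE_HOME/hostname_instance/sysman/emd/upload/ path:

```shell
#!/bin/sh
# Sketch: count the EM upload files that would be copied during the upgrade.
# A temporary directory stands in for the real sysman/emd/upload directory
# so the sketch runs anywhere; point UPLOAD_DIR at the real path on the server.
UPLOAD_DIR=$(mktemp -d)
for i in 1 2 3; do : > "$UPLOAD_DIR/agent$i.xml"; done    # dummy upload files
count=$(find "$UPLOAD_DIR" -type f | wc -l | tr -d ' ')
echo "upload files: $count"
rm -rf "$UPLOAD_DIR"
```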
Execute runInstaller
Select all nodes
New location for out of place upgrade
RAC admin with same OS groups
Summary
Once root.sh is run and the OK button is clicked, DBUA runs and the database upgrade starts. For the cluster database, all instances are shut down and the upgrade continues on only one node with one instance. At the end of the upgrade all instances are started.
Upgrade timezone with database upgrade.
Upgrade summary
Upgrade result
Once the database is upgraded, the RAC upgrade concludes.
Once the upgrade is finished, uninstall the 11gR1 clusterware home and database software.
Post upgrade notes.
The remote_listener parameter has both the tnsnames.ora entry from 11gR1 and the SCAN address on all nodes:

remote_listener                      string      LISTENERS_RAC11G1, rac-scan:1521

Even though the 11gR1 RAC had the Jan 2012 PSU applied and _external_scn_rejection_threshold_hours set, the parameter was commented out during the upgrade and was not present after the upgrade.
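Once all instances run 11.2, the leftover 11gR1 entry can be dropped by pointing remote_listener at the SCAN alone. A hedged sketch that prints the statement rather than executing it (run it through sqlplus as SYSDBA on the upgraded database; the SCAN address is this post's):

```shell
#!/bin/sh
# Sketch: build the ALTER SYSTEM statement to set remote_listener to the SCAN.
# Printed, not executed, so the sketch is self-contained.
SCAN="rac-scan:1521"                                  # this cluster's SCAN address
SQL="ALTER SYSTEM SET remote_listener='${SCAN}' SCOPE=BOTH SID='*';"
echo "$SQL"
```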
After the upgrade database service will not work with FCF create an application service.
Some of the resources will have the auto_start option set to never. Execute the following shell script to list the AUTO_START state of each resource:
awk \
'BEGIN {printf "%-35s %-25s %-18s\n", "Resource Name", "Type", "Auto Start State";
        printf "%-35s %-25s %-18s\n", "-----------", "------", "----------------";}'

crsctl stat res -p | egrep -w "NAME|TYPE|AUTO_START" | grep -v DEFAULT_TEMPLATE | awk \
'BEGIN { FS="="; state = 0; }
 $1~/NAME/ {appname = $2; state=1};
 state == 0 {next;}
 $1~/TYPE/ && state == 1 {apptarget = $2; state=2;}
 $1~/AUTO_START/ && state == 2 {appstate = $2; state=3;}
 state == 3 {printf "%-35s %-25s %-18s\n", appname, apptarget, appstate; state=0;}'

Resource Name                       Type                      Auto Start State
-----------                         ------                    ----------------
ora.DATA.dg                         ora.diskgroup.type        never
ora.FLASH.dg                        ora.diskgroup.type        never
ora.LISTENER.lsnr                   ora.listener.type         restore
ora.LISTENER_SCAN1.lsnr             ora.scan_listener.type    restore
ora.asm                             ora.asm.type              never
ora.cvu                             ora.cvu.type              restore
ora.gsd                             ora.gsd.type              always
ora.net1.network                    ora.network.type          restore
ora.oc4j                            ora.oc4j.type             restore
ora.ons                             ora.ons.type              always
ora.rac1.vip                        ora.cluster_vip_net1.type restore
ora.rac11g1.db                      ora.database.type         restore
ora.rac11g1.sbx.svc                 ora.service.type          restore
ora.rac2.vip                        ora.cluster_vip_net1.type restore
ora.registry.acfs                   ora.registry.acfs.type    restore
ora.scan1.vip                       ora.scan_vip.type         restore

Set the desired auto_start option.
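Setting the option back can be scripted in the same style. The sketch below emits, for every resource left at "never", the `crsctl modify resource ... -attr "AUTO_START=restore"` command that would fix it; a hard-coded excerpt of `crsctl stat res -p` output stands in for the real command so the sketch runs anywhere:

```shell
#!/bin/sh
# Sketch: generate crsctl commands to set AUTO_START back to "restore"
# for every resource currently at "never". Replace $sample with the output
# of `crsctl stat res -p` on the cluster; review the commands before running.
sample='NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
AUTO_START=never
NAME=ora.asm
TYPE=ora.asm.type
AUTO_START=never
NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
AUTO_START=restore'
cmds=$(printf '%s\n' "$sample" | awk -F= '
  $1 == "NAME" { name = $2 }
  $1 == "AUTO_START" && $2 == "never" {
    printf "crsctl modify resource %s -attr \"AUTO_START=restore\"\n", name
  }')
printf '%s\n' "$cmds"
```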
Useful metalink notes
RACcheck 11.2.0.3 Upgrade Readiness Assessment [ID 1457357.1]
Things to Consider Before Upgrading to 11.2.0.3 Grid Infrastructure/ASM [ID 1363369.1]
Things to Consider Before Upgrading to 11.2.0.3 to Avoid Poor Performance or Wrong Results [ID 1392633.1]
Related Posts
Upgrading from 10.2.0.4 to 10.2.0.5 (Clusterware, RAC, ASM)
Upgrade from 10.2.0.5 to 11.2.0.3 (Clusterware, RAC, ASM)
Upgrade from 11.1.0.7 to 11.2.0.4 (Clusterware, ASM & RAC)
Upgrading from 11.1.0.7 to 11.2.0.3 with Transient Logical Standby
Upgrading from 11.2.0.1 to 11.2.0.3 with in-place upgrade for RAC
In-place upgrade from 11.2.0.2 to 11.2.0.3
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 1
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 2
Upgrading from 11gR2 (11.2.0.3) to 12c (12.1.0.1) Grid Infrastructure
Upgrading RAC from 11.2.0.4 to 12.1.0.2 - Grid Infrastructure