Running the cluster verification utility (cluvfy) shows a few failed prerequisites that are either new to 12.1.0.2 or known issues that can be ignored. The first is the panic_on_oops kernel parameter.
Check: Kernel parameter for "panic_on_oops"
Node Name         Current       Configured    Required      Status                Comment
----------------  ------------  ------------  ------------  --------------------  ------------
rhel12c1          1             unknown       1             failed (ignorable)    Configured value incorrect.
rhel12c2          1             unknown       1             failed (ignorable)    Configured value incorrect.
PRVG-1206 : Check cannot be performed for configured value of kernel parameter "panic_on_oops" on node "rhel12c1"
PRVG-1206 : Check cannot be performed for configured value of kernel parameter "panic_on_oops" on node "rhel12c2"

By default this parameter is set to 1, so the failure can be safely ignored, or the panic_on_oops kernel parameter can be added to sysctl.conf. For more information refer to "PRVG-1206 : Check cannot be performed for configured value of kernel parameter "panic_on_oops" on node "racnode1"" (Doc ID 1921982.1).
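If making the setting explicit is preferred over relying on the kernel default, the parameter can be added to /etc/sysctl.conf and reloaded on each node. A minimal sketch, run as root on both nodes:

[root@rhel12c1 ~]# echo "kernel.panic_on_oops = 1" >> /etc/sysctl.conf
[root@rhel12c1 ~]# sysctl -p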
The second relates to DNS response times.
Checking DNS response time for an unreachable node
Node Name                             Status
------------------------------------  ------------------------
rhel12c1                              failed
rhel12c2                              failed
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rhel12c1,rhel12c2

Refer to an earlier post regarding PRVF-5636/PRVF-5637.
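A rough manual equivalent of this check is to time a lookup for a name that does not exist in the zone against the name server listed in /etc/resolv.conf. The host name below is a deliberately non-existent placeholder; 192.168.0.85 is the name server used in this setup:

[grid@rhel12c1 ~]$ time nslookup no-such-node.domain.net 192.168.0.85

If this takes anywhere near the 15000 ms threshold, review the name server configuration before deciding to ignore PRVF-5636.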
Also related to DNS, make sure that the public host names are resolved through DNS. This can be done by adding entries to the forward and reverse lookup zone files.
Add host name resolution entries to DNS:

cat /var/named/rev.domain.net.zone
93 IN PTR rhel12c1.domain.net.
94 IN PTR rhel12c2.domain.net.

cat /var/named/domain.net.zone
rhel12c1 A 192.168.0.93
rhel12c2 A 192.168.0.94

The Grid Infrastructure Management Repository is no longer optional in 12.1.0.2. By default it is installed in the same disk group as the OCR and voting disks, so this disk group may need to be expanded.
ASMCMD> lsdg
State    Type    Rebal  Sector  Block  AU       Total_MB  Free_MB  Voting_files  Name
MOUNTED  EXTERN  N      512     4096   1048576  1098      801      Y             CLUSTERDG/
MOUNTED  EXTERN  N      512     4096   1048576  10236     8073     N             DATA/
MOUNTED  EXTERN  N      512     4096   1048576  10236     8687     N             FRA/

The required size of the disk group depends on the number of nodes and on the repository data retention time, neither of which can be controlled at upgrade time. If there is not enough space in the disk group the upgrade will not proceed, and the required amount of space is given in the error details. Add more disks to the disk group containing the clusterware files, or move the clusterware files and the ASM password file to a sufficiently large disk group. Later, if needed, the CHM repository database can be moved to a different disk group; refer to Doc ID 1589394.1.
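If the disk group needs more space, a candidate disk can be added from the ASM instance and ASM will rebalance in the background. This is only a sketch; the disk group name is taken from the listing above and /dev/sde1 is a placeholder for a new shared disk with the correct grid:asmadmin ownership:

[grid@rhel12c1 ~]$ sqlplus / as sysasm
SQL> ALTER DISKGROUP CLUSTERDG ADD DISK '/dev/sde1';

The rebalance progress can be monitored in V$ASM_OPERATION.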
Since the management repository database is created as the grid user, the dbca folder inside $ORACLE_BASE/cfgtoollogs needs write permission for the oinstall group. Without this write permission the repository database creation will fail.
[root@rhel12c1 cfgtoollogs]# ls -l
drwxr-x---. 3 oracle oinstall 4096 Jun 17 2014 dbca
[root@rhel12c1 cfgtoollogs]# chmod 770 dbca
[root@rhel12c1 cfgtoollogs]# ls -l
drwxrwx---. 3 oracle oinstall 4096 Jun 17 2014 dbca

One strange thing observed was cluvfy complaining about the permissions of the block devices used for the ASM disk groups.
Node Name  Raw device  Block device  Permission  Owner  Group     Comment
---------  ----------  ------------  ----------  -----  --------  -------
rhel12c1   /dev/sdb1   /dev/sdb1     0660        grid   asmadmin  failed
rhel12c1   /dev/sdc1   /dev/sdc1     0660        grid   asmadmin  failed
rhel12c1   /dev/sdd1   /dev/sdd1     0660        grid   asmadmin  failed
rhel12c2   /dev/sdb1   /dev/sdb1     0660        grid   asmadmin  failed
rhel12c2   /dev/sdc1   /dev/sdc1     0660        grid   asmadmin  failed
rhel12c2   /dev/sdd1   /dev/sdd1     0660        grid   asmadmin  failed
PRVG-4666 : The group for block devices "/dev/sdb1" are incorrect on node "rhel12c1" [Expected = oinstall, Actual = asmadmin]
PRVG-4666 : The group for block devices "/dev/sdc1" are incorrect on node "rhel12c1" [Expected = oinstall, Actual = asmadmin]
PRVG-4666 : The group for block devices "/dev/sdd1" are incorrect on node "rhel12c1" [Expected = oinstall, Actual = asmadmin]
PRVG-4666 : The group for block devices "/dev/sdb1" are incorrect on node "rhel12c2" [Expected = oinstall, Actual = asmadmin]
PRVG-4666 : The group for block devices "/dev/sdc1" are incorrect on node "rhel12c2" [Expected = oinstall, Actual = asmadmin]
PRVG-4666 : The group for block devices "/dev/sdd1" are incorrect on node "rhel12c2" [Expected = oinstall, Actual = asmadmin]

As this is an upgrade, the ASM instance was created long ago and the permissions had already been validated. This failure appeared during several runs of cluvfy but then went away on its own. The correct group is asmadmin, provided it was designated as the ASM administration group when the cluster was created.
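If PRVG-4666 persists, the actual ownership and group of the devices can be confirmed directly on each node before treating it as a false positive; in this setup the devices are expected to be owned by grid with group asmadmin:

[root@rhel12c1 ~]# ls -l /dev/sdb1 /dev/sdc1 /dev/sdd1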
RACcheck has been replaced by orachk (refer to "ORAchk - Health Checks for the Oracle Stack" (Doc ID 1268927.2)), which can also be used for pre-upgrade checks with the following command.
./orachk -u -o pre

The output from cluvfy with the upgrade option is given below.
./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /opt/app/12.1.0/grid -dest_crshome /opt/app/12.1.0/grid2 -dest_version 12.1.0.2.0 -fixup -verbose Performing pre-checks for cluster services setup Checking node reachability... Check: Node reachability from node "rhel12c1" Destination Node Reachable? ------------------------------------ ------------------------ rhel12c1 yes rhel12c2 yes Result: Node reachability check passed from node "rhel12c1" Checking user equivalence... Check: User equivalence for user "grid" Node Name Status ------------------------------------ ------------------------ rhel12c2 passed rhel12c1 passed Result: User equivalence check passed for user "grid" Check: Package existence for "cvuqdisk" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 cvuqdisk-1.0.9-1 cvuqdisk-1.0.9-1 passed rhel12c1 cvuqdisk-1.0.9-1 cvuqdisk-1.0.9-1 passed Result: Package existence check passed for "cvuqdisk" Check: Grid Infrastructure home writeability of path /opt/app/12.1.0/grid2 Grid Infrastructure home check passed Checking CRS user consistency Result: CRS user consistency check successful Checking network configuration consistency. Result: Check for network configuration consistency passed. Checking ASM disk size consistency All ASM disks are correctly sized Checking if default discovery string is being used by ASM ASM discovery string "/dev/sd*" is not the default discovery string Checking if ASM parameter file is in use by an ASM instance on the local node Result: ASM instance is using parameter file "+NEWCLUSTERDG/rhel12c/ASMPARAMETERFILE/REGISTRY.253.868893307" on node "rhel12c1" on which upgrade is requested. Checking OLR integrity... Check of existence of OLR configuration file "/etc/oracle/olr.loc" passed Check of attributes of OLR configuration file "/etc/oracle/olr.loc" passed WARNING: This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR. OLR integrity check passed Checking node connectivity... Checking hosts config file... Node Name Status ------------------------------------ ------------------------ rhel12c1 passed rhel12c2 passed Verification of the hosts config file successful Interface information for node "rhel12c1" Name IP Address Subnet Gateway Def. Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- ------ eth0 192.168.0.93 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:CB:D8:AE 1500 eth0 192.168.0.89 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:CB:D8:AE 1500 eth0 192.168.0.90 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:CB:D8:AE 1500 eth1 192.168.1.87 192.168.1.0 0.0.0.0 192.168.0.100 08:00:27:92:0B:69 1500 eth1 169.254.145.58 169.254.0.0 0.0.0.0 192.168.0.100 08:00:27:92:0B:69 1500 Interface information for node "rhel12c2" Name IP Address Subnet Gateway Def. 
Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- ------ eth0 192.168.0.94 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:9C:66:81 1500 eth0 192.168.0.87 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:9C:66:81 1500 eth0 192.168.0.86 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:9C:66:81 1500 eth0 192.168.0.91 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:9C:66:81 1500 eth0 192.168.0.92 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:9C:66:81 1500 eth1 192.168.1.88 192.168.1.0 0.0.0.0 192.168.0.100 08:00:27:1E:0E:A8 1500 eth1 169.254.87.189 169.254.0.0 0.0.0.0 192.168.0.100 08:00:27:1E:0E:A8 1500 Check: Node connectivity using interfaces on subnet "192.168.0.0" Check: Node connectivity of subnet "192.168.0.0" Source Destination Connected? ------------------------------ ------------------------------ ---------------- rhel12c1[192.168.0.90] rhel12c1[192.168.0.93] yes rhel12c1[192.168.0.90] rhel12c1[192.168.0.89] yes rhel12c1[192.168.0.90] rhel12c2[192.168.0.94] yes rhel12c1[192.168.0.90] rhel12c2[192.168.0.87] yes ... Result: Node connectivity passed for subnet "192.168.0.0" with node(s) rhel12c1,rhel12c2 Check: TCP connectivity of subnet "192.168.0.0" Source Destination Connected? ------------------------------ ------------------------------ ---------------- rhel12c1 : 192.168.0.90 rhel12c1 : 192.168.0.90 passed rhel12c1 : 192.168.0.93 rhel12c1 : 192.168.0.90 passed rhel12c1 : 192.168.0.89 rhel12c1 : 192.168.0.90 passed rhel12c2 : 192.168.0.94 rhel12c1 : 192.168.0.90 passed ... Result: TCP connectivity check passed for subnet "192.168.0.0" Check: Node connectivity using interfaces on subnet "192.168.1.0" Check: Node connectivity of subnet "192.168.1.0" Source Destination Connected? ------------------------------ ------------------------------ ---------------- rhel12c1[192.168.1.87] rhel12c2[192.168.1.88] yes Result: Node connectivity passed for subnet "192.168.1.0" with node(s) rhel12c1,rhel12c2 Check: TCP connectivity of subnet "192.168.1.0" Source Destination Connected? ------------------------------ ------------------------------ ---------------- rhel12c1 : 192.168.1.87 rhel12c1 : 192.168.1.87 passed rhel12c2 : 192.168.1.88 rhel12c1 : 192.168.1.87 passed rhel12c1 : 192.168.1.87 rhel12c2 : 192.168.1.88 passed rhel12c2 : 192.168.1.88 rhel12c2 : 192.168.1.88 passed Result: TCP connectivity check passed for subnet "192.168.1.0" Checking subnet mask consistency... Subnet mask consistency check passed for subnet "192.168.0.0". Subnet mask consistency check passed for subnet "192.168.1.0". Subnet mask consistency check passed. Result: Node connectivity check passed Checking multicast communication... Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"... Check of subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251" passed. Check of multicast communication passed. Task ASM Integrity check started... Starting check to see if ASM is running on all cluster nodes... ASM Running check passed. ASM is running on all specified nodes Confirming that at least one ASM disk group is configured... Disk Group Check passed. At least one Disk Group configured Task ASM Integrity check passed... Checking OCR integrity... Disks "+NEWCLUSTERDG" are managed by ASM. 
OCR integrity check passed Check: Total memory Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 4.4199GB (4634568.0KB) 4GB (4194304.0KB) passed rhel12c1 4.4199GB (4634568.0KB) 4GB (4194304.0KB) passed Result: Total memory check passed Check: Available memory Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 3.2136GB (3369676.0KB) 50MB (51200.0KB) passed rhel12c1 2.979GB (3123740.0KB) 50MB (51200.0KB) passed Result: Available memory check passed Check: Swap space Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 5GB (5242872.0KB) 4.4199GB (4634568.0KB) passed rhel12c1 5GB (5242872.0KB) 4.4199GB (4634568.0KB) passed Result: Swap space check passed Check: Free disk space for "rhel12c2:/usr,rhel12c2:/var,rhel12c2:/etc,rhel12c2:/opt/app/12.1.0/grid,rhel12c2:/sbin,rhel12c2:/tmp" Path Node Name Mount point Available Required Status ---------------- ------------ ------------ ------------ ------------ ------------ /usr rhel12c2 / 9.1055GB 7.9635GB passed /var rhel12c2 / 9.1055GB 7.9635GB passed /etc rhel12c2 / 9.1055GB 7.9635GB passed /opt/app/12.1.0/grid rhel12c2 / 9.1055GB 7.9635GB passed /sbin rhel12c2 / 9.1055GB 7.9635GB passed /tmp rhel12c2 / 9.1055GB 7.9635GB passed Result: Free disk space check passed for "rhel12c2:/usr,rhel12c2:/var,rhel12c2:/etc,rhel12c2:/opt/app/12.1.0/grid,rhel12c2:/sbin,rhel12c2:/tmp" Check: Free disk space for "rhel12c1:/usr,rhel12c1:/var,rhel12c1:/etc,rhel12c1:/opt/app/12.1.0/grid,rhel12c1:/sbin,rhel12c1:/tmp" Path Node Name Mount point Available Required Status ---------------- ------------ ------------ ------------ ------------ ------------ /usr rhel12c1 / 8.3122GB 7.9635GB passed /var rhel12c1 / 8.3122GB 7.9635GB passed /etc rhel12c1 / 8.3122GB 7.9635GB passed /opt/app/12.1.0/grid rhel12c1 / 8.3122GB 7.9635GB passed /sbin rhel12c1 / 8.3122GB 7.9635GB passed /tmp rhel12c1 / 8.3122GB 7.9635GB passed Result: Free disk space check passed for "rhel12c1:/usr,rhel12c1:/var,rhel12c1:/etc,rhel12c1:/opt/app/12.1.0/grid,rhel12c1:/sbin,rhel12c1:/tmp" Check: User existence for "grid" Node Name Status Comment ------------ ------------------------ ------------------------ rhel12c2 passed exists(501) rhel12c1 passed exists(501) Checking for multiple users with UID value 501 Result: Check for multiple users with UID value 501 passed Result: User existence check passed for "grid" Check: Group existence for "oinstall" Node Name Status Comment ------------ ------------------------ ------------------------ rhel12c2 passed exists rhel12c1 passed exists Result: Group existence check passed for "oinstall" Check: Group existence for "dba" Node Name Status Comment ------------ ------------------------ ------------------------ rhel12c2 passed exists rhel12c1 passed exists Result: Group existence check passed for "dba" Check: Membership of user "grid" in group "oinstall" [as Primary] Node Name User Exists Group Exists User in Group Primary Status ---------------- ------------ ------------ ------------ ------------ ------------ rhel12c2 yes yes yes yes passed rhel12c1 yes yes yes yes passed Result: Membership check for user "grid" in group "oinstall" [as Primary] passed Check: Membership of user "grid" in group "dba" Node Name User Exists Group Exists User in Group Status ---------------- ------------ ------------ ------------ ---------------- rhel12c2 yes yes yes passed rhel12c1 yes yes 
yes passed Result: Membership check for user "grid" in group "dba" passed Check: Run level Node Name run level Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 3 3,5 passed rhel12c1 3 3,5 passed Result: Run level check passed Check: Hard limits for "maximum open file descriptors" Node Name Type Available Required Status ---------------- ------------ ------------ ------------ ---------------- rhel12c2 hard 65536 65536 passed rhel12c1 hard 65536 65536 passed Result: Hard limits check passed for "maximum open file descriptors" Check: Soft limits for "maximum open file descriptors" Node Name Type Available Required Status ---------------- ------------ ------------ ------------ ---------------- rhel12c2 soft 1024 1024 passed rhel12c1 soft 1024 1024 passed Result: Soft limits check passed for "maximum open file descriptors" Check: Hard limits for "maximum user processes" Node Name Type Available Required Status ---------------- ------------ ------------ ------------ ---------------- rhel12c2 hard 16384 16384 passed rhel12c1 hard 16384 16384 passed Result: Hard limits check passed for "maximum user processes" Check: Soft limits for "maximum user processes" Node Name Type Available Required Status ---------------- ------------ ------------ ------------ ---------------- rhel12c2 soft 2047 2047 passed rhel12c1 soft 2047 2047 passed Result: Soft limits check passed for "maximum user processes" There are no oracle patches required for home "/opt/app/12.1.0/grid". There are no oracle patches required for home "/opt/app/12.1.0/grid". Checking for suitability of source home "/opt/app/12.1.0/grid" for upgrading to version "12.1.0.2.0". Result: Source home "/opt/app/12.1.0/grid" is suitable for upgrading to version "12.1.0.2.0". 
Check: System architecture Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 x86_64 x86_64 passed rhel12c1 x86_64 x86_64 passed Result: System architecture check passed Check: Kernel version Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 2.6.32-358.el6.x86_64 2.6.32 passed rhel12c1 2.6.32-358.el6.x86_64 2.6.32 passed Result: Kernel version check passed Check: Kernel parameter for "semmsl" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rhel12c1 3010 3010 250 passed rhel12c2 3010 3010 250 passed Result: Kernel parameter check passed for "semmsl" Check: Kernel parameter for "semmns" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rhel12c1 385280 385280 32000 passed rhel12c2 385280 385280 32000 passed Result: Kernel parameter check passed for "semmns" Check: Kernel parameter for "semopm" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rhel12c1 3010 3010 100 passed rhel12c2 3010 3010 100 passed Result: Kernel parameter check passed for "semopm" Check: Kernel parameter for "semmni" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rhel12c1 128 128 128 passed rhel12c2 128 128 128 passed Result: Kernel parameter check passed for "semmni" Check: Kernel parameter for "shmmax" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rhel12c1 68719476736 68719476736 2372898816 passed rhel12c2 68719476736 68719476736 2372898816 passed Result: Kernel parameter check passed for "shmmax" Check: Kernel parameter for "shmmni" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rhel12c1 4096 4096 4096 passed rhel12c2 4096 4096 4096 passed Result: Kernel parameter check passed for "shmmni" Check: Kernel parameter for "shmall" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rhel12c1 4294967296 4294967296 463456 passed rhel12c2 4294967296 4294967296 463456 passed Result: Kernel parameter check passed for "shmall" Check: Kernel parameter for "file-max" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rhel12c1 6815744 6815744 6815744 passed rhel12c2 6815744 6815744 6815744 passed Result: Kernel parameter check passed for "file-max" Check: Kernel parameter for "ip_local_port_range" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rhel12c1 between 9000 & 65500 between 9000 & 65500 between 9000 & 65535 passed rhel12c2 between 9000 & 65500 between 9000 & 65500 between 9000 & 65535 passed Result: Kernel parameter check passed for "ip_local_port_range" Check: Kernel parameter for "rmem_default" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rhel12c1 4194304 4194304 262144 passed rhel12c2 4194304 4194304 262144 passed Result: Kernel parameter 
check passed for "rmem_default" Check: Kernel parameter for "rmem_max" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rhel12c1 4194304 4194304 4194304 passed rhel12c2 4194304 4194304 4194304 passed Result: Kernel parameter check passed for "rmem_max" Check: Kernel parameter for "wmem_default" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rhel12c1 1048576 1048576 262144 passed rhel12c2 1048576 1048576 262144 passed Result: Kernel parameter check passed for "wmem_default" Check: Kernel parameter for "wmem_max" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rhel12c1 2097152 2097152 1048576 passed rhel12c2 2097152 2097152 1048576 passed Result: Kernel parameter check passed for "wmem_max" Check: Kernel parameter for "aio-max-nr" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rhel12c1 3145728 3145728 1048576 passed rhel12c2 3145728 3145728 1048576 passed Result: Kernel parameter check passed for "aio-max-nr" Check: Kernel parameter for "panic_on_oops" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rhel12c1 1 1 1 passed rhel12c2 1 1 1 passed Result: Kernel parameter check passed for "panic_on_oops" Check: Package existence for "binutils" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 binutils-2.20.51.0.2-5.36.el6 binutils-2.20.51.0.2 passed rhel12c1 binutils-2.20.51.0.2-5.36.el6 binutils-2.20.51.0.2 passed Result: Package existence check passed for "binutils" Check: Package existence for "compat-libcap1" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 compat-libcap1-1.10-1 compat-libcap1-1.10 passed rhel12c1 compat-libcap1-1.10-1 compat-libcap1-1.10 passed Result: Package existence check passed for "compat-libcap1" Check: Package existence for "compat-libstdc++-33(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed rhel12c1 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed Result: Package existence check passed for "compat-libstdc++-33(x86_64)" Check: Package existence for "libgcc(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 libgcc(x86_64)-4.4.7-3.el6 libgcc(x86_64)-4.4.4 passed rhel12c1 libgcc(x86_64)-4.4.7-3.el6 libgcc(x86_64)-4.4.4 passed Result: Package existence check passed for "libgcc(x86_64)" Check: Package existence for "libstdc++(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 libstdc++(x86_64)-4.4.7-3.el6 libstdc++(x86_64)-4.4.4 passed rhel12c1 libstdc++(x86_64)-4.4.7-3.el6 libstdc++(x86_64)-4.4.4 passed Result: Package existence check passed for "libstdc++(x86_64)" Check: Package existence for "libstdc++-devel(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 
libstdc++-devel(x86_64)-4.4.7-3.el6 libstdc++-devel(x86_64)-4.4.4 passed rhel12c1 libstdc++-devel(x86_64)-4.4.7-3.el6 libstdc++-devel(x86_64)-4.4.4 passed Result: Package existence check passed for "libstdc++-devel(x86_64)" Check: Package existence for "sysstat" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 sysstat-9.0.4-20.el6 sysstat-9.0.4 passed rhel12c1 sysstat-9.0.4-20.el6 sysstat-9.0.4 passed Result: Package existence check passed for "sysstat" Check: Package existence for "gcc" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 gcc-4.4.7-3.el6 gcc-4.4.4 passed rhel12c1 gcc-4.4.7-3.el6 gcc-4.4.4 passed Result: Package existence check passed for "gcc" Check: Package existence for "gcc-c++" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 gcc-c++-4.4.7-3.el6 gcc-c++-4.4.4 passed rhel12c1 gcc-c++-4.4.7-3.el6 gcc-c++-4.4.4 passed Result: Package existence check passed for "gcc-c++" Check: Package existence for "ksh" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 ksh ksh passed rhel12c1 ksh ksh passed Result: Package existence check passed for "ksh" Check: Package existence for "make" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 make-3.81-20.el6 make-3.81 passed rhel12c1 make-3.81-20.el6 make-3.81 passed Result: Package existence check passed for "make" Check: Package existence for "glibc(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 glibc(x86_64)-2.12-1.107.el6 glibc(x86_64)-2.12 passed rhel12c1 glibc(x86_64)-2.12-1.107.el6 glibc(x86_64)-2.12 passed Result: Package existence check passed for "glibc(x86_64)" Check: Package existence for "glibc-devel(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 glibc-devel(x86_64)-2.12-1.107.el6 glibc-devel(x86_64)-2.12 passed rhel12c1 glibc-devel(x86_64)-2.12-1.107.el6 glibc-devel(x86_64)-2.12 passed Result: Package existence check passed for "glibc-devel(x86_64)" Check: Package existence for "libaio(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.107 passed rhel12c1 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.107 passed Result: Package existence check passed for "libaio(x86_64)" Check: Package existence for "libaio-devel(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.107 passed rhel12c1 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.107 passed Result: Package existence check passed for "libaio-devel(x86_64)" Check: Package existence for "nfs-utils" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rhel12c2 nfs-utils-1.2.3-36.el6 nfs-utils-1.2.3-15 passed rhel12c1 nfs-utils-1.2.3-36.el6 nfs-utils-1.2.3-15 passed Result: Package existence check passed for "nfs-utils" Checking for multiple users with UID value 0 Result: Check for multiple users with UID value 0 passed Check: Current group ID Result: 
Current group ID check passed Starting check for consistency of primary group of root user Node Name Status ------------------------------------ ------------------------ rhel12c2 passed rhel12c1 passed Check for consistency of root user's primary group passed Starting Clock synchronization checks using Network Time Protocol(NTP)... Checking existence of NTP configuration file "/etc/ntp.conf" across nodes Node Name File exists? ------------------------------------ ------------------------ rhel12c2 no rhel12c1 no Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes No NTP Daemons or Services were found to be running Result: Clock synchronization check using Network Time Protocol(NTP) passed Checking Core file name pattern consistency... Core file name pattern consistency check passed. Checking to make sure user "grid" is not in "root" group Node Name Status Comment ------------ ------------------------ ------------------------ rhel12c2 passed does not exist rhel12c1 passed does not exist Result: User "grid" is not part of "root" group. Check passed Check default user file creation mask Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- rhel12c2 0022 0022 passed rhel12c1 0022 0022 passed Result: Default user file creation mask check passed Checking integrity of file "/etc/resolv.conf" across nodes Checking the file "/etc/resolv.conf" to make sure only one of 'domain' and 'search' entries is defined "domain" and "search" entries do not coexist in any "/etc/resolv.conf" file Checking if 'domain' entry in file "/etc/resolv.conf" is consistent across the nodes... "domain" entry does not exist in any "/etc/resolv.conf" file Checking if 'search' entry in file "/etc/resolv.conf" is consistent across the nodes... Checking file "/etc/resolv.conf" to make sure that only one 'search' entry is defined More than one "search" entry does not exist in any "/etc/resolv.conf" file All nodes have same "search" order defined in file "/etc/resolv.conf" Checking DNS response time for an unreachable node Node Name Status ------------------------------------ ------------------------ rhel12c1 failed rhel12c2 failed PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rhel12c1,rhel12c2 checking DNS response from all servers in "/etc/resolv.conf" checking response for name "rhel12c2" from each of the name servers specified in "/etc/resolv.conf" Node Name Source Comment Status ------------ ------------------------ ------------------------ ---------- rhel12c2 192.168.0.85 IPv4 passed checking response for name "rhel12c1" from each of the name servers specified in "/etc/resolv.conf" Node Name Source Comment Status ------------ ------------------------ ------------------------ ---------- rhel12c1 192.168.0.85 IPv4 passed Check for integrity of file "/etc/resolv.conf" failed UDev attributes check for OCR locations started... Result: UDev attributes check passed for OCR locations UDev attributes check for Voting Disk locations started... Result: UDev attributes check passed for Voting Disk locations Check: Time zone consistency Result: Time zone consistency check passed Checking VIP configuration. Checking VIP Subnet configuration. Check for VIP Subnet configuration passed. Checking Oracle Cluster Voting Disk configuration... 
Oracle Cluster Voting Disk configuration check passed Clusterware version consistency passed. Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ... Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes... Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf" Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed Checking daemon "avahi-daemon" is not configured and running Check: Daemon "avahi-daemon" not configured Node Name Configured Status ------------ ------------------------ ------------------------ rhel12c2 no passed rhel12c1 no passed Daemon not configured check passed for process "avahi-daemon" Check: Daemon "avahi-daemon" not running Node Name Running? Status ------------ ------------------------ ------------------------ rhel12c2 no passed rhel12c1 no passed Daemon not running check passed for process "avahi-daemon" Starting check for Network interface bonding status of private interconnect network interfaces ... Check for Network interface bonding status of private interconnect network interfaces passed Starting check for /dev/shm mounted as temporary file system ... Check for /dev/shm mounted as temporary file system passed Starting check for /boot mount ... Check for /boot mount passed Starting check for zeroconf check ... Check for zeroconf check passed Pre-check for cluster services setup was unsuccessful on all the nodes.
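In this run the pre-check is reported as unsuccessful only because of the ignorable failures discussed above. If cluvfy had reported any fixable failures, the -fixup option used in the command above would have generated a fixup script to be run as root on each node; the path below is only illustrative, the actual location is printed in the cluvfy output:

[root@rhel12c1 ~]# /tmp/CVU_12.1.0.2.0_grid/runfixup.sh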
Run the installer for 12.1.0.2 (not all steps are shown). All instances are selected by default, and asmadmin is chosen as the OS group for ASM administration.

Upgrade summary
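For reference, the installer is started as the grid user from the unzipped 12.1.0.2 grid software; the staging directory below is only an example:

[grid@rhel12c1 ~]$ cd /home/grid/grid_12102/grid
[grid@rhel12c1 grid]$ ./runInstaller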
The active, release and software versions before the upgrade are as given below.
[grid@rhel12c1 grid]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.1.0]
[grid@rhel12c1 grid]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.1.0]
[grid@rhel12c1 grid]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rhel12c1] is [12.1.0.1.0]

When the root upgrade script (rootupgrade.sh) is run on a node, the software version on that node changes to 12.1.0.2. Once all nodes are upgraded, the active version changes to 12.1.0.2.
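During the rolling upgrade the per-node progress can be followed with the same queries; crsctl query crs softwareversion also accepts a node name, so any node can be checked from any other node:

[grid@rhel12c1 grid]$ crsctl query crs softwareversion rhel12c2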
rootupgrade.sh output from the first node:
# /opt/app/12.1.0/grid2/rootupgrade.sh Performing root user operation. The following environment variables are set as: ORACLE_OWNER= grid ORACLE_HOME= /opt/app/12.1.0/grid2 Enter the full pathname of the local bin directory: [/usr/local/bin]: The contents of "dbhome" have not changed. No need to overwrite. The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y Copying oraenv to /usr/local/bin ... The contents of "coraenv" have not changed. No need to overwrite. Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created Finished running generic part of root script. Now product-specific root actions will be performed. Using configuration parameter file: /opt/app/12.1.0/grid2/crs/install/crsconfig_params 2015/01/13 16:32:00 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector. 2015/01/13 16:32:37 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector. 2015/01/13 16:32:52 CLSRSC-464: Starting retrieval of the cluster configuration data 2015/01/13 16:33:13 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed. 2015/01/13 16:33:14 CLSRSC-363: User ignored prerequisites during installation 2015/01/13 16:33:48 CLSRSC-515: Starting OCR manual backup. 2015/01/13 16:33:54 CLSRSC-516: OCR manual backup successful. 2015/01/13 16:34:08 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode 2015/01/13 16:34:08 CLSRSC-482: Running command: '/opt/app/12.1.0/grid/bin/crsctl start rollingupgrade 12.1.0.2.0' CRS-1131: The cluster was successfully set to rolling upgrade mode. 2015/01/13 16:34:15 CLSRSC-482: Running command: '/opt/app/12.1.0/grid2/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /opt/app/12.1.0/grid -oldCRSVersion 12.1.0.1.0 -nodeNumber 1 -firstNode true -startRolling false' ASM configuration upgraded in local node successfully. 2015/01/13 16:34:25 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode 2015/01/13 16:34:25 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack 2015/01/13 16:35:52 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed. OLR initialization - successful 2015/01/13 16:39:11 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf' CRS-4133: Oracle High Availability Services has been stopped. CRS-4123: Oracle High Availability Services has been started. 2015/01/13 16:44:27 CLSRSC-472: Attempting to export the OCR 2015/01/13 16:44:27 CLSRSC-482: Running command: 'ocrconfig -upgrade grid oinstall' 2015/01/13 16:44:42 CLSRSC-473: Successfully exported the OCR 2015/01/13 16:44:50 CLSRSC-486: At this stage of upgrade, the OCR has changed. Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR. 2015/01/13 16:44:50 CLSRSC-541: To downgrade the cluster: 1. All nodes that have been upgraded must be downgraded. 2015/01/13 16:44:50 CLSRSC-542: 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down. 2015/01/13 16:44:50 CLSRSC-543: 3. The downgrade command must be run on the node rhel12c2 with the '-lastnode' option to restore global configuration data. 2015/01/13 16:45:16 CLSRSC-343: Successfully started Oracle Clusterware stack clscfg: EXISTING configuration version 5 detected. clscfg: version 5 is 12c Release 1. 
Successfully taken the backup of node specific configuration in OCR. Successfully accumulated necessary OCR keys. Creating OCR keys for user 'root', privgrp 'root'.. Operation successful. 2015/01/13 16:45:42 CLSRSC-474: Initiating upgrade of resource types 2015/01/13 16:45:54 CLSRSC-482: Running command: 'upgrade model -s 12.1.0.1.0 -d 12.1.0.2.0 -p first' 2015/01/13 16:45:54 CLSRSC-475: Upgrade of resource types successfully initiated. 2015/01/13 16:46:06 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

rootupgrade.sh run on the second (last) node:
# /opt/app/12.1.0/grid2/rootupgrade.sh Performing root user operation. The following environment variables are set as: ORACLE_OWNER= grid ORACLE_HOME= /opt/app/12.1.0/grid2 Enter the full pathname of the local bin directory: [/usr/local/bin]: The contents of "dbhome" have not changed. No need to overwrite. The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y Copying oraenv to /usr/local/bin ... The contents of "coraenv" have not changed. No need to overwrite. Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created Finished running generic part of root script. Now product-specific root actions will be performed. Using configuration parameter file: /opt/app/12.1.0/grid2/crs/install/crsconfig_params 2015/01/13 16:46:57 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector. 2015/01/13 16:47:36 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector. 2015/01/13 16:47:42 CLSRSC-464: Starting retrieval of the cluster configuration data 2015/01/13 16:48:07 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed. 2015/01/13 16:48:07 CLSRSC-363: User ignored prerequisites during installation ASM configuration upgraded in local node successfully. 2015/01/13 16:48:39 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack 2015/01/13 16:50:03 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed. OLR initialization - successful 2015/01/13 16:50:42 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf' CRS-4133: Oracle High Availability Services has been stopped. CRS-4123: Oracle High Availability Services has been started. 2015/01/13 16:54:48 CLSRSC-343: Successfully started Oracle Clusterware stack clscfg: EXISTING configuration version 5 detected. clscfg: version 5 is 12c Release 1. Successfully taken the backup of node specific configuration in OCR. Successfully accumulated necessary OCR keys. Creating OCR keys for user 'root', privgrp 'root'.. Operation successful. 2015/01/13 16:55:06 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded 2015/01/13 16:55:06 CLSRSC-482: Running command: '/opt/app/12.1.0/grid2/bin/crsctl set crs activeversion' Started to upgrade the Oracle Clusterware. This operation may take a few minutes. Started to upgrade the CSS. The CSS was successfully upgraded. Started to upgrade Oracle ASM. Started to upgrade the CRS. The CRS was successfully upgraded. Successfully upgraded the Oracle Clusterware. Oracle Clusterware operating version was successfully set to 12.1.0.2.0 2015/01/13 16:56:15 CLSRSC-479: Successfully set Oracle Clusterware active version 2015/01/13 16:56:24 CLSRSC-476: Finishing upgrade of resource types 2015/01/13 16:56:35 CLSRSC-482: Running command: 'upgrade model -s 12.1.0.1.0 -d 12.1.0.2.0 -p last' 2015/01/13 16:56:35 CLSRSC-477: Successfully completed upgrade of resource types 2015/01/13 16:57:15 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

When all nodes are upgraded, the active version reflects the upgraded version (12.1.0.2):
[grid@rhel12c1 grid]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
[grid@rhel12c1 grid]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.1.0]
[grid@rhel12c1 grid]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rhel12c1] is [12.1.0.2.0]

Use

cluvfy stage -post crsinst

or

orachk -u -o post

to check for any post-upgrade issues. If satisfied with the upgrade, uninstall the previous version of the grid infrastructure software.
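The old home can be removed with the deinstall tool shipped inside that home, run as the grid user; the path below is the 12.1.0.1 home used in this post. Review the tool's prompts carefully so that the active 12.1.0.2 home and the cluster configuration are not touched:

[grid@rhel12c1 ~]$ /opt/app/12.1.0/grid/deinstall/deinstall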
This concludes the upgrade of the grid infrastructure from 12.1.0.1 to 12.1.0.2.
Related Posts
Upgrading 12c CDB and PDB from 12.1.0.1 to 12.1.0.2
Upgrading from 11gR2 (11.2.0.3) to 12c (12.1.0.1) Grid Infrastructure
Upgrade Oracle Database 12c1 from 12.1.0.1 to 12.1.0.2
Upgrading RAC from 11.2.0.3 to 11.2.0.4 - Grid Infrastructure
Upgrading from 10.2.0.4 to 10.2.0.5 (Clusterware, RAC, ASM)
Upgrade from 10.2.0.5 to 11.2.0.3 (Clusterware, RAC, ASM)
Upgrade from 11.1.0.7 to 11.2.0.3 (Clusterware, ASM & RAC)
Upgrade from 11.1.0.7 to 11.2.0.4 (Clusterware, ASM & RAC)
Upgrading from 11.1.0.7 to 11.2.0.3 with Transient Logical Standby
Upgrading from 11.2.0.1 to 11.2.0.3 with in-place upgrade for RAC
In-place upgrade from 11.2.0.2 to 11.2.0.3
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 1
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 2
Useful Metalink Notes
On Linux, When Installing or Upgrading GI to 12.1.0.2, the Root Script on Remote Nodes May Fail with "error while loading shared libraries" Messages [ID 2032832.1]
How to Upgrade to/Downgrade from Grid Infrastructure 12c and Known Issues [ID 1579762.1]
Grid Infrastructure root script (root.sh etc) fails as remote node missing binaries [ID 1991928.1]