The environment used for this upgrade is an 11gR2 environment installed on RHEL 6 with role separation (with PSU 11.2.0.3.6 applied on the GI home). The listener port had been changed from the default 1521, and the SCAN listener and listener names had also been changed. It was running a standard edition database with COST enabled. It had also gone through two OS upgrades, from RHEL 6 U2 to RHEL 6 U3 and finally to RHEL 6 U4, on which this 12c upgrade is happening.
Oracle documentation mentions that an Oracle 12c Upgrade Companion is available as metalink note 1462240.1, but at the time of this post that document has the status of "coming soon". So only the Oracle documentation was referred to during this upgrade. A few of the relevant sections are quoted below (for the full list of restrictions follow the Oracle documentation).
If you have an existing Oracle ASM 11g Release 1 (11.1) or 10g release instance, with Oracle ASM in a separate home, then you can either upgrade it at the time that you install Oracle Grid Infrastructure, or you can upgrade it after the installation, using Oracle ASM Configuration Assistant (ASMCA). However, be aware that a number of Oracle ASM features are disabled until you upgrade Oracle ASM, and Oracle Clusterware management of Oracle ASM does not function correctly until Oracle ASM is upgraded, because Oracle Clusterware only manages Oracle ASM when it is running in the Oracle Grid Infrastructure home. For this reason, Oracle recommends that if you do not upgrade Oracle ASM at the same time as you upgrade Oracle Clusterware, then you should upgrade Oracle ASM immediately afterward. This issue does not apply to Oracle ASM 11g Release 2 (11.2) and later, as the Oracle Grid Infrastructure home contains Oracle ASM binaries as well.
After running the rootupgrade.sh script on the last node in the cluster, if you are upgrading from a release earlier than Oracle Grid Infrastructure 11g Release 2 (11.2.0.1), and left the check box labeled ASMCA checked, as is the default, then Oracle Automatic Storage Management Configuration Assistant ASMCA runs automatically, and the Oracle Grid Infrastructure upgrade is complete. If you unchecked the box during the interview stage of the upgrade, then ASMCA is not run automatically.
Oracle recommends that you upgrade Oracle ASM at the same time that you upgrade Oracle Clusterware. Until Oracle ASM is upgraded, Oracle Databases that use Oracle ASM cannot be created and the Oracle ASM management tools in the Oracle Grid Infrastructure 12c Release 1 (12.1) home (for example, srvctl) do not work.
ASMCA performs a rolling upgrade only if the earlier release of Oracle ASM is either 11.1.0.6 or 11.1.0.7. Otherwise, ASMCA
performs a non-rolling upgrade, in which ASMCA shuts down all Oracle ASM instances on all nodes of the cluster, and then starts an Oracle ASM instance on each node from the new Oracle Grid Infrastructure home.
There are several versions from which a direct upgrade to 12c is possible:
To upgrade existing Oracle Clusterware installations to a standard configuration Oracle Grid Infrastructure 12c cluster, your release must be greater than or equal to Oracle Clusterware 10g Release 1 (10.1.0.5), Oracle Clusterware 10g Release 2 (10.2.0.3), Oracle Grid Infrastructure 11g Release 1 (11.1.0.6), or Oracle Grid Infrastructure 11g Release 2 (11.2).
But there's a catch which forces some setups on 10.* and 11.1 to upgrade to 11.2 before being upgraded to 12c. The catch is due to the following restrictions:
If the Oracle Cluster Registry (OCR) and voting file locations for your current installation are on raw or block devices, then you must migrate them to Oracle ASM disk groups or shared file systems before upgrading to Oracle Grid Infrastructure 12c.
If you want to upgrade Oracle Grid Infrastructure releases before Oracle Grid Infrastructure 11g Release 2 (11.2), where the OCR and voting files are on raw or block devices, and you want to migrate these files to Oracle ASM rather than to a shared file system, then you must upgrade to Oracle Grid Infrastructure 11g Release 2 (11.2) before you upgrade to Oracle Grid Infrastructure 12c.
Migrate OCR files from RAW or Block devices to Oracle ASM or a supported file system. Direct use of RAW and Block devices is not supported.
If you upgrade from a previous version of Oracle Clusterware to Oracle Clusterware 12c and you want to store OCR in an Oracle ASM disk group, then you must set the ASM compatibility attribute (compatible.asm) to 11.2.0.2 or later.
This means installations where the OCR and vote files are on block devices or raw devices must first be upgraded to 11.2 and the OCR and vote files moved to ASM before upgrading to 12c.
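For reference, a hedged sketch of what that intermediate step involves on the 11.2 GI home (the disk group name CLUSTER_DG and the raw device path are illustrative only). As the grid owner, check and if necessary raise the disk group's ASM compatibility attribute:
SQL> select name, compatibility from v$asm_diskgroup;
SQL> alter diskgroup CLUSTER_DG set attribute 'compatible.asm' = '11.2.0.2';
Then as root relocate the OCR and voting files from the raw or block devices into ASM:
# ocrconfig -add +CLUSTER_DG
# ocrconfig -delete /dev/raw/raw1
# crsctl replace votedisk +CLUSTER_DG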
The same user that owned the earlier release Oracle Grid Infrastructure software must perform the Oracle Grid Infrastructure 12c Release 1 (12.1) upgrade. Before Oracle Database 11g, either all Oracle software installations were owned by the Oracle user, typically oracle, or Oracle Database software was owned by oracle, and Oracle Clusterware software was owned by a separate user, typically crs.
Running cluvfy from the 12c grid infrastructure installation media failed with the following message.
ERROR:
Reference data is not available for verifying prerequisites on this operating system distribution
Verification cannot proceed
Setting CV_ASSUME_DISTID=OEL6 didn't help in this case. After some searching, the following metalink note was found, which gives a solution to a similar issue on 11.2:
runcluvfy stage -pre crsinst generates Reference Data is not available for verifying prerequisites on this operating system distribution on Redhat 6 - IBM: Linux on System z [ID 1514012.1]
The support note provides a solution pack; in this case only the redhat-release-6Server-1.noarch.rpm from the pack was installed and the other steps were ignored. Afterwards runcluvfy executed without any issue. (Update 15/07/2013: Oracle now has a MOS note for this, 1567127.1.) There are new pre-req checks added in 12c: notably the space requirement of /dev/shm is checked for grid infrastructure, space requirements for several Linux directories are checked (not just the installation location and /tmp as before), and lastly there is a check for the avahi-daemon. A sketch of the workaround and the full runcluvfy output are shown below.
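As a rough sketch, the workaround here amounted to installing just that one RPM from the solution pack attached to the note and confirming it is reported, before re-running the verification (file name as given in the note; other steps from the note were skipped):
# rpm -ivh redhat-release-6Server-1.noarch.rpm
# rpm -q redhat-release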
[grid@rhel6m1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -n rhel6m1,rhel6m2 -rolling -src_crshome /opt/app/11.2.0/grid -dest_crshome /opt/app/12.1.0/grid -dest_version 12.1.0.1.0 -fixup -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "rhel6m1"
Destination Node Reachable?
------------------------------------ ------------------------
rhel6m1 yes
rhel6m2 yes
Result: Node reachability check passed from node "rhel6m1"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Status
------------------------------------ ------------------------
rhel6m2 passed
rhel6m1 passed
Result: User equivalence check passed for user "grid"
Checking CRS user consistency
Result: CRS user consistency check successful
Checking ASM disk size consistency
All ASM disks are correctly sized
Checking if default discovery string is being used by ASM
ASM discovery string "/dev/sd*" is not the default discovery string
Checking node connectivity...
Checking hosts config file...
Node Name Status
------------------------------------ ------------------------
rhel6m1 passed
rhel6m2 passed
Verification of the hosts config file successful
Interface information for node "rhel6m1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.0.93 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:82:9F:00 1500
eth0 192.168.0.92 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:82:9F:00 1500
eth0 192.168.0.97 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:82:9F:00 1500
eth1 192.168.1.87 192.168.1.0 0.0.0.0 192.168.0.100 08:00:27:F9:87:77 1500
eth1 169.254.31.63 169.254.0.0 0.0.0.0 192.168.0.100 08:00:27:F9:87:77 1500
Interface information for node "rhel6m2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.0.94 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:FA:8E:62 1500
eth0 192.168.0.98 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:FA:8E:62 1500
eth1 192.168.1.88 192.168.1.0 0.0.0.0 192.168.0.100 08:00:27:84:5D:A8 1500
eth1 169.254.160.199 169.254.0.0 0.0.0.0 192.168.0.100 08:00:27:84:5D:A8 1500
Check: Node connectivity using interfaces on subnet "192.168.1.0"
Check: Node connectivity of subnet "192.168.1.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rhel6m1[192.168.1.87] rhel6m2[192.168.1.88] yes
Result: Node connectivity passed for subnet "192.168.1.0" with node(s) rhel6m1,rhel6m2
Check: TCP connectivity of subnet "192.168.1.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rhel6m1:192.168.1.87 rhel6m2:192.168.1.88 passed
Result: TCP connectivity check passed for subnet "192.168.1.0"
Check: Node connectivity using interfaces on subnet "192.168.0.0"
Check: Node connectivity of subnet "192.168.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rhel6m2[192.168.0.94] rhel6m1[192.168.0.97] yes
rhel6m2[192.168.0.94] rhel6m2[192.168.0.98] yes
rhel6m2[192.168.0.94] rhel6m1[192.168.0.93] yes
rhel6m2[192.168.0.94] rhel6m1[192.168.0.92] yes
rhel6m1[192.168.0.97] rhel6m2[192.168.0.98] yes
rhel6m1[192.168.0.97] rhel6m1[192.168.0.93] yes
rhel6m1[192.168.0.97] rhel6m1[192.168.0.92] yes
rhel6m2[192.168.0.98] rhel6m1[192.168.0.93] yes
rhel6m2[192.168.0.98] rhel6m1[192.168.0.92] yes
rhel6m1[192.168.0.93] rhel6m1[192.168.0.92] yes
Result: Node connectivity passed for subnet "192.168.0.0" with node(s) rhel6m2,rhel6m1
Check: TCP connectivity of subnet "192.168.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rhel6m2:192.168.0.94 rhel6m1:192.168.0.97 passed
rhel6m2:192.168.0.94 rhel6m2:192.168.0.98 passed
rhel6m2:192.168.0.94 rhel6m1:192.168.0.93 passed
rhel6m2:192.168.0.94 rhel6m1:192.168.0.92 passed
Result: TCP connectivity check passed for subnet "192.168.0.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.0.0".
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Task ASM Integrity check started...
Starting check to see if ASM is running on all cluster nodes...
ASM Running check passed. ASM is running on all specified nodes
Starting Disk Groups check to see if at least one Disk Group configured...
Disk Group Check passed. At least one Disk Group configured
Task ASM Integrity check passed...
Checking OCR integrity...
OCR integrity check passed
Checking ASMLib configuration.
Node Name Status
------------------------------------ ------------------------
rhel6m1 passed
rhel6m2 passed
Result: Check for ASMLib configuration passed.
Check: Total memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 4.0009GB (4195276.0KB) 4GB (4194304.0KB) passed
rhel6m1 4.0009GB (4195276.0KB) 4GB (4194304.0KB) passed
Result: Total memory check passed
Check: Available memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 3.045GB (3192940.0KB) 50MB (51200.0KB) passed
rhel6m1 2.6829GB (2813180.0KB) 50MB (51200.0KB) passed
Result: Available memory check passed
Check: Swap space
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 4.0098GB (4204528.0KB) 4.0009GB (4195276.0KB) passed
rhel6m1 4.0098GB (4204528.0KB) 4.0009GB (4195276.0KB) passed
Result: Swap space check passed
Check: Free disk space for "rhel6m2:/usr,rhel6m2:/var,rhel6m2:/etc,rhel6m2:/opt/app/11.2.0/grid,rhel6m2:/sbin,rhel6m2:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/usr rhel6m2 / 24.6191GB 7.9635GB passed
/var rhel6m2 / 24.6191GB 7.9635GB passed
/etc rhel6m2 / 24.6191GB 7.9635GB passed
/opt/app/11.2.0/grid rhel6m2 / 24.6191GB 7.9635GB passed
/sbin rhel6m2 / 24.6191GB 7.9635GB passed
/tmp rhel6m2 / 24.6191GB 7.9635GB passed
Result: Free disk space check passed for "rhel6m2:/usr,rhel6m2:/var,rhel6m2:/etc,rhel6m2:/opt/app/11.2.0/grid,rhel6m2:/sbin,rhel6m2:/tmp"
Check: Free disk space for "rhel6m1:/usr,rhel6m1:/var,rhel6m1:/etc,rhel6m1:/opt/app/11.2.0/grid,rhel6m1:/sbin,rhel6m1:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/usr rhel6m1 / 19.7458GB 7.9635GB passed
/var rhel6m1 / 19.7458GB 7.9635GB passed
/etc rhel6m1 / 19.7458GB 7.9635GB passed
/opt/app/11.2.0/grid rhel6m1 / 19.7458GB 7.9635GB passed
/sbin rhel6m1 / 19.7458GB 7.9635GB passed
/tmp rhel6m1 / 19.7458GB 7.9635GB passed
Result: Free disk space check passed for "rhel6m1:/usr,rhel6m1:/var,rhel6m1:/etc,rhel6m1:/opt/app/11.2.0/grid,rhel6m1:/sbin,rhel6m1:/tmp"
Check: User existence for "grid"
Node Name Status Comment
------------ ------------------------ ------------------------
rhel6m2 passed exists(502)
rhel6m1 passed exists(502)
Checking for multiple users with UID value 502
Result: Check for multiple users with UID value 502 passed
Result: User existence check passed for "grid"
Check: Group existence for "oinstall"
Node Name Status Comment
------------ ------------------------ ------------------------
rhel6m2 passed exists
rhel6m1 passed exists
Result: Group existence check passed for "oinstall"
Check: Group existence for "dba"
Node Name Status Comment
------------ ------------------------ ------------------------
rhel6m2 passed exists
rhel6m1 passed exists
Result: Group existence check passed for "dba"
Check: Membership of user "grid" in group "oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Status
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m2 yes yes yes yes passed
rhel6m1 yes yes yes yes passed
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed
Check: Membership of user "grid" in group "dba"
Node Name User Exists Group Exists User in Group Status
---------------- ------------ ------------ ------------ ----------------
rhel6m2 yes yes yes passed
rhel6m1 yes yes yes passed
Result: Membership check for user "grid" in group "dba" passed
Check: Run level
Node Name run level Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 3 3,5 passed
rhel6m1 3 3,5 passed
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rhel6m2 hard 65536 65536 passed
rhel6m1 hard 65536 65536 passed
Result: Hard limits check passed for "maximum open file descriptors"
Check: Soft limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rhel6m2 soft 1024 1024 passed
rhel6m1 soft 1024 1024 passed
Result: Soft limits check passed for "maximum open file descriptors"
Check: Hard limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rhel6m2 hard 16384 16384 passed
rhel6m1 hard 16384 16384 passed
Result: Hard limits check passed for "maximum user processes"
Check: Soft limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rhel6m2 soft 2047 2047 passed
rhel6m1 soft 2047 2047 passed
Result: Soft limits check passed for "maximum user processes"
There are no oracle patches required for home "/opt/app/11.2.0/grid".
There are no oracle patches required for home "/opt/app/11.2.0/grid".
Checking for suitability of source home "/opt/app/11.2.0/grid" for upgrading to version "12.1.0.1.0".
Result: Source home "/opt/app/11.2.0/grid" is suitable for upgrading to version "12.1.0.1.0".
Check: System architecture
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 x86_64 x86_64 passed
rhel6m1 x86_64 x86_64 passed
Result: System architecture check passed
Check: Kernel version
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 2.6.32-358.el6.x86_64 2.6.32 passed
rhel6m1 2.6.32-358.el6.x86_64 2.6.32 passed
Result: Kernel version check passed
Check: Kernel parameter for "semmsl"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 3010 3010 250 passed
rhel6m2 3010 3010 250 passed
Result: Kernel parameter check passed for "semmsl"
Check: Kernel parameter for "semmns"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 385280 385280 32000 passed
rhel6m2 385280 385280 32000 passed
Result: Kernel parameter check passed for "semmns"
Check: Kernel parameter for "semopm"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 3010 3010 100 passed
rhel6m2 3010 3010 100 passed
Result: Kernel parameter check passed for "semopm"
Check: Kernel parameter for "semmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 128 128 128 passed
rhel6m2 128 128 128 passed
Result: Kernel parameter check passed for "semmni"
Check: Kernel parameter for "shmmax"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 68719476736 68719476736 2147981312 passed
rhel6m2 68719476736 68719476736 2147981312 passed
Result: Kernel parameter check passed for "shmmax"
Check: Kernel parameter for "shmmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 4096 4096 4096 passed
rhel6m2 4096 4096 4096 passed
Result: Kernel parameter check passed for "shmmni"
Check: Kernel parameter for "shmall"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 4294967296 4294967296 419527 passed
rhel6m2 4294967296 4294967296 419527 passed
Result: Kernel parameter check passed for "shmall"
Check: Kernel parameter for "file-max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 6815744 6815744 6815744 passed
rhel6m2 6815744 6815744 6815744 passed
Result: Kernel parameter check passed for "file-max"
Check: Kernel parameter for "ip_local_port_range"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 between 9000 & 65500 between 9000 & 65500 between 9000 & 65535 passed
rhel6m2 between 9000 & 65500 between 9000 & 65500 between 9000 & 65535 passed
Result: Kernel parameter check passed for "ip_local_port_range"
Check: Kernel parameter for "rmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 4194304 4194304 262144 passed
rhel6m2 4194304 4194304 262144 passed
Result: Kernel parameter check passed for "rmem_default"
Check: Kernel parameter for "rmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 4194304 4194304 4194304 passed
rhel6m2 4194304 4194304 4194304 passed
Result: Kernel parameter check passed for "rmem_max"
Check: Kernel parameter for "wmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 1048576 1048576 262144 passed
rhel6m2 1048576 1048576 262144 passed
Result: Kernel parameter check passed for "wmem_default"
Check: Kernel parameter for "wmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 2097152 2097152 1048576 passed
rhel6m2 2097152 2097152 1048576 passed
Result: Kernel parameter check passed for "wmem_max"
Check: Kernel parameter for "aio-max-nr"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 3145728 3145728 1048576 passed
rhel6m2 3145728 3145728 1048576 passed
Result: Kernel parameter check passed for "aio-max-nr"
Check: Package existence for "binutils"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 binutils-2.20.51.0.2-5.36.el6 binutils-2.20.51.0.2 passed
rhel6m1 binutils-2.20.51.0.2-5.36.el6 binutils-2.20.51.0.2 passed
Result: Package existence check passed for "binutils"
Check: Package existence for "compat-libcap1"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 compat-libcap1-1.10-1 compat-libcap1-1.10 passed
rhel6m1 compat-libcap1-1.10-1 compat-libcap1-1.10 passed
Result: Package existence check passed for "compat-libcap1"
Check: Package existence for "compat-libstdc++-33(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed
rhel6m1 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"
Check: Package existence for "libgcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 libgcc(x86_64)-4.4.7-3.el6 libgcc(x86_64)-4.4.4 passed
rhel6m1 libgcc(x86_64)-4.4.7-3.el6 libgcc(x86_64)-4.4.4 passed
Result: Package existence check passed for "libgcc(x86_64)"
Check: Package existence for "libstdc++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 libstdc++(x86_64)-4.4.7-3.el6 libstdc++(x86_64)-4.4.4 passed
rhel6m1 libstdc++(x86_64)-4.4.7-3.el6 libstdc++(x86_64)-4.4.4 passed
Result: Package existence check passed for "libstdc++(x86_64)"
Check: Package existence for "libstdc++-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 libstdc++-devel(x86_64)-4.4.7-3.el6 libstdc++-devel(x86_64)-4.4.4 passed
rhel6m1 libstdc++-devel(x86_64)-4.4.7-3.el6 libstdc++-devel(x86_64)-4.4.4 passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"
Check: Package existence for "sysstat"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 sysstat-9.0.4-20.el6 sysstat-9.0.4 passed
rhel6m1 sysstat-9.0.4-20.el6 sysstat-9.0.4 passed
Result: Package existence check passed for "sysstat"
Check: Package existence for "gcc"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 gcc-4.4.7-3.el6 gcc-4.4.4 passed
rhel6m1 gcc-4.4.7-3.el6 gcc-4.4.4 passed
Result: Package existence check passed for "gcc"
Check: Package existence for "gcc-c++"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 gcc-c++-4.4.7-3.el6 gcc-c++-4.4.4 passed
rhel6m1 gcc-c++-4.4.7-3.el6 gcc-c++-4.4.4 passed
Result: Package existence check passed for "gcc-c++"
Check: Package existence for "ksh"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 ksh-20100621-19.el6 ksh-... passed
rhel6m1 ksh-20100621-19.el6 ksh-... passed
Result: Package existence check passed for "ksh"
Check: Package existence for "make"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 make-3.81-20.el6 make-3.81 passed
rhel6m1 make-3.81-20.el6 make-3.81 passed
Result: Package existence check passed for "make"
Check: Package existence for "glibc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 glibc(x86_64)-2.12-1.107.el6 glibc(x86_64)-2.12 passed
rhel6m1 glibc(x86_64)-2.12-1.107.el6 glibc(x86_64)-2.12 passed
Result: Package existence check passed for "glibc(x86_64)"
Check: Package existence for "glibc-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 glibc-devel(x86_64)-2.12-1.107.el6 glibc-devel(x86_64)-2.12 passed
rhel6m1 glibc-devel(x86_64)-2.12-1.107.el6 glibc-devel(x86_64)-2.12 passed
Result: Package existence check passed for "glibc-devel(x86_64)"
Check: Package existence for "libaio(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.107 passed
rhel6m1 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.107 passed
Result: Package existence check passed for "libaio(x86_64)"
Check: Package existence for "libaio-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.107 passed
rhel6m1 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.107 passed
Result: Package existence check passed for "libaio-devel(x86_64)"
Check: Package existence for "nfs-utils"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 nfs-utils-1.2.3-36.el6 nfs-utils-1.2.3-15 passed
rhel6m1 nfs-utils-1.2.3-36.el6 nfs-utils-1.2.3-15 passed
Result: Package existence check passed for "nfs-utils"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Current group ID
Result: Current group ID check passed
Starting check for consistency of primary group of root user
Node Name Status
------------------------------------ ------------------------
rhel6m2 passed
rhel6m1 passed
Check for consistency of root user's primary group passed
Check: Package existence for "cvuqdisk"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 cvuqdisk-1.0.9-1 cvuqdisk-1.0.9-1 passed
rhel6m1 cvuqdisk-1.0.9-1 cvuqdisk-1.0.9-1 passed
Result: Package existence check passed for "cvuqdisk"
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Checking Core file name pattern consistency...
Core file name pattern consistency check passed.
Checking to make sure user "grid" is not in "root" group
Node Name Status Comment
------------ ------------------------ ------------------------
rhel6m2 passed does not exist
rhel6m1 passed does not exist
Result: User "grid" is not part of "root" group. Check passed
Check default user file creation mask
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rhel6m2 0022 0022 passed
rhel6m1 0022 0022 passed
Result: Default user file creation mask check passed
Checking integrity of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
"domain" entry does not exist in any "/etc/resolv.conf" file
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
More than one "search" entry does not exist in any "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
Checking DNS response time for an unreachable node
Node Name Status
------------------------------------ ------------------------
rhel6m1 passed
rhel6m2 passed
The DNS response time for an unreachable node is within acceptable limit on all nodes
Check for integrity of file "/etc/resolv.conf" passed
UDev attributes check for OCR locations started...
Result: UDev attributes check passed for OCR locations
UDev attributes check for Voting Disk locations started...
Result: UDev attributes check passed for Voting Disk locations
Check: Time zone consistency
Result: Time zone consistency check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Clusterware version consistency passed.
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking daemon "avahi-daemon" is not configured and running
Check: Daemon "avahi-daemon" not configured
Node Name Configured Status
------------ ------------------------ ------------------------
rhel6m2 no passed
rhel6m1 no passed
Daemon not configured check passed for process "avahi-daemon"
Check: Daemon "avahi-daemon" not running
Node Name Running? Status
------------ ------------------------ ------------------------
rhel6m2 no passed
rhel6m1 no passed
Daemon not running check passed for process "avahi-daemon"
Starting check for /dev/shm mounted as temporary file system ...
Check for /dev/shm mounted as temporary file system passed
NOTE:
No fixable verification failures to fix
Pre-check for cluster services setup was successful.
Since GI upgrades are out-of-place upgrades, a new directory structure is created for the new version.
mkdir -p /opt/app/12.1.0/grid
chown -R grid:oinstall /opt/app/12.1.0
chmod -R 775 /opt/app/12.1.0
Unset ORACLE_HOME and remove ORACLE_HOME/bin from the PATH before starting the installer.
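A minimal sketch of doing this in the grid user's session, assuming PATH contains the 11.2 GI home's bin directory followed by a colon (adjust the pattern to how the profile actually builds PATH):
$ unset ORACLE_HOME
$ export PATH=$(echo "$PATH" | sed 's|/opt/app/11.2.0/grid/bin:||g')
$ echo "$PATH"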
The current software version and active version values are:
[grid@rhel6m1 app]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]
[grid@rhel6m1 app]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rhel6m1] is [11.2.0.3.0]
Start the installation by executing runInstaller from the 12c grid infrastructure media.
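For example (the staging location /stage/grid12c is illustrative; use wherever the 12.1.0.1 grid media was unzipped):
$ cd /stage/grid12c
$ ./runInstaller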
The GI management repository is a new feature introduced with 12c. When it is selected during the GI installation, a database named -MGMTDB is created and its data files are stored in the same disk group as the OCR and vote disks. The command used by the OUI is
INFO: Command /opt/app/12.1.0/grid/bin/dbca -silent -createDatabase -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName CLUSTER_DG -datafileJarLocation /opt/app/12.1.0/grid/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -oui_internal
This disk group must have at least 3358 MB of free space; if there's less space the GI management repository installation will fail with
INFO: Read: +CLUSTER_DG does not have enough space. Required space is 3358 MB , available space is 2261 MB.
WARNING: Skipping line: +CLUSTER_DG does not have enough space. Required space is 3358 MB , available space is 2261 MB.
Confirm there's enough space in the disk group with
SQL> select name,total_Mb,free_mb,USABLE_FILE_MB from v$asm_diskgroup;
NAME TOTAL_MB FREE_MB USABLE_FILE_MB
------------------------------ ---------- ---------- --------------
CLUSTER_DG 15342 14716 2261
FLASH 10236 3468 3468
DATA 10236 6593 6593
If there's not enough space, either expand the disk group or decide not to have the repository and continue with the rest of the upgrade. In this case additional disks were added to the disk group.
SQL> alter diskgroup cluster_dg add disk '/dev/sdg1','/dev/sdh1','/dev/sdi1';
Diskgroup altered.
SQL> select name,total_Mb,free_mb,USABLE_FILE_MB from v$asm_diskgroup;
NAME TOTAL_MB FREE_MB USABLE_FILE_MB
------------------------------ ---------- ---------- --------------
CLUSTER_DG 30684 30025 9915
FLASH 10236 3468 3468
DATA 10236 6593 6593
Since a database is created, several locations need write permission for the oinstall group so that the grid user can write to them. One of those locations is the dbca directory under cfgtoollogs.
cd /opt/app/oracle/cfgtoollogs
# chmod 770 dbca
Another location is the admin folder, needed to avoid the following error:
Cannot create directory "/opt/app/oracle/admin/_mgmtdb/dpdump".
cd /opt/app/oracle
# chmod 770 admin
More on the GI management repository and its usage is available here, along with post-upgrade considerations.
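As a quick sanity check after the upgrade, the repository database can be inspected with srvctl from the 12c GI home (a small sketch; run as the grid user):
$ srvctl status mgmtdb
$ srvctl config mgmtdb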
With 12c it is possible to let the installer execute the scripts that require root privileges, either by providing the root password or by specifying sudo. If neither option is checked then rootupgrade.sh is run manually as with previous versions. In this case it was decided to run the root scripts manually (the screenshot shows the auto option checked but it was unchecked later on).
When prompted, execute the rootupgrade.sh script. As mentioned earlier the automatic execution of root scripts was unchecked, so manual root script execution was prompted. Executing on the first node:
[root@rhel6m1 grid]# /opt/app/12.1.0/grid/rootupgrade.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /opt/app/12.1.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/12.1.0/grid/crs/install/crsconfig_params
ASM upgrade has started on first node.
OLR initialization - successful
2013/07/05 14:34:35 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2013/07/05 14:44:38 CLSRSC-343: Successfully started Oracle clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2013/07/05 14:46:16 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
After this the software version gets upgraded, but the active version remains the lower 11.2 version until the second node is upgraded as well.
[root@rhel6m1 grid]# crsctl query crs softwareversion
Oracle Clusterware version on node [rhel6m1] is [12.1.0.1.0]
[root@rhel6m1 grid]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]
Executing on the second node:
[root@rhel6m2 grid]# /opt/app/12.1.0/grid/rootupgrade.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /opt/app/12.1.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/12.1.0/grid/crs/install/crsconfig_params
OLR initialization - successful
2013/07/05 14:50:06 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2013/07/05 14:58:53 CLSRSC-343: Successfully started Oracle clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Start upgrade invoked..
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the OCR.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Oracle Clusterware operating version was successfully set to 12.1.0.1.0
2013/07/05 15:03:00 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
After the root script is executed on the last node the active version is upgraded.
[root@rhel6m2 grid]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.1.0]
Clicking OK on the execute-root-script dialog proceeds to the GI configuration tasks. The repository database mentioned earlier is created in these steps. In case of failures (not enough space, unable to write to a directory location, as mentioned earlier), refer to the logs, rectify the problems and retry.
This concludes the upgrade. If desired, the old GI home can be removed at this point. As root, change the ownership of the files and directories in the old GI home on all the nodes:
cd /opt/app/11.2.0
[root@rhel6m1 11.2.0]# chmod -R 775 grid
[root@rhel6m1 11.2.0]# chown -R grid grid
[root@rhel6m1 11.2.0]# cd /opt/app
[root@rhel6m1 app]# chown grid 11.2.0
Use the standalone deinstall tool to remove the old GI home; a hedged sketch of this is shown below.
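For illustration, assuming the standalone deinstall tool has been unzipped to /tmp/deinstall (an example location), it can be pointed at the old home as the grid user:
$ cd /tmp/deinstall
$ ./deinstall -home /opt/app/11.2.0/grid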
This concludes the upgrade of the 11gR2 to 12c (11.2.0.3 to 12.1.0.1) grid infrastructure.
The next step is to upgrade the RAC database.
Update 08 July 2013
It was mentioned earlier that the environment being upgraded was running the listener on a non-default port. However after the upgrade it seems that the listener running out of the 12c home is using the default port of 1521.
[grid@rhel6m1 admin]$ srvctl config listener
Name: MYLISTENER
Network: 1, Owner: grid
Home:
End points: TCP:1521
But the scan listener port changes remained intact.
[grid@rhel6m1 admin]$ srvctl config scan_listener
SCAN Listener MYLISTENER_SCAN1 exists. Port: TCP:9120/TCPS:1523
Registration invited nodes:
Registration invited subnets:
Because of this only the management repository database was registering with the listener and not the main database.
[grid@rhel6m1 admin]$ lsnrctl status mylistener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 08-JUL-2013 14:53:45
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=MYLISTENER)))
STATUS of the LISTENER
------------------------
Alias MYLISTENER
Version TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date 05-JUL-2013 15:02:50
Uptime 2 days 23 hr. 50 min. 55 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /opt/app/12.1.0/grid/network/admin/listener.ora
Listener Log File /opt/app/oracle/diag/tnslsnr/rhel6m1/mylistener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=MYLISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.93)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.97)(PORT=1521)))
Services Summary...
Service "-MGMTDBXDB" has 1 instance(s).
Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
Instance "-MGMTDB", status READY, has 2 handler(s) for this service...
The command completed successfully
To fix this, change the listener port back to the non-default port
srvctl modify listener -l MYLISTENER -p 9120
and also set the local_listener parameter on the repository database
SQL> alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.97)(PORT=9120))' scope=both sid='*';
Once the listener is restarted, all instances (database and ASM) register with the listener running on the non-default port.
[grid@rhel6m1 admin]$ lsnrctl status mylistener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 08-JUL-2013 14:56:55
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=MYLISTENER)))
STATUS of the LISTENER
------------------------
Alias MYLISTENER
Version TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date 08-JUL-2013 14:56:42
Uptime 0 days 0 hr. 0 min. 12 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /opt/app/12.1.0/grid/network/admin/listener.ora
Listener Log File /opt/app/oracle/diag/tnslsnr/rhel6m1/mylistener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=MYLISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.93)(PORT=9120)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.97)(PORT=9120)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "-MGMTDBXDB" has 1 instance(s).
Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "myservice" has 1 instance(s).
Instance "std11g21", status READY, has 1 handler(s) for this service...
Service "std11g2" has 1 instance(s).
Instance "std11g21", status READY, has 1 handler(s) for this service...
Service "std11g2XDB" has 1 instance(s).
Instance "std11g21", status READY, has 1 handler(s) for this service...
The command completed successfully
It must also be noted that the COST related values and files had been moved to the new 12c home during the upgrade process. However the wallet file locations in $GI_HOME/network/admin/listener.ora and $ORACLE_HOME/network/admin/sqlnet.ora must be edited manually to reflect the new location of the wallet files (read update 31/07/2013 below).
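For illustration, the entry that typically needs updating in those files is the WALLET_LOCATION parameter; the directory shown here is a hypothetical new wallet path, not the one used in this environment:
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /opt/app/12.1.0/grid/network/admin/wallet)
    )
  )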
Useful metalink notes
Oracle 12c Upgrade Companion
[1462240.1]
RHEL6: 12c CVU Fails: Reference data is not available for verifying prerequisites on this operating system distribution
[ID 1567127.1]
Related Posts
Upgrading from 10.2.0.4 to 10.2.0.5 (Clusterware, RAC, ASM)
Upgrade from 10.2.0.5 to 11.2.0.3 (Clusterware, RAC, ASM)
Upgrade from 11.1.0.7 to 11.2.0.3 (Clusterware, ASM & RAC)
Upgrading from 11.1.0.7 to 11.2.0.3 with Transient Logical Standby
Upgrading RAC from 11.2.0.3 to 11.2.0.4 - Grid Infrastructure
Upgrading from 11.2.0.1 to 11.2.0.3 with in-place upgrade for RAC
In-place upgrade from 11.2.0.2 to 11.2.0.3
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 1
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 2
Upgrading RAC from 11.2.0.4 to 12.1.0.2 - Grid Infrastructure
Update 31 July 2013
When asked whether 12c is vulnerable to the security issue mentioned in the alert for CVE-2012-1675, Oracle stated that 12c is not vulnerable to this issue and that any steps done as per 1340831.1 could be revoked when upgraded to 12c.
The steps done to implement COST as per 1340831.1 are built into 12c, such as Restricting Service Registration for Oracle RAC Deployments and Valid Node Checking for Registration for RAC configurations. The same has been done for single instances, so according to Oracle no extra steps are required to secure 12c against the vulnerability mentioned in CVE-2012-1675. An illustrative snippet is shown below.
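For illustration only, valid node checking for registration on a single-instance (non-RAC) listener is controlled through a listener.ora parameter such as the following (the listener name LISTENER is an example; for RAC the invited node and subnet lists are managed against the SCAN listener through srvctl):
VALID_NODE_CHECKING_REGISTRATION_LISTENER = ON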