Sunday, October 8, 2017

Upgrading RAC from 12.1.0.2 to 12.2.0.1 - Grid Infrastructure

This post gives highlights of upgrading grid infrastructure from 12.1.0.2 to 12.2.0.1. The 12.1.0.2 cluster is running on RHEL 6.4 (the minimum RHEL 6 version supported for 12.2) in a role-separated configuration. The current cluster mode and versions are given below.
$ crsctl get cluster mode status
Cluster is running in "standard" mode

$ crsctl get node role config -all
Node 'rhel12c1' configured role is 'hub'
Node 'rhel12c2' configured role is 'hub'

$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
$  crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
$  crsctl query crs softwareversion -all
Oracle Clusterware version on node [rhel12c1] is [12.1.0.2.0]
Oracle Clusterware version on node [rhel12c2] is [12.1.0.2.0]
What is different from previous upgrades to 12.2 is that in this case the cluster uses GNS even though the cluster mode is not flex.
$ srvctl config gns
GNS is enabled.
GNS VIP addresses: 192.168.0.87
This has some naming consequences. For 12.1 the install doc states "If you specify a GNS domain, then the SCAN name defaults to clustername-scan.GNS_domain. Otherwise, it defaults to clustername-scan.current_domain. For example, if you start Oracle Grid Infrastructure installation from the server node1, the cluster name is mycluster, and the GNS domain is grid.example.com, then the SCAN Name is mycluster-scan.grid.example.com."
However this has been changed in 12.2. From the 12.2 install doc "If you specify a GNS domain, then the SCAN name defaults to clustername-scan.cluster_name.GNS_domain. Otherwise, it defaults to clustername-scan.current_domain. For example, if you start Oracle Grid Infrastructure installation from the server node1, the cluster name is mycluster, and the GNS domain is grid.example.com, then the SCAN Name is mycluster-scan.mycluster.grid.example.com." So when upgrading to 12.2 with GNS the SCAN name would go from
clustername-scan.GNS_domain
to
clustername-scan.cluster_name.GNS_domain
This caused a few issues, which are described later in the post.
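A simple way to observe this change is to note the SCAN name registered with the cluster before the upgrade and compare it afterwards (the output will show the environment's own names):
$ srvctl config scan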
Similar to the upgrade from 11.2.0.4 to 12.2, the disk group size needs to be increased to accommodate the MGMTDB. The disk group holding the current MGMTDB was roughly 30GB in size with about 16GB free.
NAME                             TOTAL_MB    FREE_MB TYPE
------------------------------ ---------- ---------- ------
CLUSFS                              30708      16852 NORMAL
FRA                                 10236       4648 EXTERN
DATA                                10236        960 EXTERN
However this storage was inadequate, and neither orachk nor cluvfy detected the shortfall; it only surfaced during the upgrade.
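The figures above correspond to the columns of v$asm_diskgroup; the same check can be repeated at any point with a query of this form:
SQL> select name, total_mb, free_mb, type from v$asm_diskgroup;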
As a solution, a new 40GB external redundancy disk group was created (see the sketch after the listing below) and the OCR and vote disks were moved to it.
NAME                             TOTAL_MB    FREE_MB TYPE
------------------------------ ---------- ---------- ------
CLUSFS                              30708      16852 NORMAL
GIMR                                40954      40858 EXTERN
FRA                                 10236       4648 EXTERN
DATA                                10236        960 EXTERN
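For reference, the creation of such a disk group would be along the following lines; a minimal sketch run in the ASM instance, assuming a 40GB candidate disk (the disk path below is a placeholder, not the actual device used here):
SQL> create diskgroup GIMR external redundancy
  disk '/dev/oracleasm/disks/GIMR01'
  attribute 'compatible.asm'='12.1';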

crsctl replace votedisk +GIMR
# ocrconfig -add +GIMR
# ocrconfig -delete +CLUSFS
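The relocation can be verified before moving on (ocrcheck needs root, the vote disk query can be run as grid):
# ocrcheck
$ crsctl query css votedisk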
The existing MGMTDB remains at the same location (CLUSFS).
ASMCMD> pwd
+clusfs/_MGMTDB/datafile
ASMCMD> ls
SYSAUX.257.894473747
SYSTEM.258.894473779
UNDOTBS1.259.894473815
During the upgrade the 12.2 OUI checks the space available in the disk group where the OCR and vote disks currently reside, not in the disk group holding the current MGMTDB. When the upgrade happens the old 12.1 MGMTDB is dropped and a new MGMTDB is created in the (new) disk group where the OCR resides.
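Once the upgrade is complete, the location of the newly created MGMTDB can be confirmed with srvctl (output reflects the local configuration):
$ srvctl config mgmtdb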

1. Apply the latest PSU available. In this case 12.1.0.2.170814 was applied before the readiness check. Check the upgrade readiness with the latest orachk version; the version used here was 12.2.0.1.3_20170719. The upgrade readiness report flagged the following patches as needed:
WARNING => Oracle patch 19855835 is not applied on RDBMS_HOME /opt/app/oracle/product/12.1.0/dbhome_2
WARNING => Oracle patch 20348910 is not applied on RDBMS_HOME /opt/app/oracle/product/12.1.0/dbhome_2
WARNING => Oracle patch 21856522 is not applied on RDBMS_HOME /opt/app/oracle/product/12.1.0/dbhome_2
WARNING => Oracle patch 20958816 is not applied on RDBMS_HOME /opt/app/oracle/product/12.1.0/dbhome_2
WARNING => GI PSU containing Oracle Patch 17617807 is not installed in GI_HOME /opt/app/12.1.0/grid2
The descriptions of the patches:
 Patch 19855835: UPGRADE FROM 11.2.0.2 TO 11.2.0.4 IS SLOW
 Patch 20348910: ALTER TYPE REPLACE IN PRVTAQJI.SQL TO BE REPLACE WITH CREATE OR REPLACE TYPE
 Patch 21856522: UPGRADE OF 12.1 TO 12.2 CAUSE XOQ COMPONENT TO BE INVALID
 Patch 20958816: INVALID OBJECTS AFTER DOWNGRADE FROM 12.2.0.1 TO 12.1.0.2
If the aforementioned PSU is applied then patch 21856522 can be skipped. Also, patch 17617807 is applicable to 12.1.0.1, not 12.1.0.2 (refer MOS note 2180188.1). Another patch that is needed and is flagged by cluvfy (but not by orachk) is
Patch 21255373: CSSD : DUPLICATE RESPONSE IN GROUP DATA UPDATE
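Whether a particular one-off is already present in a home can be checked with opatch before deciding what to apply, and the readiness check itself is run from the unzipped orachk kit with its pre-upgrade option (-u -o pre); for example:
$ /opt/app/12.1.0/grid2/OPatch/opatch lsinventory | grep 21255373
$ ./orachk -u -o pre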
2. Run cluvfy with the upgrade option. The failures reported by cluvfy are due to the smaller swap size and a known issue related to DNS, both of which could be ignored.
$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /opt/app/12.1.0/grid2 -dest_crshome /opt/app/12.2.0/grid -dest_version 12.2.0.1.0 -fixup -verbose

Verifying Physical Memory ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      7.6873GB (8060736.0KB)    8GB (8388608.0KB)         passed
  rhel12c1      7.6873GB (8060736.0KB)    8GB (8388608.0KB)         passed
Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      6.2128GB (6514544.0KB)    50MB (51200.0KB)          passed
  rhel12c1      5.3689GB (5629748.0KB)    50MB (51200.0KB)          passed
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      5GB (5242872.0KB)         7.6873GB (8060736.0KB)    failed
  rhel12c1      5GB (5242872.0KB)         7.6873GB (8060736.0KB)    failed
Verifying Swap Size ...FAILED (PRVF-7573)
Verifying Free Space: rhel12c2:/usr,rhel12c2:/var,rhel12c2:/etc,rhel12c2:/sbin,rhel12c2:/tmp ...
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /usr              rhel12c2      /             13.582GB      25MB          passed
  /var              rhel12c2      /             13.582GB      5MB           passed
  /etc              rhel12c2      /             13.582GB      25MB          passed
  /sbin             rhel12c2      /             13.582GB      10MB          passed
  /tmp              rhel12c2      /             13.582GB      1GB           passed
Verifying Free Space: rhel12c2:/usr,rhel12c2:/var,rhel12c2:/etc,rhel12c2:/sbin,rhel12c2:/tmp ...PASSED
Verifying Free Space: rhel12c1:/usr,rhel12c1:/var,rhel12c1:/etc,rhel12c1:/sbin,rhel12c1:/tmp ...
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /usr              rhel12c1      /             7.2881GB      25MB          passed
  /var              rhel12c1      /             7.2881GB      5MB           passed
  /etc              rhel12c1      /             7.2881GB      25MB          passed
  /sbin             rhel12c1      /             7.2881GB      10MB          passed
  /tmp              rhel12c1      /             7.2881GB      1GB           passed
Verifying Free Space: rhel12c1:/usr,rhel12c1:/var,rhel12c1:/etc,rhel12c1:/sbin,rhel12c1:/tmp ...PASSED
Verifying User Existence: grid ...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rhel12c2      passed                    exists(501)
  rhel12c1      passed                    exists(501)

  Verifying Users With Same UID: 501 ...PASSED
Verifying User Existence: grid ...PASSED
Verifying Group Existence: asmadmin ...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rhel12c2      passed                    exists
  rhel12c1      passed                    exists
Verifying Group Existence: asmadmin ...PASSED
Verifying Group Existence: asmoper ...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rhel12c2      passed                    exists
  rhel12c1      passed                    exists
Verifying Group Existence: asmoper ...PASSED
Verifying Group Existence: asmdba ...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rhel12c2      passed                    exists
  rhel12c1      passed                    exists
Verifying Group Existence: asmdba ...PASSED
Verifying Group Existence: oinstall ...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rhel12c2      passed                    exists
  rhel12c1      passed                    exists
Verifying Group Existence: oinstall ...PASSED
Verifying Group Membership: asmdba ...
  Node Name         User Exists   Group Exists  User in Group  Status
  ----------------  ------------  ------------  ------------  ----------------
  rhel12c2          yes           yes           yes           passed
  rhel12c1          yes           yes           yes           passed
Verifying Group Membership: asmdba ...PASSED
Verifying Group Membership: asmadmin ...
  Node Name         User Exists   Group Exists  User in Group  Status
  ----------------  ------------  ------------  ------------  ----------------
  rhel12c2          yes           yes           yes           passed
  rhel12c1          yes           yes           yes           passed
Verifying Group Membership: asmadmin ...PASSED
Verifying Group Membership: oinstall(Primary) ...
  Node Name         User Exists   Group Exists  User in Group  Primary       Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  rhel12c2          yes           yes           yes           yes           passed
  rhel12c1          yes           yes           yes           yes           passed
Verifying Group Membership: oinstall(Primary) ...PASSED
Verifying Group Membership: asmoper ...
  Node Name         User Exists   Group Exists  User in Group  Status
  ----------------  ------------  ------------  ------------  ----------------
  rhel12c2          yes           yes           yes           passed
  rhel12c1          yes           yes           yes           passed
Verifying Group Membership: asmoper ...PASSED
Verifying Run Level ...
  Node Name     run level                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      3                         3,5                       passed
  rhel12c1      3                         3,5                       passed
Verifying Run Level ...PASSED
Verifying Hard Limit: maximum open file descriptors ...
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rhel12c2          hard          65536         65536         passed
  rhel12c1          hard          65536         65536         passed
Verifying Hard Limit: maximum open file descriptors ...PASSED
Verifying Soft Limit: maximum open file descriptors ...
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rhel12c2          soft          1024          1024          passed
  rhel12c1          soft          1024          1024          passed
Verifying Soft Limit: maximum open file descriptors ...PASSED
Verifying Hard Limit: maximum user processes ...
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rhel12c2          hard          16384         16384         passed
  rhel12c1          hard          16384         16384         passed
Verifying Hard Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum user processes ...
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rhel12c2          soft          2047          2047          passed
  rhel12c1          soft          2047          2047          passed
Verifying Soft Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum stack size ...
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rhel12c2          soft          10240         10240         passed
  rhel12c1          soft          10240         10240         passed
Verifying Soft Limit: maximum stack size ...PASSED
Verifying Oracle patch:21255373 ...
  Node Name     Applied                   Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  rhel12c1      21255373                  21255373                  passed
  rhel12c2      21255373                  21255373                  passed
Verifying Oracle patch:21255373 ...PASSED
Verifying This test checks that the source home "/opt/app/12.1.0/grid2" is suitable for upgrading to version "12.2.0.1.0". ...PASSED
Verifying Architecture ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      x86_64                    x86_64                    passed
  rhel12c1      x86_64                    x86_64                    passed
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      2.6.32-358.el6.x86_64     2.6.32                    passed
  rhel12c1      2.6.32-358.el6.x86_64     2.6.32                    passed
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rhel12c1          3010          3010          250           passed
  rhel12c2          3010          3010          250           passed
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rhel12c1          385280        385280        32000         passed
  rhel12c2          385280        385280        32000         passed
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rhel12c1          3010          3010          100           passed
  rhel12c2          3010          3010          100           passed
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rhel12c1          128           128           128           passed
  rhel12c2          128           128           128           passed
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rhel12c1          68719476736   68719476736   4127096832    passed
  rhel12c2          68719476736   68719476736   4127096832    passed
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rhel12c1          4096          4096          4096          passed
  rhel12c2          4096          4096          4096          passed
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rhel12c1          4294967296    4294967296    806073        passed
  rhel12c2          4294967296    4294967296    806073        passed
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rhel12c1          6815744       6815744       6815744       passed
  rhel12c2          6815744       6815744       6815744       passed
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: ip_local_port_range ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rhel12c1          between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed
  rhel12c2          between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed
Verifying OS Kernel Parameter: ip_local_port_range ...PASSED
Verifying OS Kernel Parameter: rmem_default ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rhel12c1          4194304       4194304       262144        passed
  rhel12c2          4194304       4194304       262144        passed
Verifying OS Kernel Parameter: rmem_default ...PASSED
Verifying OS Kernel Parameter: rmem_max ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rhel12c1          4194304       4194304       4194304       passed
  rhel12c2          4194304       4194304       4194304       passed
Verifying OS Kernel Parameter: rmem_max ...PASSED
Verifying OS Kernel Parameter: wmem_default ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rhel12c1          1048576       1048576       262144        passed
  rhel12c2          1048576       1048576       262144        passed
Verifying OS Kernel Parameter: wmem_default ...PASSED
Verifying OS Kernel Parameter: wmem_max ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rhel12c1          2097152       2097152       1048576       passed
  rhel12c2          2097152       2097152       1048576       passed
Verifying OS Kernel Parameter: wmem_max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rhel12c1          3145728       3145728       1048576       passed
  rhel12c2          3145728       3145728       1048576       passed
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying OS Kernel Parameter: panic_on_oops ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rhel12c1          1             1             1             passed
  rhel12c2          1             1             1             passed
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: binutils-2.20.51.0.2 ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2      passed
  rhel12c1      binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2      passed
Verifying Package: binutils-2.20.51.0.2 ...PASSED
Verifying Package: compat-libcap1-1.10 ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      compat-libcap1-1.10-1     compat-libcap1-1.10       passed
  rhel12c1      compat-libcap1-1.10-1     compat-libcap1-1.10       passed
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: compat-libstdc++-33-3.2.3 (x86_64) ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed
  rhel12c1      compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed
Verifying Package: compat-libstdc++-33-3.2.3 (x86_64) ...PASSED
Verifying Package: libgcc-4.4.7 (x86_64) ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      libgcc(x86_64)-4.4.7-3.el6  libgcc(x86_64)-4.4.7      passed
  rhel12c1      libgcc(x86_64)-4.4.7-3.el6  libgcc(x86_64)-4.4.7      passed
Verifying Package: libgcc-4.4.7 (x86_64) ...PASSED
Verifying Package: libstdc++-4.4.7 (x86_64) ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      libstdc++(x86_64)-4.4.7-3.el6  libstdc++(x86_64)-4.4.7   passed
  rhel12c1      libstdc++(x86_64)-4.4.7-3.el6  libstdc++(x86_64)-4.4.7   passed
Verifying Package: libstdc++-4.4.7 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.4.7 (x86_64) ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      libstdc++-devel(x86_64)-4.4.7-3.el6  libstdc++-devel(x86_64)-4.4.7  passed
  rhel12c1      libstdc++-devel(x86_64)-4.4.7-3.el6  libstdc++-devel(x86_64)-4.4.7  passed
Verifying Package: libstdc++-devel-4.4.7 (x86_64) ...PASSED
Verifying Package: sysstat-9.0.4 ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      sysstat-9.0.4-20.el6      sysstat-9.0.4             passed
  rhel12c1      sysstat-9.0.4-20.el6      sysstat-9.0.4             passed
Verifying Package: sysstat-9.0.4 ...PASSED
Verifying Package: gcc-4.4.7 ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      gcc-4.4.7-3.el6           gcc-4.4.7                 passed
  rhel12c1      gcc-4.4.7-3.el6           gcc-4.4.7                 passed
Verifying Package: gcc-4.4.7 ...PASSED
Verifying Package: gcc-c++-4.4.7 ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      gcc-c++-4.4.7-3.el6       gcc-c++-4.4.7             passed
  rhel12c1      gcc-c++-4.4.7-3.el6       gcc-c++-4.4.7             passed
Verifying Package: gcc-c++-4.4.7 ...PASSED
Verifying Package: ksh ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      ksh                       ksh                       passed
  rhel12c1      ksh                       ksh                       passed
Verifying Package: ksh ...PASSED
Verifying Package: make-3.81 ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      make-3.81-20.el6          make-3.81                 passed
  rhel12c1      make-3.81-20.el6          make-3.81                 passed
Verifying Package: make-3.81 ...PASSED
Verifying Package: glibc-2.12 (x86_64) ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      glibc(x86_64)-2.12-1.107.el6  glibc(x86_64)-2.12        passed
  rhel12c1      glibc(x86_64)-2.12-1.107.el6  glibc(x86_64)-2.12        passed
Verifying Package: glibc-2.12 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.12 (x86_64) ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      glibc-devel(x86_64)-2.12-1.107.el6  glibc-devel(x86_64)-2.12  passed
  rhel12c1      glibc-devel(x86_64)-2.12-1.107.el6  glibc-devel(x86_64)-2.12  passed
Verifying Package: glibc-devel-2.12 (x86_64) ...PASSED
Verifying Package: libaio-0.3.107 (x86_64) ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed
  rhel12c1      libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed
Verifying Package: libaio-0.3.107 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.107 (x86_64) ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed
  rhel12c1      libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed
Verifying Package: libaio-devel-0.3.107 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      nfs-utils-1.2.3-36.el6    nfs-utils-1.2.3-15        passed
  rhel12c1      nfs-utils-1.2.3-36.el6    nfs-utils-1.2.3-15        passed
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-5.43-1 ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      smartmontools-5.43-1.el6  smartmontools-5.43-1      passed
  rhel12c1      smartmontools-5.43-1.el6  smartmontools-5.43-1      passed
Verifying Package: smartmontools-5.43-1 ...PASSED
Verifying Package: net-tools-1.60-110 ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      net-tools-1.60-110.el6_2  net-tools-1.60-110        passed
  rhel12c1      net-tools-1.60-110.el6_2  net-tools-1.60-110        passed
Verifying Package: net-tools-1.60-110 ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...
  Node Name                             Status
  ------------------------------------  ------------------------
  rhel12c2                              passed
  rhel12c1                              passed
Verifying Root user consistency ...PASSED
Verifying correctness of ASM disk group files ownership ...PASSED
Verifying selectivity of ASM discovery string ...PASSED
Verifying ASM spare parameters ...PASSED
Verifying Disk group ASM compatibility setting ...PASSED
Verifying Package: cvuqdisk-1.0.10-1 ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      cvuqdisk-1.0.10-1         cvuqdisk-1.0.10-1         passed
  rhel12c1      cvuqdisk-1.0.10-1         cvuqdisk-1.0.10-1         passed
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying Node Connectivity ...
  Verifying Hosts File ...
  Node Name                             Status
  ------------------------------------  ------------------------
  rhel12c1                              passed
  rhel12c2                              passed
  Verifying Hosts File ...PASSED

Interface information for node "rhel12c1"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.0.93    192.168.0.0     0.0.0.0         192.168.0.102   08:00:27:CB:D8:AE 1500
 eth0   192.168.0.87    192.168.0.0     0.0.0.0         192.168.0.102   08:00:27:CB:D8:AE 1500
 eth0   192.168.0.90    192.168.0.0     0.0.0.0         192.168.0.102   08:00:27:CB:D8:AE 1500
 eth0   192.168.0.91    192.168.0.0     0.0.0.0         192.168.0.102   08:00:27:CB:D8:AE 1500
 eth0   192.168.0.97    192.168.0.0     0.0.0.0         192.168.0.102   08:00:27:CB:D8:AE 1500
 eth1   192.168.1.87    192.168.1.0     0.0.0.0         192.168.0.102   08:00:27:92:0B:69 1500

Interface information for node "rhel12c2"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.0.94    192.168.0.0     0.0.0.0         192.168.0.102   08:00:27:9C:66:81 1500
 eth0   192.168.0.89    192.168.0.0     0.0.0.0         192.168.0.102   08:00:27:9C:66:81 1500
 eth0   192.168.0.88    192.168.0.0     0.0.0.0         192.168.0.102   08:00:27:9C:66:81 1500
 eth1   192.168.1.88    192.168.1.0     0.0.0.0         192.168.0.102   08:00:27:1E:0E:A8 1500

Check: MTU consistency on the private interfaces of subnet "192.168.1.0"

  Node              Name          IP Address    Subnet        MTU
  ----------------  ------------  ------------  ------------  ----------------
  rhel12c1          eth1          192.168.1.87  192.168.1.0   1500
  rhel12c2          eth1          192.168.1.88  192.168.1.0   1500

Check: MTU consistency of the subnet "192.168.0.0".

  Node              Name          IP Address    Subnet        MTU
  ----------------  ------------  ------------  ------------  ----------------
  rhel12c1          eth0          192.168.0.93  192.168.0.0   1500
  rhel12c1          eth0          192.168.0.87  192.168.0.0   1500
  rhel12c1          eth0          192.168.0.90  192.168.0.0   1500
  rhel12c1          eth0          192.168.0.91  192.168.0.0   1500
  rhel12c1          eth0          192.168.0.97  192.168.0.0   1500
  rhel12c2          eth0          192.168.0.94  192.168.0.0   1500
  rhel12c2          eth0          192.168.0.89  192.168.0.0   1500
  rhel12c2          eth0          192.168.0.88  192.168.0.0   1500

  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rhel12c1[eth1:192.168.1.87]     rhel12c2[eth1:192.168.1.88]     yes

  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rhel12c1[eth0:192.168.0.93]     rhel12c1[eth0:192.168.0.87]     yes
  rhel12c1[eth0:192.168.0.93]     rhel12c1[eth0:192.168.0.90]     yes
  rhel12c1[eth0:192.168.0.93]     rhel12c1[eth0:192.168.0.91]     yes
  rhel12c1[eth0:192.168.0.93]     rhel12c1[eth0:192.168.0.97]     yes
  rhel12c1[eth0:192.168.0.93]     rhel12c2[eth0:192.168.0.94]     yes
  rhel12c1[eth0:192.168.0.93]     rhel12c2[eth0:192.168.0.89]     yes
  rhel12c1[eth0:192.168.0.93]     rhel12c2[eth0:192.168.0.88]     yes
  rhel12c1[eth0:192.168.0.87]     rhel12c1[eth0:192.168.0.90]     yes
  rhel12c1[eth0:192.168.0.87]     rhel12c1[eth0:192.168.0.91]     yes
  rhel12c1[eth0:192.168.0.87]     rhel12c1[eth0:192.168.0.97]     yes
  rhel12c1[eth0:192.168.0.87]     rhel12c2[eth0:192.168.0.94]     yes
  rhel12c1[eth0:192.168.0.87]     rhel12c2[eth0:192.168.0.89]     yes
  rhel12c1[eth0:192.168.0.87]     rhel12c2[eth0:192.168.0.88]     yes
  rhel12c1[eth0:192.168.0.90]     rhel12c1[eth0:192.168.0.91]     yes
  rhel12c1[eth0:192.168.0.90]     rhel12c1[eth0:192.168.0.97]     yes
  rhel12c1[eth0:192.168.0.90]     rhel12c2[eth0:192.168.0.94]     yes
  rhel12c1[eth0:192.168.0.90]     rhel12c2[eth0:192.168.0.89]     yes
  rhel12c1[eth0:192.168.0.90]     rhel12c2[eth0:192.168.0.88]     yes
  rhel12c1[eth0:192.168.0.91]     rhel12c1[eth0:192.168.0.97]     yes
  rhel12c1[eth0:192.168.0.91]     rhel12c2[eth0:192.168.0.94]     yes
  rhel12c1[eth0:192.168.0.91]     rhel12c2[eth0:192.168.0.89]     yes
  rhel12c1[eth0:192.168.0.91]     rhel12c2[eth0:192.168.0.88]     yes
  rhel12c1[eth0:192.168.0.97]     rhel12c2[eth0:192.168.0.94]     yes
  rhel12c1[eth0:192.168.0.97]     rhel12c2[eth0:192.168.0.89]     yes
  rhel12c1[eth0:192.168.0.97]     rhel12c2[eth0:192.168.0.88]     yes
  rhel12c2[eth0:192.168.0.94]     rhel12c2[eth0:192.168.0.89]     yes
  rhel12c2[eth0:192.168.0.94]     rhel12c2[eth0:192.168.0.88]     yes
  rhel12c2[eth0:192.168.0.89]     rhel12c2[eth0:192.168.0.88]     yes
  Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
  Verifying subnet mask consistency for subnet "192.168.1.0" ...PASSED
  Verifying subnet mask consistency for subnet "192.168.0.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast check ...
Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"
Verifying Multicast check ...PASSED
Verifying ASM Integrity ...
  Verifying Node Connectivity ...
    Verifying Hosts File ...
  Node Name                             Status
  ------------------------------------  ------------------------
  rhel12c1                              passed
    Verifying Hosts File ...PASSED

Interface information for node "rhel12c1"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.0.93    192.168.0.0     0.0.0.0         192.168.0.102   08:00:27:CB:D8:AE 1500
 eth0   192.168.0.87    192.168.0.0     0.0.0.0         192.168.0.102   08:00:27:CB:D8:AE 1500
 eth0   192.168.0.90    192.168.0.0     0.0.0.0         192.168.0.102   08:00:27:CB:D8:AE 1500
 eth0   192.168.0.91    192.168.0.0     0.0.0.0         192.168.0.102   08:00:27:CB:D8:AE 1500
 eth0   192.168.0.97    192.168.0.0     0.0.0.0         192.168.0.102   08:00:27:CB:D8:AE 1500
 eth1   192.168.1.87    192.168.1.0     0.0.0.0         192.168.0.102   08:00:27:92:0B:69 1500

Check: MTU consistency on the private interfaces of subnet "192.168.1.0"

  Node              Name          IP Address    Subnet        MTU
  ----------------  ------------  ------------  ------------  ----------------
  rhel12c1          eth1          192.168.1.87  192.168.1.0   1500

Check: MTU consistency of the subnet "192.168.0.0".

  Node              Name          IP Address    Subnet        MTU
  ----------------  ------------  ------------  ------------  ----------------
  rhel12c1          eth0          192.168.0.93  192.168.0.0   1500
  rhel12c1          eth0          192.168.0.87  192.168.0.0   1500
  rhel12c1          eth0          192.168.0.90  192.168.0.0   1500
  rhel12c1          eth0          192.168.0.91  192.168.0.0   1500
  rhel12c1          eth0          192.168.0.97  192.168.0.0   1500

  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rhel12c1[eth0:192.168.0.93]     rhel12c1[eth0:192.168.0.87]     yes
  rhel12c1[eth0:192.168.0.93]     rhel12c1[eth0:192.168.0.90]     yes
  rhel12c1[eth0:192.168.0.93]     rhel12c1[eth0:192.168.0.91]     yes
  rhel12c1[eth0:192.168.0.93]     rhel12c1[eth0:192.168.0.97]     yes
  rhel12c1[eth0:192.168.0.87]     rhel12c1[eth0:192.168.0.90]     yes
  rhel12c1[eth0:192.168.0.87]     rhel12c1[eth0:192.168.0.91]     yes
  rhel12c1[eth0:192.168.0.87]     rhel12c1[eth0:192.168.0.97]     yes
  rhel12c1[eth0:192.168.0.90]     rhel12c1[eth0:192.168.0.91]     yes
  rhel12c1[eth0:192.168.0.90]     rhel12c1[eth0:192.168.0.97]     yes
  rhel12c1[eth0:192.168.0.91]     rhel12c1[eth0:192.168.0.97]     yes
    Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
  Verifying Node Connectivity ...PASSED
Verifying ASM Integrity ...PASSED
Verifying OCR Integrity ...PASSED
Verifying Network Time Protocol (NTP) ...
  Verifying '/etc/ntp.conf' ...
  Node Name                             File exists?
  ------------------------------------  ------------------------
  rhel12c2                              no
  rhel12c1                              no

  Verifying '/etc/ntp.conf' ...PASSED
  Verifying '/var/run/ntpd.pid' ...
  Node Name                             File exists?
  ------------------------------------  ------------------------
  rhel12c2                              no
  rhel12c1                              no

  Verifying '/var/run/ntpd.pid' ...PASSED
Verifying Network Time Protocol (NTP) ...PASSED
Verifying Same core file name pattern ...PASSED
Verifying User Mask ...
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      0022                      0022                      passed
  rhel12c1      0022                      0022                      passed
Verifying User Mask ...PASSED
Verifying User Not In Group "root": grid ...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rhel12c2      passed                    does not exist
  rhel12c1      passed                    does not exist
Verifying User Not In Group "root": grid ...PASSED
Verifying Time zone consistency ...PASSED
Verifying VIP Subnet configuration check ...PASSED
Verifying Voting Disk ...PASSED
Verifying resolv.conf Integrity ...
  Verifying (Linux) resolv.conf Integrity ...
  Node Name                             Status
  ------------------------------------  ------------------------
  rhel12c1                              failed
  rhel12c2                              failed

  checking response for name "rhel12c2" from each of the name servers specified
  in "/etc/resolv.conf"

  Node Name     Source                    Comment                   Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c2      192.168.0.85              IPv4                      passed

  checking response for name "rhel12c1" from each of the name servers specified
  in "/etc/resolv.conf"

  Node Name     Source                    Comment                   Status
  ------------  ------------------------  ------------------------  ----------
  rhel12c1      192.168.0.85              IPv4                      passed
  Verifying (Linux) resolv.conf Integrity ...FAILED (PRVF-5636)
Verifying resolv.conf Integrity ...FAILED (PRVF-5636)
Verifying DNS/NIS name service ...PASSED
Verifying Daemon "avahi-daemon" not configured and running ...
  Node Name     Configured                Status
  ------------  ------------------------  ------------------------
  rhel12c2      no                        passed
  rhel12c1      no                        passed

  Node Name     Running?                  Status
  ------------  ------------------------  ------------------------
  rhel12c2      no                        passed
  rhel12c1      no                        passed
Verifying Daemon "avahi-daemon" not configured and running ...PASSED
Verifying Daemon "proxyt" not configured and running ...
  Node Name     Configured                Status
  ------------  ------------------------  ------------------------
  rhel12c2      no                        passed
  rhel12c1      no                        passed

  Node Name     Running?                  Status
  ------------  ------------------------  ------------------------
  rhel12c2      no                        passed
  rhel12c1      no                        passed
Verifying Daemon "proxyt" not configured and running ...PASSED
Verifying loopback network interface address ...PASSED
Verifying Grid Infrastructure home path: /opt/app/12.2.0/grid ...
  Verifying '/opt/app/12.2.0/grid' ...PASSED
Verifying Grid Infrastructure home path: /opt/app/12.2.0/grid ...PASSED
Verifying Privileged group consistency for upgrade ...PASSED
Verifying CRS user Consistency for upgrade ...PASSED
Verifying Clusterware Version Consistency ...PASSED
Verifying Check incorrectly sized ASM Disks ...PASSED
Verifying Network configuration consistency checks ...PASSED
Verifying File system mount options for path GI_HOME ...PASSED
Verifying /boot mount ...PASSED
Verifying OLR Integrity ...PASSED
Verifying Verify that the ASM instance was configured using an existing ASM parameter file. ...PASSED
Verifying User Equivalence ...PASSED
Verifying /dev/shm mounted as temporary file system ...PASSED
Verifying File system mount options for path /var ...PASSED
Verifying zeroconf check ...PASSED
Verifying ASM Filter Driver configuration ...PASSED

Pre-check for cluster services setup was unsuccessful.
Checks did not pass for the following nodes:
        rhel12c2,rhel12c1



3. Create a new out-of-place location for the 12.2 grid home and run gridSetup.sh.
mkdir -p /opt/app/12.2.0/grid
chown -R grid:oinstall /opt/app/12.2.0
cp ~/linuxx64_12201_grid_home.zip /opt/app/12.2.0/grid
cd /opt/app/12.2.0/grid
unzip linuxx64_12201_grid_home.zip

unset ORACLE_BASE
unset ORACLE_HOME
unset ORACLE_SID

./gridSetup.sh
As mentioned earlier, the prerequisite check failures could be ignored in this case.
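In this case gridSetup.sh was run interactively; for an unattended run it also accepts a response file, roughly as below (the response file path and its contents are placeholders):
$ ./gridSetup.sh -silent -responseFile /tmp/grid_upgrade.rsp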
Execute rootupgrade.sh on each node when prompted.
4. Root script run on node 1.
[root@rhel12c1 ~]# /opt/app/12.2.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /opt/app/12.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /opt/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/app/oracle/crsdata/rhel12c1/crsconfig/rootcrs_rhel12c1_2017-09-19_12-20-07AM.log
2017/09/19 12:20:13 CLSRSC-595: Executing upgrade step 1 of 19: 'UpgradeTFA'.
2017/09/19 12:20:13 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2017/09/19 12:20:41 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2017/09/19 12:20:41 CLSRSC-595: Executing upgrade step 2 of 19: 'ValidateEnv'.
2017/09/19 12:21:04 CLSRSC-595: Executing upgrade step 3 of 19: 'GenSiteGUIDs'.
2017/09/19 12:21:06 CLSRSC-595: Executing upgrade step 4 of 19: 'GetOldConfig'.
2017/09/19 12:21:06 CLSRSC-464: Starting retrieval of the cluster configuration data
2017/09/19 12:21:24 CLSRSC-515: Starting OCR manual backup.
2017/09/19 12:21:33 CLSRSC-516: OCR manual backup successful.
2017/09/19 12:22:02 CLSRSC-486:
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2017/09/19 12:22:02 CLSRSC-541:
 To downgrade the cluster:
 1. All nodes that have been upgraded must be downgraded.
2017/09/19 12:22:02 CLSRSC-542:
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2017/09/19 12:22:02 CLSRSC-615:
 3. The last node to downgrade cannot be a Leaf node.
2017/09/19 12:22:17 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2017/09/19 12:22:17 CLSRSC-595: Executing upgrade step 5 of 19: 'UpgPrechecks'.
2017/09/19 12:22:24 CLSRSC-363: User ignored prerequisites during installation
2017/09/19 12:22:54 CLSRSC-595: Executing upgrade step 6 of 19: 'SaveParamFile'.
2017/09/19 12:23:10 CLSRSC-595: Executing upgrade step 7 of 19: 'SetupOSD'.
2017/09/19 12:23:34 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
2017/09/19 12:23:43 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2017/09/19 12:23:43 CLSRSC-482: Running command: '/opt/app/12.1.0/grid2/bin/crsctl start rollingupgrade 12.2.0.1.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2017/09/19 12:24:11 CLSRSC-482: Running command: '/opt/app/12.2.0/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /opt/app/12.1.0/grid2 -oldCRSVersion 12.1.0.2.0 -firstNode true -startRolling false '

ASM configuration upgraded in local node successfully.

2017/09/19 12:24:16 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2017/09/19 12:24:36 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2017/09/19 12:25:06 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2017/09/19 12:25:08 CLSRSC-595: Executing upgrade step 9 of 19: 'CheckCRSConfig'.
2017/09/19 12:25:09 CLSRSC-595: Executing upgrade step 10 of 19: 'UpgradeOLR'.
2017/09/19 12:25:20 CLSRSC-595: Executing upgrade step 11 of 19: 'ConfigCHMOS'.
2017/09/19 12:25:20 CLSRSC-595: Executing upgrade step 12 of 19: 'InstallAFD'.
2017/09/19 12:25:30 CLSRSC-595: Executing upgrade step 13 of 19: 'createOHASD'.
2017/09/19 12:25:41 CLSRSC-595: Executing upgrade step 14 of 19: 'ConfigOHASD'.
2017/09/19 12:25:57 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
2017/09/19 12:26:23 CLSRSC-595: Executing upgrade step 15 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel12c1'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel12c1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/09/19 12:27:10 CLSRSC-595: Executing upgrade step 16 of 19: 'InstallKA'.
2017/09/19 12:27:20 CLSRSC-595: Executing upgrade step 17 of 19: 'UpgradeCluster'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel12c1'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel12c1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rhel12c1'
CRS-2672: Attempting to start 'ora.evmd' on 'rhel12c1'
CRS-2676: Start of 'ora.evmd' on 'rhel12c1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rhel12c1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rhel12c1'
CRS-2676: Start of 'ora.gpnpd' on 'rhel12c1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rhel12c1'
CRS-2676: Start of 'ora.gipcd' on 'rhel12c1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rhel12c1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rhel12c1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rhel12c1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rhel12c1'
CRS-2676: Start of 'ora.diskmon' on 'rhel12c1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rhel12c1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rhel12c1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rhel12c1'
CRS-2676: Start of 'ora.ctssd' on 'rhel12c1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rhel12c1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rhel12c1'
CRS-2676: Start of 'ora.asm' on 'rhel12c1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rhel12c1'
CRS-2676: Start of 'ora.storage' on 'rhel12c1' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rhel12c1'
CRS-2676: Start of 'ora.crf' on 'rhel12c1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rhel12c1'
CRS-2676: Start of 'ora.crsd' on 'rhel12c1' succeeded
CRS-6017: Processing resource auto-start for servers: rhel12c1
CRS-2672: Attempting to start 'ora.ons' on 'rhel12c1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.rhel12c1.vip' on 'rhel12c2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rhel12c2'
CRS-2677: Stop of 'ora.rhel12c1.vip' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.rhel12c1.vip' on 'rhel12c1'
CRS-2677: Stop of 'ora.scan1.vip' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rhel12c1'
CRS-2676: Start of 'ora.rhel12c1.vip' on 'rhel12c1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'rhel12c1'
CRS-2676: Start of 'ora.scan1.vip' on 'rhel12c1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rhel12c1'
CRS-2676: Start of 'ora.ons' on 'rhel12c1' succeeded
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'rhel12c1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rhel12c1'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rhel12c1' succeeded
CRS-2676: Start of 'ora.asm' on 'rhel12c1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rhel12c1'
CRS-2676: Start of 'ora.DATA.dg' on 'rhel12c1' succeeded
CRS-2672: Attempting to start 'ora.FRA.dg' on 'rhel12c1'
CRS-2676: Start of 'ora.FRA.dg' on 'rhel12c1' succeeded
CRS-2672: Attempting to start 'ora.std12c1.db' on 'rhel12c1'
CRS-2676: Start of 'ora.std12c1.db' on 'rhel12c1' succeeded
CRS-2672: Attempting to start 'ora.std12c1.tbx.svc' on 'rhel12c1'
CRS-2676: Start of 'ora.std12c1.tbx.svc' on 'rhel12c1' succeeded
CRS-6016: Resource auto-start has completed for server rhel12c1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/09/19 12:29:05 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2017/09/19 12:29:22 CLSRSC-595: Executing upgrade step 18 of 19: 'UpgradeNode'.
2017/09/19 12:29:29 CLSRSC-474: Initiating upgrade of resource types
2017/09/19 12:30:31 CLSRSC-482: Running command: 'srvctl upgrade model -s 12.1.0.2.0 -d 12.2.0.1.0 -p first'
2017/09/19 12:30:31 CLSRSC-475: Upgrade of resource types successfully initiated.
2017/09/19 12:30:59 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.
2017/09/19 12:31:09 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
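At this point the cluster is in a mixed-version state: rhel12c1 is running the 12.2 software while the active version stays at 12.1.0.2.0 until the last node is done. This can be observed with the same queries used at the start of the post:
$ crsctl query crs softwareversion -all
$ crsctl query crs activeversion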
5. Root script run on the last node.
[root@rhel12c2 ~]# /opt/app/12.2.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /opt/app/12.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option

Using configuration parameter file: /opt/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/app/oracle/crsdata/rhel12c2/crsconfig/rootcrs_rhel12c2_2017-09-19_12-33-52AM.log
2017/09/19 12:33:57 CLSRSC-595: Executing upgrade step 1 of 19: 'UpgradeTFA'.
2017/09/19 12:33:57 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2017/09/19 12:34:24 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2017/09/19 12:34:25 CLSRSC-595: Executing upgrade step 2 of 19: 'ValidateEnv'.
2017/09/19 12:34:32 CLSRSC-595: Executing upgrade step 3 of 19: 'GenSiteGUIDs'.
2017/09/19 12:34:32 CLSRSC-595: Executing upgrade step 4 of 19: 'GetOldConfig'.
2017/09/19 12:34:32 CLSRSC-464: Starting retrieval of the cluster configuration data
2017/09/19 12:34:49 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2017/09/19 12:34:49 CLSRSC-595: Executing upgrade step 5 of 19: 'UpgPrechecks'.
2017/09/19 12:34:51 CLSRSC-363: User ignored prerequisites during installation
2017/09/19 12:34:53 CLSRSC-595: Executing upgrade step 6 of 19: 'SaveParamFile'.
2017/09/19 12:34:58 CLSRSC-595: Executing upgrade step 7 of 19: 'SetupOSD'.
2017/09/19 12:35:01 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.

ASM configuration upgraded in local node successfully.

2017/09/19 12:35:09 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2017/09/19 12:35:41 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2017/09/19 12:35:46 CLSRSC-595: Executing upgrade step 9 of 19: 'CheckCRSConfig'.
2017/09/19 12:35:47 CLSRSC-595: Executing upgrade step 10 of 19: 'UpgradeOLR'.
2017/09/19 12:35:50 CLSRSC-595: Executing upgrade step 11 of 19: 'ConfigCHMOS'.
2017/09/19 12:35:50 CLSRSC-595: Executing upgrade step 12 of 19: 'InstallAFD'.
2017/09/19 12:35:52 CLSRSC-595: Executing upgrade step 13 of 19: 'createOHASD'.
2017/09/19 12:35:53 CLSRSC-595: Executing upgrade step 14 of 19: 'ConfigOHASD'.
2017/09/19 12:36:09 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
2017/09/19 12:36:28 CLSRSC-595: Executing upgrade step 15 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel12c2'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel12c2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/09/19 12:37:05 CLSRSC-595: Executing upgrade step 16 of 19: 'InstallKA'.
2017/09/19 12:37:07 CLSRSC-595: Executing upgrade step 17 of 19: 'UpgradeCluster'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel12c2'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel12c2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.evmd' on 'rhel12c2'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rhel12c2'
CRS-2676: Start of 'ora.evmd' on 'rhel12c2' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rhel12c2'
CRS-2676: Start of 'ora.gpnpd' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rhel12c2'
CRS-2676: Start of 'ora.gipcd' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rhel12c2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rhel12c2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rhel12c2'
CRS-2676: Start of 'ora.diskmon' on 'rhel12c2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rhel12c2'
CRS-2672: Attempting to start 'ora.ctssd' on 'rhel12c2'
CRS-2676: Start of 'ora.ctssd' on 'rhel12c2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rhel12c2'
CRS-2676: Start of 'ora.asm' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rhel12c2'
CRS-2676: Start of 'ora.storage' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rhel12c2'
CRS-2676: Start of 'ora.crf' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rhel12c2'
CRS-2676: Start of 'ora.crsd' on 'rhel12c2' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: rhel12c2
CRS-2673: Attempting to stop 'ora.rhel12c2.vip' on 'rhel12c1'
CRS-2672: Attempting to start 'ora.ons' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rhel12c1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rhel12c1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rhel12c1'
CRS-2677: Stop of 'ora.rhel12c2.vip' on 'rhel12c1' succeeded
CRS-2672: Attempting to start 'ora.rhel12c2.vip' on 'rhel12c2'
CRS-2677: Stop of 'ora.scan1.vip' on 'rhel12c1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rhel12c2'
CRS-2676: Start of 'ora.rhel12c2.vip' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'rhel12c2'
CRS-2676: Start of 'ora.scan1.vip' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rhel12c2'
CRS-2676: Start of 'ora.ons' on 'rhel12c2' succeeded
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'rhel12c2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rhel12c2' succeeded
CRS-6016: Resource auto-start has completed for server rhel12c2
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/09/19 12:38:50 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2017/09/19 12:39:08 CLSRSC-595: Executing upgrade step 18 of 19: 'UpgradeNode'.
Start upgrade invoked..
2017/09/19 12:39:21 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2017/09/19 12:39:21 CLSRSC-482: Running command: '/opt/app/12.2.0/grid/bin/crsctl set crs activeversion'
Started to upgrade the active version of Oracle Clusterware. This operation may take a few minutes.
Started to upgrade CSS.
CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade CRS.
CRS was successfully upgraded.
Successfully upgraded the active version of Oracle Clusterware.
Oracle Clusterware active version was successfully set to 12.2.0.1.0.
2017/09/19 12:40:31 CLSRSC-479: Successfully set Oracle Clusterware active version
2017/09/19 12:40:31 CLSRSC-476: Finishing upgrade of resource types
2017/09/19 12:40:57 CLSRSC-482: Running command: 'srvctl upgrade model -s 12.1.0.2.0 -d 12.2.0.1.0 -p last'
2017/09/19 12:40:57 CLSRSC-477: Successfully completed upgrade of resource types
2017/09/19 12:41:41 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.
2017/09/19 12:41:55 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
6. At this stage the cluster version is upgraded to 12.2
$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.2.0.1.0]
$ crsctl query crs softwareversion -all
Oracle Clusterware version on node [rhel12c1] is [12.2.0.1.0]
Oracle Clusterware version on node [rhel12c2] is [12.2.0.1.0]

$ crsctl get cluster mode status
Cluster is running in "flex" mode
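Since the upgrade has also switched the cluster to flex mode, the node roles can be rechecked at this point. A quick check (commands only; the output depends on the cluster configuration):
$ crsctl get node role config -all
$ crsctl get node role status -all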
7. After the root scripts have run, continue with the OUI for the GIMR configuration.
The old MGMTDB is dropped.
A new MGMTDB is created in the new location.
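To confirm where the new management database has been placed after the OUI step, its status and configuration can be checked with srvctl (a quick check; the output shows the node currently hosting the MGMTDB, and the spfile location indicates the disk group in use):
$ srvctl status mgmtdb
$ srvctl config mgmtdb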
8. At the end of the upgrade the cluvfy run will fail. This failure can be ignored and the upgrade completes with a warning. The reason for the failure appears to be the SCAN name change introduced in 12.2. More on this is given below.
Looking at the upgrade log, the cluvfy failure can be identified as shown below.
INFO:  [Sep 19, 2017 1:22:59 PM] Read: Verifying Single Client Access Name (SCAN) ...FAILED
INFO:  [Sep 19, 2017 1:22:59 PM] Verifying Single Client Access Name (SCAN) ...FAILED
INFO:  [Sep 19, 2017 1:22:59 PM] Read:   Verifying DNS/NIS name service
INFO:  [Sep 19, 2017 1:22:59 PM]   Verifying DNS/NIS name service
INFO:  [Sep 19, 2017 1:22:59 PM] Read:   'prod-cluster-scan.prod-cluster.rac.domain.net' ...FAILED
INFO:  [Sep 19, 2017 1:22:59 PM]   'prod-cluster-scan.prod-cluster.rac.domain.net' ...FAILED
INFO:  [Sep 19, 2017 1:22:59 PM] Read:   PRVG-1101 : SCAN name "prod-cluster-scan.prod-cluster.rac.domain.net" failed
INFO:  [Sep 19, 2017 1:22:59 PM]   PRVG-1101 : SCAN name "prod-cluster-scan.prod-cluster.rac.domain.net" failed
INFO:  [Sep 19, 2017 1:22:59 PM] Read:   to resolve
INFO:  [Sep 19, 2017 1:22:59 PM]   to resolve
INFO:  [Sep 19, 2017 1:22:59 PM] Read:
INFO:  [Sep 19, 2017 1:22:59 PM] Read: Verifying GNS Integrity ...WARNING
INFO:  [Sep 19, 2017 1:22:59 PM] Verifying GNS Integrity ...WARNING
INFO:  [Sep 19, 2017 1:22:59 PM] Read:   Verifying name resolution for GNS sub domain qualified names ...WARNING
INFO:  [Sep 19, 2017 1:22:59 PM]   Verifying name resolution for GNS sub domain qualified names ...WARNING
INFO:  [Sep 19, 2017 1:22:59 PM] Read:   PRVF-5218 : Domain name "rhel12c1-vip.prod-cluster.rac.domain.net" did not
INFO:  [Sep 19, 2017 1:22:59 PM]   PRVF-5218 : Domain name "rhel12c1-vip.prod-cluster.rac.domain.net" did not
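To reproduce just this check outside the installer session, the SCAN component verification can be run on its own with cluvfy:
cluvfy comp scan -verbose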
In 12.2 the cluster name is added to the SCAN name and used as a prefix to the GNS domain. It could be that mDNS is not fully aware of this change, so some name resolutions fail while others succeed. For example, below it can be seen that resolution of the VIP name with the cluster name added (the 12.2 format) fails for node 1 but succeeds for node 2.
[root@rhel12c2 ~]# nslookup rhel12c1-vip.prod-cluster.rac.domain.net
Server:         192.168.0.85
Address:        192.168.0.85#53

Non-authoritative answer:
*** Can't find rhel12c1-vip.prod-cluster.rac.domain.net: No answer

[root@rhel12c2 ~]# nslookup rhel12c2-vip.prod-cluster.rac.domain.net
Server:         192.168.0.85
Address:        192.168.0.85#53

Non-authoritative answer:
Name:   rhel12c2-vip.prod-cluster.rac.domain.net
Address: 192.168.0.89
If the old VIP name is used (without the change introduced in 12.2) then it succeeds for all nodes.
[grid@rhel12c1 GridSetupActions2017-09-19_12-03-23PM]$ nslookup rhel12c1-vip.rac.domain.net
Server:         192.168.0.85
Address:        192.168.0.85#53

Non-authoritative answer:
Name:   rhel12c1-vip.rac.domain.net
Address: 192.168.0.90

[grid@rhel12c1 GridSetupActions2017-09-19_12-03-23PM]$ nslookup rhel12c2-vip.rac.domain.net
Server:         192.168.0.85
Address:        192.168.0.85#53

Non-authoritative answer:
Name:   rhel12c2-vip.rac.domain.net
Address: 192.168.0.89
The same can be seen for SCAN name resolution as well: it works with the old SCAN name but fails with the new 12.2 format.
[root@rhel12c1 grid]# nslookup prod-cluster-scan.rac.domain.net
Server:         192.168.0.85
Address:        192.168.0.85#53

Non-authoritative answer:
Name:   prod-cluster-scan.rac.domain.net
Address: 192.168.0.88
Name:   prod-cluster-scan.rac.domain.net
Address: 192.168.0.91
Name:   prod-cluster-scan.rac.domain.net
Address: 192.168.0.98

[root@rhel12c2 ~]# nslookup prod-cluster-scan.prod-cluster.rac.domain.net
Server:         192.168.0.85
Address:        192.168.0.85#53

Non-authoritative answer:
*** Can't find prod-cluster-scan.prod-cluster.rac.domain.net: No answer
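When the answers from the intermediate DNS server (192.168.0.85) are inconsistent, it can be useful to query the GNS VIP (192.168.0.87, as configured later in this post) directly and bypass the sub-domain delegation; a quick check using dig:
dig @192.168.0.87 prod-cluster-scan.prod-cluster.rac.domain.net +short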
The SCAN configuration shows the new SCAN name, but an OCR dump doesn't have any reference to this new name. The OCR only contains references to the old SCAN name.
srvctl config scan
SCAN name: prod-cluster-scan.prod-cluster.rac.domain.net, Network: 1
Subnet IPv4: 192.168.0.0/255.255.255.0/eth0, dhcp
Subnet IPv6:
SCAN 1 IPv4 VIP: -/scan1-vip/192.168.0.98
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: -/scan2-vip/192.168.0.88
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 3 IPv4 VIP: -/scan3-vip/192.168.0.91
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:

# more OCRDUMPFILE | grep prod-cluster-scan.prod-cluster.rac.domain.net <-- nothing returned
# more OCRDUMPFILE | grep prod-cluster-scan.rac.domain.net (output has been shortened)
ORATEXT : ... =prod-cluster-scan.rac.domain.net~11=60~63=ora.hub.category~86=*~66=0~53=hard(ora.net1.network) weak(global:ora.gns) ...
ORATEXT : ... =prod-cluster-scan.rac.domain.net~11=60~63=ora.hub.category~86=*~66=0~53=hard(ora.net1.network) weak(global:ora.gns) ...
ORATEXT : ... =prod-cluster-scan.rac.domain.net~11=60~63=ora.hub.category~86=*~66=0~53=hard(ora.net1.network) weak(global:ora.gns) ...
ORATEXT : ... =prod-cluster-scan.rac.domain.net~11=60~63=ora.hub.category~86=*~66=0~53=hard(ora.net1.network) weak(global:ora.gns) ...
ORATEXT : ... =prod-cluster-scan.rac.domain.net~11=60~63=ora.hub.category~86=*~66=0~53=hard(ora.net1.network) weak(global:ora.gns) ...
ORATEXT : ... =prod-cluster-scan.rac.domain.net~11=60~63=ora.hub.category~86=*~66=0~53=hard(ora.net1.network) weak(global:ora.gns) ...
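The OCRDUMPFILE grepped above can be generated with the ocrdump utility, run as root from the new grid home; a minimal example assuming /tmp/ocrdump.txt as the output file:
# /opt/app/12.2.0/grid/bin/ocrdump /tmp/ocrdump.txt
# grep prod-cluster-scan /tmp/ocrdump.txt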
At times, when the name resolution worked, it resolved to more than 3 IPs (the SCAN was configured with only 3 IPs before the upgrade).
[root@rhel12c2 ~]#  nslookup prod-cluster-scan.prod-cluster.rac.domain.net
Server:         192.168.0.85
Address:        192.168.0.85#53

Non-authoritative answer:
Name:   prod-cluster-scan.prod-cluster.rac.domain.net
Address: 192.168.0.89
Name:   prod-cluster-scan.prod-cluster.rac.domain.net
Address: 192.168.0.90
Name:   prod-cluster-scan.prod-cluster.rac.domain.net
Address: 192.168.0.92
Name:   prod-cluster-scan.prod-cluster.rac.domain.net
Address: 192.168.0.95
Name:   prod-cluster-scan.prod-cluster.rac.domain.net
Address: 192.168.0.97
The additional IPs were visible in the GNS configuration as well.
srvctl config gns -list
...
prod-cluster.Oracle-GNS-ZM SRV Target: Oracle-GNS-ZM Protocol: tcp Port: 41492 Weight: 0 Priority: 0 Flags: 0x315
prod-cluster-scan.prod-cluster A 192.168.0.86 Unique Flags: 0x81 <- new scan name
prod-cluster-scan.prod-cluster A 192.168.0.91 Unique Flags: 0x81  <- new scan name
prod-cluster-scan.prod-cluster A 192.168.0.92 Unique Flags: 0x81  <- new scan name additional IP
prod-cluster-scan.prod-cluster A 192.168.0.95 Unique Flags: 0x81 <- new scan name additional IP 
prod-cluster-scan.prod-cluster A 192.168.0.98 Unique Flags: 0x81  <- new scan name
prod-cluster-scan1-vip.prod-cluster A 192.168.0.98 Unique Flags: 0x81
prod-cluster-scan2-vip.prod-cluster A 192.168.0.86 Unique Flags: 0x81
prod-cluster-scan3-vip.prod-cluster A 192.168.0.91 Unique Flags: 0x81
rhel12c1-vip.prod-cluster A 192.168.0.88 Unique Flags: 0x81
rhel12c2-vip.prod-cluster A 192.168.0.97 Unique Flags: 0x81
prod-cluster-scan A 192.168.0.86 Unique Flags: 0x81 <- old scan name
prod-cluster-scan A 192.168.0.91 Unique Flags: 0x81 <- old scan name
prod-cluster-scan A 192.168.0.98 Unique Flags: 0x81 <- old scan name
prod-cluster-scan1-vip A 192.168.0.98 Unique Flags: 0x81
prod-cluster-scan2-vip A 192.168.0.86 Unique Flags: 0x81
prod-cluster-scan3-vip A 192.168.0.91 Unique Flags: 0x81
rhel12c1-vip A 192.168.0.88 Unique Flags: 0x81
rhel12c2-vip A 192.168.0.97 Unique Flags: 0x81
These additional IPs were not assigned to anything; as a result, the GNS verification fails.
cluvfy comp gns -postcrsinst -verbose

Verifying GNS Integrity ...
  Verifying subdomain is a valid name ...PASSED
  Verifying GNS VIP belongs to the public network ...PASSED
  Verifying GNS VIP is a valid address ...PASSED
  Verifying name resolution for GNS sub domain qualified names ...FAILED (PRVF-5216, PRKN-1035)
  Verifying GNS resource ...
  Node          Running?                  Enabled?
  ------------  ------------------------  ------------------------
  rhel12c1      no                        yes
  rhel12c2      yes                       yes

  Verifying GNS resource ...PASSED
  Verifying GNS VIP resource ...
  Node          Running?                  Enabled?
  ------------  ------------------------  ------------------------
  rhel12c1      no                        yes
  rhel12c2      yes                       yes

  Verifying GNS VIP resource ...PASSED
Verifying GNS Integrity ...FAILED (PRVF-5216, PRKN-1035)

Verification of GNS integrity was unsuccessful.
Checks did not pass for the following nodes:
        rhel12c1


Failures were encountered during execution of CVU verification request "GNS integrity".

Verifying GNS Integrity ...FAILED
  Verifying name resolution for GNS sub domain qualified names ...FAILED
  PRVF-5216 : The following GNS resolved IP addresses for
  "prod-cluster-scan.prod-cluster.rac.domain.net" are not reachable:
  "192.168.0.92,192.168.0.95"
  PRKN-1035 : Host "192.168.0.92" is unreachable
  PRKN-1035 : Host "192.168.0.95" is unreachable
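The unreachable addresses reported by cluvfy can be cross-checked from any of the nodes with a simple ping of the extra addresses listed above:
ping -c 2 192.168.0.92
ping -c 2 192.168.0.95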
The SCAN name resolution issue was observed in multiple upgrade attempts. There was no exact fix for this. At times, a restart of the SCAN made the resolution work without any issue.
srvctl stop scan_listener
srvctl stop scan
srvctl start scan
srvctl start scan_listener
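Whether the restart helped can be confirmed by checking the SCAN resources and repeating the lookup:
srvctl status scan
srvctl status scan_listener
nslookup prod-cluster-scan.prod-cluster.rac.domain.net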
At other times GNS had to be removed and added again.
# srvctl stop gns
# srvctl remove gns
Remove GNS? (y/[n]) y
# srvctl add gns -vip 192.168.0.87 -domain rac.domain.net
# srvctl start gns
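After re-adding GNS, its state can be verified before re-testing name resolution:
# srvctl status gns
# srvctl config gns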
Once fixed, the name resolution worked as expected.
[root@rhel12c2 ~]# nslookup prod-cluster-scan.prod-cluster.rac.domain.net
Server:         192.168.0.85
Address:        192.168.0.85#53

Non-authoritative answer:
Name:   prod-cluster-scan.prod-cluster.rac.domain.net
Address: 192.168.0.89
Name:   prod-cluster-scan.prod-cluster.rac.domain.net
Address: 192.168.0.90
Name:   prod-cluster-scan.prod-cluster.rac.domain.net
Address: 192.168.0.92

[grid@rhel12c1 ~]$ nslookup rhel12c1-vip.prod-cluster.rac.domain.net
Server:         192.168.0.85
Address:        192.168.0.85#53

Non-authoritative answer:
Name:   rhel12c1-vip.prod-cluster.rac.domain.net
Address: 192.168.0.88

[grid@rhel12c1 ~]$ nslookup rhel12c2-vip.prod-cluster.rac.domain.net
Server:         192.168.0.85
Address:        192.168.0.85#53

Non-authoritative answer:
Name:   rhel12c2-vip.prod-cluster.rac.domain.net
Address: 192.168.0.97
GNS verification also succeeds.
cluvfy comp gns -postcrsinst -verbose

Verifying GNS Integrity ...
  Verifying subdomain is a valid name ...PASSED
  Verifying GNS VIP belongs to the public network ...PASSED
  Verifying GNS VIP is a valid address ...PASSED
  Verifying name resolution for GNS sub domain qualified names ...PASSED
  Verifying GNS resource ...
  Node          Running?                  Enabled?
  ------------  ------------------------  ------------------------
  rhel12c1      yes                       yes
  rhel12c2      no                        yes

  Verifying GNS resource ...PASSED
  Verifying GNS VIP resource ...
  Node          Running?                  Enabled?
  ------------  ------------------------  ------------------------
  rhel12c1      yes                       yes
  rhel12c2      no                        yes

  Verifying GNS VIP resource ...PASSED
Verifying GNS Integrity ...PASSED

Verification of GNS integrity was successful.
9. Once the above issue is fixed, run the post crsinst stage of cluvfy to verify the cluster setup.
cluvfy stage -post crsinst -n all


Verifying Node Connectivity ...
  Verifying Hosts File ...PASSED
  Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
  Verifying subnet mask consistency for subnet "192.168.1.0" ...PASSED
  Verifying subnet mask consistency for subnet "192.168.0.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast check ...PASSED
Verifying ASM filter driver configuration consistency ...PASSED
Verifying Time zone consistency ...PASSED
Verifying Cluster Manager Integrity ...PASSED
Verifying User Mask ...PASSED
Verifying Cluster Integrity ...PASSED
Verifying OCR Integrity ...PASSED
Verifying CRS Integrity ...
  Verifying Clusterware Version Consistency ...PASSED
Verifying CRS Integrity ...PASSED
Verifying Node Application Existence ...PASSED
Verifying Single Client Access Name (SCAN) ...
  Verifying DNS/NIS name service 'prod-cluster-scan.prod-cluster.rac.domain.net' ...
    Verifying Name Service Switch Configuration File Integrity ...PASSED
  Verifying DNS/NIS name service 'prod-cluster-scan.prod-cluster.rac.domain.net' ...PASSED
Verifying Single Client Access Name (SCAN) ...PASSED
Verifying OLR Integrity ...PASSED
Verifying GNS Integrity ...
  Verifying subdomain is a valid name ...PASSED
  Verifying GNS VIP belongs to the public network ...PASSED
  Verifying GNS VIP is a valid address ...PASSED
  Verifying name resolution for GNS sub domain qualified names ...PASSED
  Verifying GNS resource ...PASSED
  Verifying GNS VIP resource ...PASSED
Verifying GNS Integrity ...PASSED
Verifying Voting Disk ...PASSED
Verifying ASM Integrity ...
  Verifying Node Connectivity ...
    Verifying Hosts File ...PASSED
    Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
    Verifying subnet mask consistency for subnet "192.168.1.0" ...PASSED
    Verifying subnet mask consistency for subnet "192.168.0.0" ...PASSED
  Verifying Node Connectivity ...PASSED
Verifying ASM Integrity ...PASSED
Verifying Device Checks for ASM ...
  Verifying Access Control List check ...PASSED
Verifying Device Checks for ASM ...PASSED
Verifying ASM disk group free space ...PASSED
Verifying I/O scheduler ...
  Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying I/O scheduler ...PASSED
Verifying User Not In Group "root": grid ...PASSED
Verifying Clock Synchronization ...PASSED
Verifying Network configuration consistency checks ...PASSED
Verifying File system mount options for path GI_HOME ...PASSED

Post-check for cluster services setup was successful.
Related Posts
Upgrading RAC from 11.2.0.4 to 12.2.0.1 - Grid Infrastructure
Upgrading RAC from 11.2.0.4 to 12.1.0.2 - Grid Infrastructure
Upgrading Grid Infrastructure Used for Single Instance from 11.2.0.4 to 12.1.0.2
Upgrading RAC from 12.1.0.1 to 12.1.0.2 - Grid Infrastructure
Upgrading 12c CDB and PDB from 12.1.0.1 to 12.1.0.2
Upgrading from 11gR2 (11.2.0.3) to 12c (12.1.0.1) Grid Infrastructure
Upgrade Oracle Database 12c1 from 12.1.0.1 to 12.1.0.2
Upgrading RAC from 11.2.0.3 to 11.2.0.4 - Grid Infrastructure
Upgrading from 10.2.0.4 to 10.2.0.5 (Clusterware, RAC, ASM)
Upgrade from 10.2.0.5 to 11.2.0.3 (Clusterware, RAC, ASM)
Upgrade from 11.1.0.7 to 11.2.0.3 (Clusterware, ASM & RAC)
Upgrade from 11.1.0.7 to 11.2.0.4 (Clusterware, ASM & RAC)
Upgrading from 11.1.0.7 to 11.2.0.3 with Transient Logical Standby
Upgrading from 11.2.0.1 to 11.2.0.3 with in-place upgrade for RAC
In-place upgrade from 11.2.0.2 to 11.2.0.3
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 1
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 2
Upgrading from 11gR2 (11.2.0.3) to 12c (12.1.0.1) Grid Infrastructure