crsctl modify resource "ora.FLASH.dg" -attr "AUTO_START=always" crsctl modify resource "ora.DATA.dg" -attr "AUTO_START=always" crsctl modify resource ora.racse11g1.db -attr "ACTION_SCRIPT=/opt/app/11.2.0/grid/bin/racgwrap" crsctl modify resource ora.racse11g1.db -attr "AUTO_START=always"
1. Verify that the clusterware upgrade is clean and the active version is 11.2.0.4.
[oracle@rac2 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rac2] is [11.2.0.4.0]

[oracle@rac2 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.4.0]

[oracle@rac2 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [11.2.0.4.0]. The cluster upgrade state is [NORMAL].

2. Identify the "OCR-node". As mentioned in the earlier post, the OCR-node is the node where the backup of the lower-version OCR was taken during the upgrade. In this setup that was node "rac1".
[oracle@rac1 cdata]$ cd /opt/app/11.2.0/grid/cdata/
[oracle@rac1 cdata]$ ls -l
total 3132
drwxrwxr-x 2 oracle oinstall      4096 Mar  5 09:29 cg_11g_cluster
drwxr-xr-x 2 oracle oinstall      4096 Mar  4 16:47 localhost
-rw------- 1 root   root         88875 Mar  4 17:11 ocr11.1.0.7.0
drwxr-xr-x 2 oracle oinstall      4096 Mar  4 17:11 rac1
-rw------- 1 root   oinstall 272756736 Mar  5 09:31 rac1.olr
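If it is not immediately clear which node holds the pre-upgrade OCR backup, each node's cdata directory can be checked from one place. Below is a minimal sketch assuming passwordless ssh between the nodes and the grid home path used in this setup; the ocr* file name pattern is based on the listing above.

for h in rac1 rac2; do
    echo "== $h =="
    ssh $h 'ls -l /opt/app/11.2.0/grid/cdata/ocr* 2>/dev/null'
done

The node that lists an ocr<old version> file is the OCR-node.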
3. MOS note 1364946.1 says to run rootcrs.pl with the downgrade option on all but the OCR-node (i.e. do not run it on the OCR-node). But this results in an error:

[root@rac2 oracle]# /opt/app/11.2.0/grid/crs/install/rootcrs.pl -downgrade -force
Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
One or more options required but missing: -oldcrshome -version

It could be that the MOS note has not been updated to reflect the changes in 11.2.0.4. To run the command, specify the old CRS home and the old CRS version in five-number format. Since rac1 is the OCR-node, this command is run on the other remaining node, rac2:
[root@rac2 oracle]# /opt/app/11.2.0/grid/crs/install/rootcrs.pl -downgrade -force -oldcrshome /opt/crs/oracle/product/11.1.0/crs -version 11.1.0.7.0
Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac2'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac2' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.racse11g1.racse11g2.inst' on 'rac2'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac2'
CRS-2677: Stop of 'ora.cvu' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.rac2.vip' on 'rac2'
CRS-2676: Start of 'ora.cvu' on 'rac1' succeeded
CRS-2677: Stop of 'ora.rac2.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.rac2.vip' on 'rac1'
CRS-2676: Start of 'ora.rac2.vip' on 'rac1' succeeded
CRS-2677: Stop of 'ora.racse11g1.racse11g2.inst' on 'rac2' succeeded
CRS-2677: Stop of 'ora.FLASH.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'rac1'
CRS-2676: Start of 'ora.oc4j' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac2'
CRS-2677: Stop of 'ora.ons' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac2'
CRS-2677: Stop of 'ora.net1.network' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully downgraded Oracle clusterware stack on this node
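Before moving on to the OCR-node it is worth confirming that the 11.2 stack is fully down on rac2. A simple check could be (a sketch, not from the original run):

ps -ef | egrep 'crsd|ocssd|evmd' | grep -v grep    # should return nothing on rac2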
4. On the OCR-node run the same command with the addition of the -lastnode option.

[root@rac1 oracle]# /opt/app/11.2.0/grid/crs/install/rootcrs.pl -downgrade -force -lastnode -oldcrshome /opt/crs/oracle/product/11.1.0/crs -version 11.1.0.7.0
Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac1'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac1' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.rac2.vip' on 'rac1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac1'
CRS-2673: Attempting to stop 'ora.racse11g1.db' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.racse11g1.racse11g1.inst' on 'rac1'
CRS-2677: Stop of 'ora.cvu' on 'rac1' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded
CRS-2677: Stop of 'ora.rac2.vip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'
CRS-2677: Stop of 'ora.racse11g1.db' on 'rac1' succeeded
CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'rac1' succeeded
CRS-2677: Stop of 'ora.racse11g1.racse11g1.inst' on 'rac1' succeeded
CRS-2677: Stop of 'ora.FLASH.dg' on 'rac1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.oc4j' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac1'
CRS-2677: Stop of 'ora.ons' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac1'
CRS-2677: Stop of 'ora.net1.network' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully downgraded OCR to 11.1.0.7.0
Run root.sh from the old crshome on all the cluster nodes one at a time to start the Clusterware
Successful deletion of voting disk /dev/sdh1.
Now formatting voting disk: /dev/sdh1.
Successful addition of voting disk /dev/sdh1.
Successful deletion of voting disk /dev/sdf1.
Now formatting voting disk: /dev/sdf1.
Successful addition of voting disk /dev/sdf1.
Successful deletion of voting disk /dev/sdg1.
Now formatting voting disk: /dev/sdg1.
Successful addition of voting disk /dev/sdg1.

5. Do not run root.sh just yet. Edit the oratab to reflect the Oracle home the ASM instance runs out of. Since ASM ran out of a separate home before the upgrade (not the same Oracle home the database ran out of), this entry is added to the oratab on both nodes.
On rac1
+ASM1:/opt/app/oracle/product/11.1.0/asm_1:N

On rac2
+ASM2:/opt/app/oracle/product/11.1.0/asm_1:N
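A quick sanity check that the entries are in place on both nodes (a sketch; assumes the oratab lives at /etc/oratab, as is usual on Linux):

grep '^+ASM' /etc/oratab    # run on rac1 and rac2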
6. Clear the gpnp profile directories:

rm -rf /opt/app/11.2.0/grid/gpnp/*
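If a safety copy of the profiles is wanted, the directory could be archived before running the rm above, for example (a sketch; the archive location is arbitrary):

tar -czf /tmp/gpnp_backup.tar.gz /opt/app/11.2.0/grid/gpnp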
7. Make sure any changes subsequently made to the cluster are reflected in the root* scripts. In this cluster an OCR mirror was added after the cluster was created (i.e. after the original execution of root.sh). As such, the OCR mirror location is missing from the root script but present in the ocr.loc file:

cat /etc/oracle/ocr.loc
ocrconfig_loc=/dev/sdb1
ocrmirrorconfig_loc=/dev/sde1 <-- added later
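Such a mismatch can be spotted before running root.sh by comparing the two files (a sketch; the rootconfig path is the one used later in this setup):

grep config_loc /etc/oracle/ocr.loc
grep CRS_OCR_LOCATIONS /opt/crs/oracle/product/11.1.0/crs/install/rootconfig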
Running root.sh without correcting this resulted in the following error:

[root@rac1 crs]# /opt/crs/oracle/product/11.1.0/crs/root.sh
Checking to see if Oracle CRS stack is already configured
Current Oracle Cluster Registry mirror location '/dev/sde1' in '/etc/oracle/ocr.loc' and '' does not match
Update either '/etc/oracle/ocr.loc' to use '' or variable CRS_OCR_LOCATIONS in rootconfig.sh with '/dev/sde1' then rerun rootconfig.sh

To fix this, edit the (11.1) $CRS_HOME/install/rootconfig file and add the OCR mirror location:
CRS_OCR_LOCATIONS=/dev/sdb1,/dev/sde1

After the change, root.sh runs without any issue.
[root@rac1 crs]# /opt/crs/oracle/product/11.1.0/crs/root.sh
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node
node 1: rac1 rac1-pvt rac1
node 2: rac2 rac2-pvt rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
 rac1
Cluster Synchronization Services is inactive on these nodes.
 rac2
Local node checking complete. Run root.sh on remaining nodes to start CRS daemons.

At the end of this execution the ASM and database instances will be up and running on this node. Run root.sh on the second node:
[root@rac2 crs]# /opt/crs/oracle/product/11.1.0/crs/root.sh
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node
node 1: rac1 rac1-pvt rac1
node 2: rac2 rac2-pvt rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
 rac1
 rac2
Cluster Synchronization Services is active on all the nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)

At the end of this script execution it was found that the ASM instance was not up and running. Trying to start it manually resulted in the following error:
[oracle@rac2 bin]$ srvctl start asm -n rac2
PRKS-1009 : Failed to start ASM instance "+ASM2" on node "rac2",
[PRKS-1009 : Failed to start ASM instance "+ASM2" on node "rac2",
[rac2:ora.rac2.ASM2.asm:
rac2:ora.rac2.ASM2.asm:SQL*Plus: Release 11.1.0.7.0 - Production on Thu Mar 5 12:16:33 2015
...
rac2:ora.rac2.ASM2.asm:SQL> ORA-00304: requested INSTANCE_NUMBER is busy
...
CRS-0215: Could not start resource 'ora.rac2.ASM2.asm'.]]
[PRKS-1009 : Failed to start ASM instance "+ASM2" on node "rac2",
[rac2:ora.rac2.ASM2.asm:
rac2:ora.rac2.ASM2.asm:SQL*Plus: Release 11.1.0.7.0 - Production on Thu Mar 5 12:16:33 2015
...
rac2:ora.rac2.ASM2.asm:SQL> ORA-00304: requested INSTANCE_NUMBER is busy
rac2:ora.rac2.ASM2.asm:SQL> Disconnected
rac2:ora.rac2.ASM2.asm:
CRS-0215: Could not start resource 'ora.rac2.ASM2.asm'.]]

This is because the ASM spfile was changed during the upgrade and stayed that way even after the downgrade. Below is a pfile created from the spfile before the upgrade (11.1.0.7):
+ASM1.__oracle_base='/opt/app/oracle'#ORACLE_BASE set from environment
+ASM2.__oracle_base='/opt/app/oracle'#ORACLE_BASE set from environment
+ASM1.asm_diskgroups='DATA','FLASH'
+ASM2.asm_diskgroups='DATA','FLASH'
*.cluster_database=true
*.diagnostic_dest='/opt/app/oracle'
+ASM2.instance_number=2
+ASM1.instance_number=1
*.instance_type='asm'
*.large_pool_size=12M
*.asm_diskstring='ORCL:*'

Below is a pfile created after the upgrade (the pfile created after the downgrade had the same content):
+ASM1.__oracle_base='/opt/app/oracle'#ORACLE_BASE set from environment
+ASM2.__oracle_base='/opt/app/oracle'#ORACLE_BASE set from environment
*.asm_diskgroups='DATA','FLASH'
*.asm_diskstring='ORCL:*'
*.asm_power_limit=1
*.diagnostic_dest='/opt/app/oracle'
*.instance_type='asm'
*.large_pool_size=16777216
*.memory_target=1627389952
*.remote_login_passwordfile='EXCLUSIVE'

Comparing the two pfiles, it can be seen that the upgrade of ASM dropped the instance_number and cluster_database entries. As a result, only one instance could be started after the downgrade. To fix this, shut down the database and ASM instances on the node where ASM is running, start the ASM instance in nomount mode with the pfile created before the upgrade (11.1.0.7), and recreate the spfile:
SQL> create spfile='/dev/sdb3' from pfile='/home/oracle/asmpfile.ora';
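For completeness, the surrounding steps could look like the following. This is a sketch using the ASM home, SID, spfile device and pfile location from this setup; /home/oracle/asmpfile.ora is the pfile created earlier from the pre-upgrade spfile.

export ORACLE_HOME=/opt/app/oracle/product/11.1.0/asm_1
export ORACLE_SID=+ASM1
$ORACLE_HOME/bin/sqlplus / as sysasm
SQL> startup nomount pfile='/home/oracle/asmpfile.ora';
SQL> create spfile='/dev/sdb3' from pfile='/home/oracle/asmpfile.ora';
SQL> shutdown immediate;

Once the spfile is recreated, ASM can be restarted on each node with srvctl.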
8. After this the ASM and database instances could be started on all nodes. However, the listener fails to start on both nodes (rac1 and rac2), and trying to start it manually results in the following error:
[oracle@rac1 admin]$ srvctl start listener -n rac1
rac1:ora.rac1.LISTENER_RAC1.lsnr:TNSLSNR for Linux: Version 11.1.0.7.0 - Production
rac1:ora.rac1.LISTENER_RAC1.lsnr:System parameter file is /opt/app/oracle/product/11.1.0/asm_1/network/admin/listener.ora
rac1:ora.rac1.LISTENER_RAC1.lsnr:Log messages written to /opt/app/oracle/diag/tnslsnr/rac1/listener_rac1/alert/log.xml
rac1:ora.rac1.LISTENER_RAC1.lsnr:TNS-01151: Missing listener name, LISTENER_RAC1, in LISTENER.ORA
rac1:ora.rac1.LISTENER_RAC1.lsnr:Listener failed to start. See the error message(s) above...
rac1:ora.rac1.LISTENER_RAC1.lsnr:Connecting to (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.85)(PORT=1521)))
rac1:ora.rac1.LISTENER_RAC1.lsnr:TNS-12535: TNS:operation timed out
rac1:ora.rac1.LISTENER_RAC1.lsnr: TNS-12560: TNS:protocol adapter error
rac1:ora.rac1.LISTENER_RAC1.lsnr: TNS-00505: Operation timed out
rac1:ora.rac1.LISTENER_RAC1.lsnr: Linux Error: 110: Connection timed out
CRS-1006: No more members to consider
CRS-0215: Could not start resource 'ora.rac1.LISTENER_RAC1.lsnr'

The reason is that changes made to listener.ora during the upgrade are not rolled back during the downgrade. When upgraded to 11.2.0.4 the listener resource is named "ora.LISTENER.lsnr", whereas on 11.1 the listeners have node-specific names, "ora.rac1.LISTENER_RAC1.lsnr" and "ora.rac2.LISTENER_RAC2.lsnr". The listener.ora file created during the downgrade is missing these node-specific listeners. Add the node-specific listener entry to the listener.ora file in the ASM home (only the rac1 entry is shown below; a similar entry with the correct VIP and IP names must be added on rac2 as well):
LISTENER_RAC1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCPS)(HOST = rac1-vip)(PORT = 1523)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.85)(PORT = 1521)(IP = FIRST))
    )
  )

After this it is possible to start the listener, and all resources come online:
[oracle@rac1 admin]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2
ora....11g1.db application    ONLINE    ONLINE    rac2
ora....g1.inst application    ONLINE    ONLINE    rac1
ora....g2.inst application    ONLINE    ONLINE    rac2

9. There is no need to change the action script entry, as it would already have been changed to refer to the 11.1 CRS home:
crs_stat -p
NAME=ora.racse11g1.db
TYPE=application
ACTION_SCRIPT=/opt/crs/oracle/product/11.1.0/crs/bin/racgwrap

10. Update the inventory information, setting CRS=true for the 11.1 CRS home.
$CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/crs/oracle/product/11.1.0/crs CRS=true
$CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/app/11.2.0/grid CRS=false

After these commands are run, check the inventory.xml to see that the 11.1 home has CRS="true":
<HOME NAME="clusterware_11g" LOC="/opt/crs/oracle/product/11.1.0/crs" TYPE="O" IDX="1" CRS="true"> <NODE_LIST> <NODE NAME="rac1"/> <NODE NAME="rac2"/> </NODE_LIST> </HOME> <HOME NAME="Ora11g_gridinfrahome1" LOC="/opt/app/11.2.0/grid" TYPE="O" IDX="4"> <-- 11.2 home <NODE_LIST> <NODE NAME="rac1"/> <NODE NAME="rac2"/> </NODE_LIST> </HOME>11. Check the crs version information
11. Check the CRS version information:

[oracle@rac1 crs]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.1.0.7.0]

[oracle@rac1 crs]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rac1] is [11.1.0.7.0]

[oracle@rac1 admin]$ crsctl query crs releaseversion
11.1.0.7.0

12. Check the OCR integrity and manually back up the OCR:
[root@rac1 admin]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     296940
         Used space (kbytes)      :       3916
         Available space (kbytes) :     293024
         ID                       : 1749862721
         Device/File Name         : /dev/sdb1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/sde1
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

[root@rac1 admin]# ocrconfig -manualbackup
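The backup just taken can be listed to confirm it exists (a sketch; the exact output format varies by version):

ocrconfig -showbackup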
13. Back up the voting disks using dd:

crsctl query css votedisk
 0.     0    /dev/sdh1
 1.     0    /dev/sdf1
 2.     0    /dev/sdg1
Located 3 voting disk(s).

dd if=/dev/sdh1 of=/home/oracle/votediskbackup bs=8192
34134+1 records in
34134+1 records out
279627264 bytes (280 MB) copied, 0.69875 seconds, 400 MB/s
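The same dd backup can be repeated for the remaining voting disks in a small loop (a sketch; the backup file names are arbitrary):

for vd in /dev/sdh1 /dev/sdf1 /dev/sdg1; do
    dd if=$vd of=/home/oracle/votediskbackup_$(basename $vd) bs=8192
done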
14. As the last step, detach the 11.2.0.4 GI home from the inventory and remove it manually:

./runInstaller -detachHome ORACLE_HOME=/opt/app/11.2.0/grid -silent
rm -rf /opt/app/11.2.0/grid # run on all nodes
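To confirm the 11.2 home is no longer registered, the inventory can be checked once more (a sketch; same assumed inventory location as earlier):

grep '/opt/app/11.2.0/grid' /opt/app/oraInventory/ContentsXML/inventory.xml    # should return nothing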
Related posts

Downgrade Grid Infrastructure from 12.1.0.2 to 11.2.0.4
Downgrade Grid Infrastructure from 11.2.0.4 to 11.2.0.3
Useful metalink notes
How to Downgrade 11.2.0.2 Grid Infrastructure Cluster to 11.2.0.1 [1364230.1]
How to Downgrade 11.2.0.3 Grid Infrastructure Cluster to Lower 11.2 GI or Pre-11.2 CRS [1364946.1]
How to Update Inventory to Set/Unset "CRS=true" Flag for Oracle Clusterware Home [1053393.1]
Oracle Clusterware (GI or CRS) Related Abbreviations, Acronyms and Procedures [1374275.1]