Tuesday, March 17, 2015

Upgrading Single Instance on ASM from 11.2.0.3 to 11.2.0.4

This post lists points to look out for when upgrading an 11.2.0.3 single instance DB on ASM (with role separation) to 11.2.0.4. It is not a comprehensive upgrade guide; refer to the Oracle documentation for more information.
1. Before installing the new 11.2.0.4 GI, check that inventory.xml has CRS="true" against the existing GI home. In some cases CRS="true" is missing, usually when GI was installed as software only and later converted to HAS.
<HOME NAME="Ora11g_gridinfrahome1" LOC="/opt/app/oracle/product/11.2.0/grid_1" TYPE="O" IDX="1"/>
If there is no CRS="true" against the GI home, the new version's installer fails to detect the existing clusterware. In that case update the inventory information by running the following command, specifying the existing GI home and CRS=true:
./runInstaller -updateNodeList ORACLE_HOME=/opt/app/oracle/product/11.2.0/grid_1 CRS=true

<HOME NAME="Ora11g_gridinfrahome1" LOC="/opt/app/oracle/product/11.2.0/grid_1" TYPE="O" IDX="1" CRS="true"/>
Afterwards run the 11.2.0.4 installer and the existing GI will be detected. For more information on this issue refer to MOS notes 1953932.1 and 1117063.1.
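A quick way to check is to grep the central inventory for the GI home entry. A minimal sketch, assuming the central inventory is under /opt/app/oraInventory (the actual location is given by inventory_loc in /etc/oraInst.loc):
# look for the GI home entry in the central inventory (inventory path is an assumption)
grep grid_1 /opt/app/oraInventory/ContentsXML/inventory.xml
# if the returned HOME line lacks CRS="true", run the updateNodeList command shown above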

2. Before the upgrade the HAS shows 11.2.0.3 for both the release and software versions.
$ crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.3.0]
$  crsctl query has softwareversion
Oracle High Availability Services version on the local node is [11.2.0.3.0]
GI upgrades are out of place, i.e. the new version is installed in a different location from the existing GI home (grid_4 in this case).
3. Run the rootupgrade.sh when prompted. This will upgrade the ASM instance.
# /opt/app/oracle/product/11.2.0/grid_4/rootupgrade.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /opt/app/oracle/product/11.2.0/grid_4

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/oracle/product/11.2.0/grid_4/crs/install/crsconfig_params
Creating trace directory

ASM Configuration upgraded successfully.

Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node rhel6m1 successfully pinned.
Replacing Clusterware entries in upstart
Replacing Clusterware entries in upstart

rhel6m1     2015/03/12 12:41:21     /opt/app/oracle/product/11.2.0/grid_4/cdata/rhel6m1/backup_20150312_124121.olr

rhel6m1     2015/03/11 17:57:18     /opt/app/oracle/product/11.2.0/grid_1/cdata/rhel6m1/backup_20150311_175718.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
4. Afterwards the software version reflects the new GI version but the release version remains the same.
$  crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.3.0]
$  crsctl query has softwareversion
Oracle High Availability Services version on the local node is [11.2.0.4.0]
5. The ASM instance and listener now run out of the new GI home.
$ srvctl config asm
ASM home: /opt/app/oracle/product/11.2.0/grid_4
ASM listener: LISTENER
Spfile: +DATA/asm/asmparameterfile/registry.253.874087179
ASM diskgroup discovery string: /dev/sd*

$ srvctl config listener
Name: LISTENER
Home: /opt/app/oracle/product/11.2.0/grid_4
End points: TCP:1521


6. Before installing the database software check that $ORACLE_BASE/cfgtoollogs is writable by the oracle user. As this is a setup with role separation, the directory must be group writable by the oinstall group so that the oracle user can create directories and files inside cfgtoollogs.
chmod 770 $ORACLE_BASE/cfgtoollogs
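A minimal check, run as the oracle user with ORACLE_BASE set, using a throwaway test file:
# if the touch fails, apply the chmod shown above
touch $ORACLE_BASE/cfgtoollogs/.write_test && rm -f $ORACLE_BASE/cfgtoollogs/.write_test && echo "cfgtoollogs is writable"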
7. Install the database software as an out of place upgrade. After the database software is installed and before running DBUA, make sure the oracle binary in the new Oracle home has the permissions required for a role separated setup. The group ownership should be asmadmin (or the corresponding ASM admin group used), but after the software install it remains oinstall.
[oracle@rhel6m1 bin]$ ls -l oracle*
-rwsr-s--x. 1 oracle oinstall 226889744 Mar 12 13:29 oracle
-rwxr-x---. 1 oracle oinstall         0 Aug 24  2013 oracleO
Change this to the correct ownership with setasmgidwrap.
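A hedged example of the setasmgidwrap call, run as the grid user from the new GI home (the database home path below is an assumption; substitute the actual new Oracle home):
# run as the grid user; o= points to the oracle binary of the new database home (path illustrative)
/opt/app/oracle/product/11.2.0/grid_4/bin/setasmgidwrap o=/opt/app/oracle/product/11.2.0/dbhome_4/bin/oracle
Afterwards the group ownership shows the ASM admin group: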
[oracle@rhel6m1 bin]$ ls -l oracle*
-rwsr-s--x. 1 oracle asmadmin 226889744 Mar 12 13:29 oracle
-rwxr-x---. 1 oracle oinstall         0 Aug 24  2013 oracleO
8. Once the oracle binary permissions are set, run DBUA to upgrade the database. Verify the post-upgrade status with cluvfy and orachk.
cluvfy stage -post hacfg

orachk -u -o post
9. Finally, if the upgrade is satisfactory, increase the compatible parameter on the database and the ASM diskgroup compatibility attributes to the new 11.2.0.4 version. These changes cannot be rolled back. On the database
alter system set compatible='11.2.0.4.0' scope=spfile sid='*';
On ASM instance
alter diskgroup data set attribute 'compatible.asm'='11.2.0.4';
alter diskgroup flash set attribute 'compatible.asm'='11.2.0.4';
alter diskgroup data set attribute 'compatible.rdbms'='11.2.0.4';
alter diskgroup flash set attribute 'compatible.rdbms'='11.2.0.4';
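To confirm the new attribute values, a query such as the following can be run against the ASM instance. This is only a sketch, run as the grid user with the GI home environment set:
sqlplus -S / as sysasm <<'EOF'
select dg.name diskgroup, a.name attribute, a.value
from v$asm_diskgroup dg, v$asm_attribute a
where a.group_number = dg.group_number
and a.name like 'compatible.%';
EOF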
The HAS now shows 11.2.0.4 for both the software and release versions.
$  crsctl query has softwareversion
Oracle High Availability Services version on the local node is [11.2.0.4.0]
$  crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.4.0]
This concludes the upgrade of a single instance DB on ASM (with role separation) from 11.2.0.3 to 11.2.0.4.

There was an incident where, after the GI was upgraded, the ASM instance didn't start and the following error was shown.
srvctl start asm
PRCR-1079 : Failed to start resource ora.asm
CRS-5017: The resource action "ora.asm start" encountered the following error:
ORA-00119: invalid specification for system parameter LOCAL_LISTENER
ORA-00132: syntax error or unresolved network name 'LISTENER_+ASM'
. For details refer to "(:CLSN00107:)" in "/opt/app/oracle/product/11.2.0/grid_4/log/rhel6m1/agent/ohasd/oraagent_grid/oraagent_grid.log".
This standalone system was created from a GI home first installed as software only, and it also had the spfile in the local file system (GI_HOME/dbs). It is not certain whether these contributed to the error being thrown. Looking at the ASM spfile, it could be seen that a local_listener entry had been added during the upgrade.
  large_pool_size          = 12M
  instance_type            = "asm"
  remote_login_passwordfile= "EXCLUSIVE"
  local_listener           = "LISTENER_+ASM"
  asm_diskstring           = "/dev/sd*"
  asm_diskgroups           = "FLASH"
  asm_diskgroups           = "DATA"
  asm_power_limit          = 1
  diagnostic_dest          = "/opt/app/oracle"
The listener.ora has no such entry, and moreover on 11.2 the local listener entries are auto-updated. Once an ASM pfile was created without the local_listener entry, the ASM instance started. A new spfile then had to be created and registered with the ASM configuration.
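A hedged outline of the workaround (the temporary pfile path and the default spfile location under the GI home are assumptions):
export ORACLE_SID=+ASM
export ORACLE_HOME=/opt/app/oracle/product/11.2.0/grid_4
# dump the current spfile to a pfile, then remove the local_listener line from the pfile
sqlplus -S / as sysasm <<'EOF'
create pfile='/tmp/init+ASM.ora' from spfile;
EOF
# after editing the pfile, start ASM with it, rebuild the spfile and re-register it
sqlplus -S / as sysasm <<'EOF'
startup pfile='/tmp/init+ASM.ora';
create spfile from pfile='/tmp/init+ASM.ora';
shutdown immediate;
EOF
srvctl modify asm -p $ORACLE_HOME/dbs/spfile+ASM.ora
srvctl start asm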

Useful metalink notes
Oracle Restart: GI Upgrade From 12.1.0.1 to 12.1.0.2 Fails With INS-40406 [ID 1953932.1]
Oracle Restart ASM 11gR2: INS-40406 Upgrading ASM Instance To Release 11.2.0.1.0 [ID 1117063.1]

Related Posts
Upgrading RAC from 11.2.0.3 to 11.2.0.4 - Grid Infrastructure
Upgrading RAC from 11.2.0.3 to 11.2.0.4 - Database

Sunday, March 8, 2015

Downgrade Grid Infrastructure from 11.2.0.4 to 11.1.0.7

This post shows the steps for downgrading clusterware from 11.2.0.4 to 11.1.0.7. The original upgrade from 11.1.0.7 to 11.2.0.4 was successful; only the clusterware was upgraded and the database remains on 11.1.0.7. If there were nodes that failed the upgrade, follow the MOS notes for the additional steps needed in that situation. There is a similar post for downgrading from 11.2.0.4 to 11.2.0.3. However, this downgrade goes from 11.2 to a pre-11.2 version and has some additional steps and pitfalls to look out for. The cluster is a two node cluster and has a separate home for the ASM instance.
The upgraded system had the following resource attributes changed:
crsctl modify resource "ora.FLASH.dg" -attr "AUTO_START=always"
crsctl modify resource "ora.DATA.dg" -attr "AUTO_START=always"
crsctl modify resource ora.racse11g1.db -attr "ACTION_SCRIPT=/opt/app/11.2.0/grid/bin/racgwrap"
crsctl modify resource ora.racse11g1.db -attr "AUTO_START=always"

1. Verify the clusterware upgrade is clean and the active version is 11.2.0.4.
[oracle@rac2 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rac2] is [11.2.0.4.0]

[oracle@rac2 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.4.0]

[oracle@rac2 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [11.2.0.4.0]. The cluster upgrade state is [NORMAL].
2. Identify the "OCR-node". As mentioned in the earlier post the OCR-node is the node where the backup of the lower version OCR was taken during the upgrade. In this setup this was done on node "rac1".
[oracle@rac1 cdata]$ cd /opt/app/11.2.0/grid/cdata/
[oracle@rac1 cdata]$ ls -l
total 3132
drwxrwxr-x 2 oracle oinstall      4096 Mar  5 09:29 cg_11g_cluster
drwxr-xr-x 2 oracle oinstall      4096 Mar  4 16:47 localhost
-rw------- 1 root   root         88875 Mar  4 17:11 ocr11.1.0.7.0
drwxr-xr-x 2 oracle oinstall      4096 Mar  4 17:11 rac1
-rw------- 1 root   oinstall 272756736 Mar  5 09:31 rac1.olr
3. MOS note 1364946.1 says to run rootcrs.pl with the downgrade option on all nodes but the OCR-node (i.e. don't run this on the OCR-node). However, this results in an error.
[root@rac2 oracle]# /opt/app/11.2.0/grid/crs/install/rootcrs.pl -downgrade -force
Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
One or more options required but missing: -oldcrshome -version
It could be that the MOS note has not been updated to reflect the changes in 11.2.0.4. To run the command, specify the old CRS home and the old CRS version in five-digit format. Since rac1 is the OCR-node, this command is run on the remaining node, rac2.
[root@rac2 oracle]# /opt/app/11.2.0/grid/crs/install/rootcrs.pl -downgrade -force -oldcrshome /opt/crs/oracle/product/11.1.0/crs -version 11.1.0.7.0
Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac2'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac2' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.racse11g1.racse11g2.inst' on 'rac2'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac2'
CRS-2677: Stop of 'ora.cvu' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.rac2.vip' on 'rac2'
CRS-2676: Start of 'ora.cvu' on 'rac1' succeeded
CRS-2677: Stop of 'ora.rac2.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.rac2.vip' on 'rac1'
CRS-2676: Start of 'ora.rac2.vip' on 'rac1' succeeded
CRS-2677: Stop of 'ora.racse11g1.racse11g2.inst' on 'rac2' succeeded
CRS-2677: Stop of 'ora.FLASH.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'rac1'
CRS-2676: Start of 'ora.oc4j' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac2'
CRS-2677: Stop of 'ora.ons' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac2'
CRS-2677: Stop of 'ora.net1.network' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully downgraded Oracle clusterware stack on this node
4. On the OCR-node run the same command with the addition of the lastnode option.
[root@rac1 oracle]# /opt/app/11.2.0/grid/crs/install/rootcrs.pl -downgrade -force -lastnode -oldcrshome /opt/crs/oracle/product/11.1.0/crs -version 11.1.0.7.0
Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac1'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac1' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.rac2.vip' on 'rac1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac1'
CRS-2673: Attempting to stop 'ora.racse11g1.db' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.racse11g1.racse11g1.inst' on 'rac1'
CRS-2677: Stop of 'ora.cvu' on 'rac1' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded
CRS-2677: Stop of 'ora.rac2.vip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'
CRS-2677: Stop of 'ora.racse11g1.db' on 'rac1' succeeded
CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'rac1' succeeded
CRS-2677: Stop of 'ora.racse11g1.racse11g1.inst' on 'rac1' succeeded
CRS-2677: Stop of 'ora.FLASH.dg' on 'rac1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.oc4j' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac1'
CRS-2677: Stop of 'ora.ons' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac1'
CRS-2677: Stop of 'ora.net1.network' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully downgraded OCR to 11.1.0.7.0
Run root.sh from the old crshome on all the cluster nodes one at a time to start the Clusterware
Successful deletion of voting disk /dev/sdh1.
Now formatting voting disk: /dev/sdh1.
Successful addition of voting disk /dev/sdh1.
Successful deletion of voting disk /dev/sdf1.
Now formatting voting disk: /dev/sdf1.
Successful addition of voting disk /dev/sdf1.
Successful deletion of voting disk /dev/sdg1.
Now formatting voting disk: /dev/sdg1.
Successful addition of voting disk /dev/sdg1.
5. Do not run root.sh just yet. Edit the oratab to reflect the Oracle home the ASM instance will run out of. Since before the upgrade ASM ran out of a separate home (not the same Oracle home the DB ran out of), this is added to the oratab on both nodes as shown below.
On rac1
+ASM1:/opt/app/oracle/product/11.1.0/asm_1:N
On rac2
+ASM2:/opt/app/oracle/product/11.1.0/asm_1:N
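For example, the entries can be appended as follows, running the relevant line on each node as the Oracle software owner:
# on rac1
echo "+ASM1:/opt/app/oracle/product/11.1.0/asm_1:N" >> /etc/oratab
# on rac2
echo "+ASM2:/opt/app/oracle/product/11.1.0/asm_1:N" >> /etc/oratab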
6. Clear the gpnp profile directories
rm -rf /opt/app/11.2.0/grid/gpnp/*
7. Make sure any changes made to the cluster after its creation are reflected in the root* scripts. In this cluster an OCR mirror was added after the cluster was created (i.e. after the original execution of root.sh). As such this OCR mirror location is missing from the root script but present in the ocr.loc file.
cat /etc/oracle/ocr.loc
ocrconfig_loc=/dev/sdb1
ocrmirrorconfig_loc=/dev/sde1 <-- added later
Running root.sh without correcting this resulted in the following error.
[root@rac1 crs]# /opt/crs/oracle/product/11.1.0/crs/root.sh
Checking to see if Oracle CRS stack is already configured

Current Oracle Cluster Registry mirror location '/dev/sde1' in '/etc/oracle/ocr.loc' and '' does not match
Update either '/etc/oracle/ocr.loc' to use '' or variable CRS_OCR_LOCATIONS in rootconfig.sh with '/dev/sde1' then rerun rootconfig.sh
To fix this, edit the (11.1) $CRS_HOME/install/rootconfig script and add the OCR mirror location.
CRS_OCR_LOCATIONS=/dev/sdb1,/dev/sde1
After the change the root.sh runs without any issue.
[root@rac1 crs]# /opt/crs/oracle/product/11.1.0/crs/root.sh
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :   
node 1: rac1 rac1-pvt rac1
node 2: rac2 rac2-pvt rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
        rac1
Cluster Synchronization Services is inactive on these nodes.
        rac2
Local node checking complete. Run root.sh on remaining nodes to start CRS daemons.
At the end of this execution the ASM and database instances will be up and running on this node. Run root.sh on the second node.
[root@rac2 crs]# /opt/crs/oracle/product/11.1.0/crs/root.sh
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :   
node 1: rac1 rac1-pvt rac1
node 2: rac2 rac2-pvt rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
        rac1
        rac2
Cluster Synchronization Services is active on all the nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
At the end of this script execution it was found that the ASM instance was not up and running. Trying to start it manually resulted in the following error.
[oracle@rac2 bin]$ srvctl start asm -n rac2
PRKS-1009 : Failed to start ASM instance "+ASM2" on node "rac2", [PRKS-1009 : Failed to start ASM instance "+ASM2" on node "rac2", [rac2:ora.rac2.ASM2.asm:
rac2:ora.rac2.ASM2.asm:SQL*Plus: Release 11.1.0.7.0 - Production on Thu Mar 5 12:16:33 2015
...
rac2:ora.rac2.ASM2.asm:SQL> ORA-00304: requested INSTANCE_NUMBER is busy
...
CRS-0215: Could not start resource 'ora.rac2.ASM2.asm'.]]
[PRKS-1009 : Failed to start ASM instance "+ASM2" on node "rac2", [rac2:ora.rac2.ASM2.asm:
rac2:ora.rac2.ASM2.asm:SQL*Plus: Release 11.1.0.7.0 - Production on Thu Mar 5 12:16:33 2015
...
rac2:ora.rac2.ASM2.asm:SQL> ORA-00304: requested INSTANCE_NUMBER is busy
rac2:ora.rac2.ASM2.asm:SQL> Disconnected
rac2:ora.rac2.ASM2.asm:
CRS-0215: Could not start resource 'ora.rac2.ASM2.asm'.]]
This is because the ASM spfile was changed during the upgrade and remained changed after the downgrade. The following is a pfile created from the spfile before the upgrade (11.1.0.7).
+ASM1.__oracle_base='/opt/app/oracle'#ORACLE_BASE set from environment
+ASM2.__oracle_base='/opt/app/oracle'#ORACLE_BASE set from environment
+ASM1.asm_diskgroups='DATA','FLASH'
+ASM2.asm_diskgroups='DATA','FLASH'
*.cluster_database=true
*.diagnostic_dest='/opt/app/oracle'
+ASM2.instance_number=2
+ASM1.instance_number=1
*.instance_type='asm'
*.large_pool_size=12M
*.asm_diskstring='ORCL:*'
Below is the pfile created after the upgrade (the pfile created after the downgrade had the same content).
+ASM1.__oracle_base='/opt/app/oracle'#ORACLE_BASE set from environment
+ASM2.__oracle_base='/opt/app/oracle'#ORACLE_BASE set from environment
*.asm_diskgroups='DATA','FLASH'
*.asm_diskstring='ORCL:*'
*.asm_power_limit=1
*.diagnostic_dest='/opt/app/oracle'
*.instance_type='asm'
*.large_pool_size=16777216
*.memory_target=1627389952
*.remote_login_passwordfile='EXCLUSIVE'
Comparing the pfile entries, it can be seen that after the ASM upgrade the instance_number and cluster_database entries are lost. As a result, after the downgrade only one ASM instance could be started. To fix this, shut down the database and ASM instances on the node where ASM is running. Start the ASM instance in nomount mode with the (11.1.0.7) pfile created before the upgrade and recreate the spfile.
SQL> create spfile='/dev/sdb3' from pfile='/home/oracle/asmpfile.ora';
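A hedged outline of the complete sequence on the node where ASM was running (rac1 in this case; instance names and file paths are taken from this setup, adjust as needed):
# stop the database instance running on this node
srvctl stop instance -d racse11g1 -i racse11g1
export ORACLE_SID=+ASM1
export ORACLE_HOME=/opt/app/oracle/product/11.1.0/asm_1
# restart ASM in nomount with the pre-upgrade pfile and rebuild the spfile on the raw device
$ORACLE_HOME/bin/sqlplus -S / as sysdba <<'EOF'
shutdown immediate;
startup nomount pfile='/home/oracle/asmpfile.ora';
create spfile='/dev/sdb3' from pfile='/home/oracle/asmpfile.ora';
shutdown immediate;
EOF
# start ASM and the database back through the clusterware
srvctl start asm -n rac1
srvctl start asm -n rac2
srvctl start database -d racse11g1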


8. After this the ASM and DB instances could be started on all nodes. However, the listener fails to start on both nodes (rac1 and rac2) and trying to start it manually results in the following error.
[oracle@rac1 admin]$ srvctl start listener -n rac1
rac1:ora.rac1.LISTENER_RAC1.lsnr:TNSLSNR for Linux: Version 11.1.0.7.0 - Production
rac1:ora.rac1.LISTENER_RAC1.lsnr:System parameter file is /opt/app/oracle/product/11.1.0/asm_1/network/admin/listener.ora
rac1:ora.rac1.LISTENER_RAC1.lsnr:Log messages written to /opt/app/oracle/diag/tnslsnr/rac1/listener_rac1/alert/log.xml
rac1:ora.rac1.LISTENER_RAC1.lsnr:TNS-01151: Missing listener name, LISTENER_RAC1, in LISTENER.ORA
rac1:ora.rac1.LISTENER_RAC1.lsnr:Listener failed to start. See the error message(s) above...
rac1:ora.rac1.LISTENER_RAC1.lsnr:Connecting to (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.85)(PORT=1521)))
rac1:ora.rac1.LISTENER_RAC1.lsnr:TNS-12535: TNS:operation timed out
rac1:ora.rac1.LISTENER_RAC1.lsnr: TNS-12560: TNS:protocol adapter error
rac1:ora.rac1.LISTENER_RAC1.lsnr:  TNS-00505: Operation timed out
rac1:ora.rac1.LISTENER_RAC1.lsnr:   Linux Error: 110: Connection timed out
CRS-1006: No more members to consider
CRS-0215: Could not start resource 'ora.rac1.LISTENER_RAC1.lsnr'
The reason for this is that changes made to listener.ora during the upgrade are not rolled back during the downgrade. When upgraded to 11.2.0.4 the listener resource is named "ora.LISTENER.lsnr". However, on 11.1 the listeners have node specific naming, "ora.rac1.LISTENER_RAC1.lsnr" and "ora.rac2.LISTENER_RAC2.lsnr". The listener.ora file created during the downgrade is missing this node specific listener. Add the node specific listener entry to the listener.ora file in the ASM_HOME (only the rac1 entry is shown below; a similar entry with the correct VIP and IP names must be added on rac2 as well).
LISTENER_RAC1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCPS)(HOST = rac1-vip)(PORT = 1523)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.85)(PORT = 1521)(IP = FIRST))
    )
  )
After this it is possible to start the listeners and all resources will be online.
[oracle@rac1 admin]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2
ora....11g1.db application    ONLINE    ONLINE    rac2
ora....g1.inst application    ONLINE    ONLINE    rac1
ora....g2.inst application    ONLINE    ONLINE    rac2
9. There is no need to change the action script entry as this will already have been changed to refer to the 11.1 CRS home.
crs_stat -p
NAME=ora.racse11g1.db
TYPE=application
ACTION_SCRIPT=/opt/crs/oracle/product/11.1.0/crs/bin/racgwrap
10. Update the inventory information, setting CRS=true for the 11.1 CRS home and CRS=false for the 11.2 GI home.
$CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/crs/oracle/product/11.1.0/crs CRS=true
$CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/app/11.2.0/grid CRS=false
After these commands are run, check the inventory.xml to verify that the 11.1 home has CRS="true".
<HOME NAME="clusterware_11g" LOC="/opt/crs/oracle/product/11.1.0/crs" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="rac1"/>
      <NODE NAME="rac2"/>
   </NODE_LIST>
</HOME>

<HOME NAME="Ora11g_gridinfrahome1" LOC="/opt/app/11.2.0/grid" TYPE="O" IDX="4"> <-- 11.2 home
   <NODE_LIST>
      <NODE NAME="rac1"/>
      <NODE NAME="rac2"/>
   </NODE_LIST>
</HOME>
11. Check the crs version information
[oracle@rac1 crs]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.1.0.7.0]

[oracle@rac1 crs]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rac1] is [11.1.0.7.0]

[oracle@rac1 admin]$ crsctl query crs releaseversion
11.1.0.7.0
12. Check the OCR integrity and manually back up the OCR.
[root@rac1 admin]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     296940
         Used space (kbytes)      :       3916
         Available space (kbytes) :     293024
         ID                       : 1749862721
         Device/File Name         :  /dev/sdb1
                                    Device/File integrity check succeeded
         Device/File Name         :  /dev/sde1
                                    Device/File integrity check succeeded

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@rac1 admin]# ocrconfig -manualbackup
13. Back up the voting disks using dd.
crsctl query css votedisk
 0.     0    /dev/sdh1
 1.     0    /dev/sdf1
 2.     0    /dev/sdg1
Located 3 voting disk(s).

dd if=/dev/sdh1 of=/home/oracle/votediskbackup bs=8192
34134+1 records in
34134+1 records out
279627264 bytes (280 MB) copied, 0.69875 seconds, 400 MB/s
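The same can be done for each voting disk listed above, for example with a small loop (backup file names are illustrative):
for vd in /dev/sdh1 /dev/sdf1 /dev/sdg1; do
    dd if=$vd of=/home/oracle/votedisk_$(basename $vd).bak bs=8192
done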
14. As the last step detach the 11.2.0.4 GI Home from the inventory and remove it manually
./runInstaller -detachHome ORACLE_HOME=/opt/app/11.2.0/grid -silent
rm -rf /opt/app/11.2.0/grid # run on all nodes
Related post
Downgrade Grid Infrastructure from 12.1.0.2 to 11.2.0.4
Downgrade Grid Infrastructure from 11.2.0.4 to 11.2.0.3

Useful metalink notes
How to Downgrade 11.2.0.2 Grid Infrastructure Cluster to 11.2.0.1 [ID 1364230.1]
How to Downgrade 11.2.0.3 Grid Infrastructure Cluster to Lower 11.2 GI or Pre-11.2 CRS [ID 1364946.1]
How to Update Inventory to Set/Unset "CRS=true" Flag for Oracle Clusterware Home [ID 1053393.1]
Oracle Clusterware (GI or CRS) Related Abbreviations, Acronyms and Procedures [ID 1374275.1]

Sunday, March 1, 2015

Restore RAC DB Backup as a Single Instance DB

At times a DBA may be required to restore a backup of a RAC DB as a single instance DB. It could be that the RAC DB is a production system and a copy of it is needed for development. This could also be achieved with RAC to single instance duplication. However, this post shows the steps for restoring a RAC DB on ASM to a single instance DB that uses a local file system.
1. Create a backup of the RAC DB including the control files. In this case the backups are created in the local file system.
RMAN> backup database format '/home/oracle/backup/bakp%U' plus archivelog format '/home/oracle/backup/arch%U' delete all input;
RMAN> backup current controlfile format '/home/oracle/backup/ctl%U';
2. Create a pfile from the RAC DB. The output below shows the RAC DB pfile with RAC specific and instance specific parameters.
*.audit_file_dest='/opt/app/oracle/admin/rac11g2/adump'
*.audit_trail='NONE'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='+DATA/rac11g2/controlfile/current.260.732796395','+FLASH/rac11g2/controlfile/current.256.732796395'#Restore Controlfile
*.db_32k_cache_size=67108864
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain='domain.net'
*.db_name='rac11g2'
*.db_recovery_file_dest='+FLASH'
*.db_recovery_file_dest_size=9437184000
*.diagnostic_dest='/opt/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=rac11g2XDB)'
rac11g21.instance_number=1
rac11g22.instance_number=2
*.java_jit_enabled=TRUE
*.log_archive_format='%t_%s_%r.dbf'
*.open_cursors=300
*.pga_aggregate_target=209715200
*.processes=150
*.remote_listener='rac-scan.domain.net:1521'
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=633339904
rac11g21.thread=1
rac11g22.thread=2
rac11g21.undo_tablespace='UNDOTBS1'
rac11g22.undo_tablespace='UNDOTBS2'
3. Edit the pfile by removing the RAC and instance specific parameters; in particular, cluster_database=true must be set to false. The output below gives the edited pfile. The new single instance DB will use OMF, and db_create_file_dest and db_recovery_file_dest have been changed to file system directories in place of the ASM diskgroups used by the RAC DB. Also all instance specific parameters have been removed.
more rac11g2pfile.ora
*.audit_file_dest='/opt/app/oracle/admin/rac11g2/adump'
*.audit_trail='NONE'
*.cluster_database=false
*.compatible='11.2.0.0.0'
*.db_32k_cache_size=67108864
*.db_block_size=8192
*.db_create_file_dest='/data/oradata'
*.db_domain='domain.net'
*.db_name='rac11g2'
*.db_recovery_file_dest='/data/flash_recovery'
*.db_recovery_file_dest_size=9437184000
*.diagnostic_dest='/opt/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=rac11g2XDB)'
*.java_jit_enabled=TRUE
*.log_archive_format='%t_%s_%r.dbf'
*.open_cursors=300
*.pga_aggregate_target=209715200
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=633339904
*.undo_tablespace='UNDOTBS1'
4. Copy the backup and the pfile to the new host where the single instance DB will be created.
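A hedged example of the copy (the target host name and the pfile location are illustrative):
scp /home/oracle/backup/* /home/oracle/rac11g2pfile.ora oracle@devhost:/home/oracle/backups/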

5. Create the audit dump directory
 mkdir -p /opt/app/oracle/admin/rac11g2/adump
Set the ORACLE_SID to the RAC DB name (not the instance SID; in this case the RAC DB is called rac11g2) and start the DB in nomount mode.
export ORACLE_SID=rac11g2
SQL> startup nomount pfile='rac11g2pfile.ora';
If desired, the spfile could also be created at this step and the instance restarted in nomount mode using the spfile instead of the pfile. With an spfile in use, the control file entries are added to it automatically when the control files are restored.
SQL> create spfile from pfile='/home/oracle/backups/rac11g2pfile.ora' ;
SQL> startup force nomount;
6. Use RMAN to restore the control file from the backup piece copied to the new host.
 rman target /

connected to target database: RAC11G2 (not mounted)

RMAN> restore controlfile from '/home/oracle/backups/ctl5upvj3oh_1_1';

Starting restore at 18-FEB-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=63 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/data/oradata/RAC11G2/controlfile/o1_mf_bg8ykm51_.ctl
output file name=/data/flash_recovery/RAC11G2/controlfile/o1_mf_bg8ykmdt_.ctl
Finished restore at 18-FEB-15
7. Mount the database and catalog the backups copied over earlier.
RMAN> alter database mount;

RMAN> catalog start with '/home/oracle/backups';

Starting implicit crosscheck backup at 18-FEB-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=63 device type=DISK
Crosschecked 20 objects
Finished implicit crosscheck backup at 18-FEB-15

Starting implicit crosscheck copy at 18-FEB-15
using channel ORA_DISK_1
Crosschecked 2 objects
Finished implicit crosscheck copy at 18-FEB-15

searching for all files in the recovery area
cataloging files...
no files cataloged

searching for all files that match the pattern /home/oracle/backups

List of Files Unknown to the Database
=====================================
File Name: /home/oracle/backups/arch5spvj3m6_1_1
File Name: /home/oracle/backups/rac11g2pfile.ora
File Name: /home/oracle/backups/ctl5upvj3oh_1_1
File Name: /home/oracle/backups/arch5npvj3f0_1_1
File Name: /home/oracle/backups/arch5opvj3h1_1_1
File Name: /home/oracle/backups/bakp5qpvj3k0_1_1
File Name: /home/oracle/backups/bakp5rpvj3m2_1_1
File Name: /home/oracle/backups/arch5ppvj3ij_1_1

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /home/oracle/backups/arch5spvj3m6_1_1
File Name: /home/oracle/backups/ctl5upvj3oh_1_1
File Name: /home/oracle/backups/arch5npvj3f0_1_1
File Name: /home/oracle/backups/arch5opvj3h1_1_1
File Name: /home/oracle/backups/bakp5qpvj3k0_1_1
File Name: /home/oracle/backups/bakp5rpvj3m2_1_1
File Name: /home/oracle/backups/arch5ppvj3ij_1_1


8. Restore the database from the backups, switch the datafiles to the new file locations and recover the database to the last archived log available in the backups. Since OMF is used, the newname for the database is set with "to new".
run {
set newname for database to new;
restore database;
switch datafile all;
recover database;
}
  
RMAN> run {
2> set newname for database to new;
3>  restore database;
4> switch datafile all;
5> recover database;
6> }

executing command: SET NEWNAME

Starting restore at 18-FEB-15
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /data/oradata/RAC11G2/datafile/o1_mf_system_%u_.dbf
channel ORA_DISK_1: restoring datafile 00002 to /data/oradata/RAC11G2/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_DISK_1: restoring datafile 00003 to /data/oradata/RAC11G2/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_DISK_1: restoring datafile 00004 to /data/oradata/RAC11G2/datafile/o1_mf_users_%u_.dbf
channel ORA_DISK_1: restoring datafile 00005 to /data/oradata/RAC11G2/datafile/o1_mf_undotbs2_%u_.dbf
channel ORA_DISK_1: restoring datafile 00006 to /data/oradata/RAC11G2/datafile/o1_mf_test_%u_.dbf
channel ORA_DISK_1: restoring datafile 00007 to /data/oradata/RAC11G2/datafile/o1_mf_test_%u_.dbf
channel ORA_DISK_1: restoring datafile 00009 to /data/oradata/RAC11G2/datafile/o1_mf_gravelso_%u_.dbf
channel ORA_DISK_1: restoring datafile 00010 to /data/oradata/RAC11G2/datafile/o1_mf_sbxindex_%u_.dbf
channel ORA_DISK_1: restoring datafile 00011 to /data/oradata/RAC11G2/datafile/o1_mf_sbxlobs_%u_.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/backups/bakp5qpvj3k0_1_1
channel ORA_DISK_1: piece handle=/home/oracle/backups/bakp5qpvj3k0_1_1 tag=TAG20150218T121600
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00008 to /data/oradata/RAC11G2/datafile/o1_mf_sbx32ktb_%u_.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/backups/bakp5rpvj3m2_1_1
channel ORA_DISK_1: piece handle=/home/oracle/backups/bakp5rpvj3m2_1_1 tag=TAG20150218T121600
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 18-FEB-15

datafile 1 switched to datafile copy
input datafile copy RECID=49 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_system_bg8z9rm9_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=50 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_sysaux_bg8z9rlo_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=51 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_undotbs1_bg8z9rnh_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=52 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_users_bg8z9rp1_.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=53 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_undotbs2_bg8z9ro9_.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=54 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_test_bg8z9rpp_.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=55 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_test_bg8z9rq9_.dbf
datafile 8 switched to datafile copy
input datafile copy RECID=56 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_sbx32ktb_bg8zbvny_.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=57 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_gravelso_bg8z9rr1_.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=58 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_sbxindex_bg8z9s7h_.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=59 STAMP=871991659 file name=/data/oradata/RAC11G2/datafile/o1_mf_sbxlobs_bg8z9s9k_.dbf

Starting recover at 18-FEB-15
using channel ORA_DISK_1

starting media recovery

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=216
channel ORA_DISK_1: restoring archived log
archived log thread=2 sequence=180
channel ORA_DISK_1: reading from backup piece /home/oracle/backups/arch5spvj3m6_1_1
channel ORA_DISK_1: piece handle=/home/oracle/backups/arch5spvj3m6_1_1 tag=TAG20150218T121709
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/data/flash_recovery/RAC11G2/archivelog/2015_02_18/o1_mf_1_216_bg8zccx3_.arc thread=1 sequence=216
archived log file name=/data/flash_recovery/RAC11G2/archivelog/2015_02_18/o1_mf_2_180_bg8zccy6_.arc thread=2 sequence=180
channel default: deleting archived log(s)
archived log file name=/data/flash_recovery/RAC11G2/archivelog/2015_02_18/o1_mf_2_180_bg8zccy6_.arc RECID=1606 STAMP=871991660
unable to find archived log
archived log thread=2 sequence=181
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 02/18/2015 11:54:21
RMAN-06054: media recovery requesting unknown archived log for thread 2 with sequence 181 and starting SCN of 50748909
9. The RMAN-06054 at the end of the recovery is expected, as all the archived logs available in the backups have been applied. Open the database with resetlogs.
RMAN> alter database open resetlogs;
It is normal at this stage to observe the following messages in the alert log.
Starting background process ASMB
Wed Feb 18 15:56:57 2015
ASMB started with pid=58, OS id=26760
WARNING: failed to start ASMB (connection failed) state=0x1 sid=''
WARNING: ASMB exiting with error
This is due to old references to files in ASM, but it should not affect the functioning of the database.

10. Create an spfile from memory (if one was not created at step 5) and restart the database using the spfile. After this the ASMB messages observed earlier should no longer occur.
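A minimal sketch of this step, run as the oracle user with ORACLE_SID=rac11g2:
sqlplus -S / as sysdba <<'EOF'
create spfile from memory;
shutdown immediate;
startup;
EOF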

11. Clean up the additional redo threads that came as part of the RAC DB. Since the RAC DB was a two-instance database, the new single instance DB will have information on both threads.
SQL> select THREAD#, STATUS, ENABLED from v$thread;

   THREAD# STATUS ENABLED
---------- ------ --------
         1 OPEN   PUBLIC
         2 CLOSED PUBLIC

SQL> select group# from v$log where THREAD#=2;

    GROUP#
----------
         3
         4

SQL> alter database disable thread 2;
Database altered.

SQL> alter database clear unarchived logfile group 3;
Database altered.

SQL>  alter database clear unarchived logfile group 4;
Database altered.

SQL> alter database drop logfile group 3;
Database altered.

SQL> alter database drop logfile group 4;
Database altered.

SQL>  select group# from v$log where THREAD#=2;
no rows selected
12. Drop the undo tablespaces that are not used by the single instance DB. In this case the RAC DB had two undo tablespaces, one of which was chosen as the undo tablespace for the single instance DB. The other undo tablespace is dropped.
SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTBS1

SQL>select tablespace_name from dba_tablespaces where contents='UNDO';

TABLESPACE_NAME
------------------------------
UNDOTBS1
UNDOTBS2

SQL>drop tablespace UNDOTBS2 including contents and datafiles;
Tablespace dropped.

SQL>select tablespace_name from dba_tablespaces where contents='UNDO';

TABLESPACE_NAME
------------------------------
UNDOTBS1
13. Since OMF was used, the temp file for the temporary tablespace was automatically created when the database was restored. If not, create a temp file and assign it to the temporary tablespace, or create a new default temporary tablespace (a hedged example follows the query output below).
SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';

PROPERTY_VALUE
------------------------------
TEMP

SQL>  select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
/data/oradata/RAC11G2/datafile/o1_mf_temp_bg8zfgxv_.tmp
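If no temp file exists one can be added as below; with db_create_file_dest set an OMF temp file is created (the size is illustrative):
sqlplus -S / as sysdba <<'EOF'
alter tablespace TEMP add tempfile size 1g autoextend on;
EOF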
14. Since the restored DB is non-RAC, the registry shows the RAC option as invalid. Run dbms_registry to remove the RAC option from the registry.
Select comp_name,status,version from dba_registry;

Oracle Real Application Clusters                             INVALID                                      11.2.0.3.0

SQL> exec dbms_registry.removed('RAC');

Oracle Real Application Clusters                             REMOVED                                      11.2.0.3.0
This concludes restoring a RAC DB backup as a single instance DB.

Useful metalink notes
RAC Option Invalid After Migration [ID 312071.1]
HowTo Restore RMAN Disk backups of RAC Database to Single Instance On Another Node [ID 415579.1]