Thursday, October 17, 2013

Patching 12c (12.1.0.1) RAC with October 2013 PSU

The first critical patch update for 12c was released on October 15, 2013. This post looks at the differences in patching a 12c RAC environment (with role separation) compared to an 11.2 environment. The environment used here is the one that was earlier upgraded from 11.2 to 12c.
The first thing to notice is the name of the patch. In the readme.html included with the patch it is referred to as "Oracle Grid Infrastructure System Patch" instead of "Oracle Grid Infrastructure Patch Set Update" (more jargon to contend with!). However, in the PSU and CPU availability document (1571391.1) it is still referred to as a PSU (GI 12.1.0.1.1 PSU Patch 17272829). "GI System Patch" is used throughout the readme.html document, so it's pretty safe to assume that's how the 12c patches are going to be referred to from now on.
The OPatch auto option has been merged into a single command called "opatchauto".
However it is still possible to apply the patch manually. But at the time of this post (16/10/2013) the document with instructions for manual patch apply/rollback (1591616.1) is not available on MOS even though the readme.html mentions it (shouldn't this be available before patches are released?). When it becomes available, follow it for manual patch application. In the meantime, as a workaround, the generateSteps option could be used to list the steps used by opatchauto:
/opt/app/12.1.0/grid/OPatch/opatchauto apply  /usr/local/patch/17272829  -ocmrf ocm.rsp  -generateSteps
OPatch 12.1.0.1.2 or later is needed to apply this patch. Installing the new OPatch in the GI_HOME causes the following error:
unzip p6880880_121010_Linux-x86-64.zip
  ..
  inflating: OPatch/operr
error:  cannot create PatchSearch.xml
        Permission denied
The PatchSearch.xml file is unzipped into GI_HOME outside the OPatch directory, and since GI_HOME has restrictive permissions, unzipping as the grid user causes the above error. The file could be copied into GI_HOME manually as the root user, or the error can simply be ignored (it caused no issue when installing the patch). Looking inside PatchSearch.xml, it seems this file might be used to fetch OPatch from MOS (it contains URLs for MOS and OPatch, including the CSI number). There is no such issue installing the new OPatch in the ORACLE_HOME.
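Before starting, the installed OPatch version can be compared against the 12.1.0.1.2 minimum. A minimal sketch using sort -V to compare dotted version strings (the version_ok helper and the example values are illustrative, not part of the patch tooling):

```shell
#!/bin/sh
# Succeeds if the installed version is >= the required version.
# Sorts the two dotted version strings and checks which one sorts last.
version_ok() {
  required="$1"; installed="$2"
  highest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | tail -1)
  [ "$highest" = "$installed" ]
}

# This patch needs OPatch 12.1.0.1.2 or later. In a real check the
# installed version would come from something like:
#   $ORACLE_HOME/OPatch/opatch version
if version_ok 12.1.0.1.2 12.1.0.1.2; then
  echo "OPatch version OK"
fi
```

The same comparison works for any dotted Oracle version string, so it can be reused for later PSU minimum-version requirements.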
The next issue is related to the patch location. The readme.html says to use "PATH_TO_PATCH_DIRECTORY" in the opatchauto command, where PATH_TO_PATCH_DIRECTORY is the location the patch was unzipped into. This is the same as in 11.2. However, this location is not recognized by the opatchauto command, which complains about a missing bundle.xml file.
[grid@rhel6m2 patches]$ pwd
/usr/local/patches  <<-- this becomes the PATH_TO_PATCH_DIRECTORY (same as 11.2 as shown here)
[grid@rhel6m2 patches]$ ls
p17027533_121010_Linux-x86-64.zip
[grid@rhel6m2 patches]$ unzip p17027533_121010_Linux-x86-64.zip
[grid@rhel6m2 patches]$ su <-- preparing to run opatchauto as root user
[root@rhel6m2 patches]# /opt/app/12.1.0/grid/OPatch/opatchauto apply /usr/local/patches -ocmrf ocm.rsp

Parameter Validation: Successful

Patch Collection failed: Invalid patch location "/usr/local/patches" as there is no bundle.xml file in it or its parent directory.

opatchauto failed with error code 2.
So unlike in 11.2, using the location where the patch was unzipped doesn't work. Give the full path to the patch directory instead:
[root@rhel6m2 patches]# /opt/app/12.1.0/grid/OPatch/opatchauto apply /usr/local/patches/17272829 -ocmrf ocm.rsp

OPatchauto version : 12.1.0.1.2
OUI version        : 12.1.0.1.0
Running from       : /opt/app/12.1.0/grid

opatchauto log file: /opt/app/12.1.0/grid/cfgtoollogs/opatchauto/17272829/opatch_gi_2013-10-16_15-39-34_deploy.log

Parameter Validation: Successful
...
Patch apply in progress...
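To avoid guessing at the right directory, the location opatchauto expects can be found by searching for bundle.xml under the unzip location. A small sketch (the find_patch_dir helper is hypothetical; the path shown in the usage comment is from this environment):

```shell
#!/bin/sh
# opatchauto expects the directory that contains bundle.xml, not the
# directory the zip was extracted into. This helper prints that directory.
find_patch_dir() {
  bundle=$(find "$1" -maxdepth 2 -name bundle.xml 2>/dev/null | head -1)
  [ -n "$bundle" ] || return 1
  dirname "$bundle"
}

# Illustrative usage for this environment:
#   find_patch_dir /usr/local/patches   # -> /usr/local/patches/17272829
```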
Also worth noting is that the apply keyword must be given along with opatchauto; without it a syntax error occurs:
[root@rhel6m1 patches]# /opt/app/12.1.0/grid/OPatch/opatchauto /usr/local/patches/17272829 -ocmrf ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

Syntax Error... Unrecognized Command or Option (/usr/local/patches/17272829): 1st argument must be one of the following:
   apply
   rollback
   version
   ..
Section 2.3 of the readme.html does mention the apply keyword in the commands, but in the section 2.4 patch installation section the apply keyword is missing. This is another difference compared to 11.2, where there was no apply keyword when the opatch auto option was used. The rollback commands in section 2.7 are also incorrectly listed; the correct rollback commands are in section 2.3.
The readme.html for the GI system patch doesn't list any post-installation tasks such as loading the modified SQL. This is run automatically as part of the patch apply. Once the patch is applied on the last node of the RAC, the registry history is updated:
SQL> select * from dba_registry_history;

ACTION_TIME                    ACTION     NAMESPACE  VERSION            ID BUNDLE_SER COMMENTS
------------------------------ ---------- ---------- ---------- ---------- ---------- ------------------------------
12-AUG-13 04.28.26.378432 PM   UPGRADE    SERVER     12.1.0.1.0                       Upgraded from 11.2.0.3.0
12-AUG-13 04.34.09.496894 PM   APPLY      SERVER     12.1.0.1            0 PSU        Patchset 12.1.0.0.0
16-OCT-13 04.05.54.514261 PM   APPLY      SERVER     12.1.0.1            1 PSU        PSU 12.1.0.1.1
The SQL apply is logged in the dba_registry_sqlpatch view:
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> select * from dba_registry_sqlpatch;

  PATCH_ID ACTION     STATUS          ACTION_TIME                    DESCRIPTIO LOGFILE
---------- ---------- --------------- ------------------------------ ---------- --------------------------------------------------------------------------------
  17027533 APPLY      SUCCESS         16-OCT-13 05.54.42.295071 PM   sqlpatch   /opt/app/oracle/product/12.1.0/dbhome_1/sqlpatch/17027533/17027533_apply_CDB12C_
                                                                                CDBROOT_2013Oct16_17_51_30.log
Each PDB will also have its own log file entry in the dba_registry_sqlpatch view
SQL> alter session set container=pdb12c;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
PDB12C

SQL> select * from dba_registry_sqlpatch;

  PATCH_ID ACTION     STATUS          ACTION_TIME                    DESCRIPTIO LOGFILE
---------- ---------- --------------- ------------------------------ ---------- --------------------------------------------------------------------------------
  17027533 APPLY      END             16-OCT-13 05.54.44.488402 PM   sqlpatch   /opt/app/oracle/product/12.1.0/dbhome_1/sqlpatch/17027533/17027533_apply_CDB12C_
                                                                                PDB12C_2013Oct16_17_51_49.log
Even the pdb$seed database could be queried this way to confirm that it too is updated with the SQL changes made by the patch. Any new PDB created from the seed PDB also gets these modifications, so no post-installation patch work is necessary for it.




The full output of running opatchauto is given below:
[root@rhel6m1 17272829]# /opt/app/12.1.0/grid/OPatch/opatchauto apply `pwd` -ocmrf ../ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

OPatchauto version : 12.1.0.1.2
OUI version        : 12.1.0.1.0
Running from       : /opt/app/12.1.0/grid

opatchauto log file: /opt/app/12.1.0/grid/cfgtoollogs/opatchauto/17272829/opatch_gi_2013-10-16_14-23-50_deploy.log

Parameter Validation: Successful


Grid Infrastructure home:
/opt/app/12.1.0/grid
RAC home(s):
/opt/app/oracle/product/12.1.0/dbhome_1

Configuration Validation: Successful

Patch Location: /usr/local/patches/17272829
Grid Infrastructure Patch(es): 17027533 17077442 17303297
RAC Patch(es): 17027533 17077442

Patch Validation: Successful

Stopping RAC (/opt/app/oracle/product/12.1.0/dbhome_1) ... Successful
Following database(s) were stopped and will be restarted later during the session: std11g2

Applying patch(es) to "/opt/app/oracle/product/12.1.0/dbhome_1" ...
Patch "/usr/local/patches/17272829/17027533" successfully applied to "/opt/app/oracle/product/12.1.0/dbhome_1".
Patch "/usr/local/patches/17272829/17077442" successfully applied to "/opt/app/oracle/product/12.1.0/dbhome_1".

Stopping CRS ... Successful

Applying patch(es) to "/opt/app/12.1.0/grid" ...
Patch "/usr/local/patches/17272829/17027533" successfully applied to "/opt/app/12.1.0/grid".
Patch "/usr/local/patches/17272829/17077442" successfully applied to "/opt/app/12.1.0/grid".
Patch "/usr/local/patches/17272829/17303297" successfully applied to "/opt/app/12.1.0/grid".

Starting CRS ... Successful

Starting RAC (/opt/app/oracle/product/12.1.0/dbhome_1) ... Successful

SQL changes, if any, are applied successfully on the following database(s): std11g2

Apply Summary:
Following patch(es) are successfully installed:
GI Home: /opt/app/12.1.0/grid: 17027533, 17077442, 17303297
RAC Home: /opt/app/oracle/product/12.1.0/dbhome_1: 17027533, 17077442

On a system with PDBs that have dynamic services created for them, the stopping-RAC step will list the service in the output:
Stopping RAC (/opt/app/oracle/product/12.1.0/dbhome_1) ... Successful
Following database(s) were stopped and will be restarted later during the session: -pdbsvc,cdb12c
If no services have been created for the PDBs, only the CDB is mentioned in the output:
Stopping RAC (/opt/app/oracle/product/12.1.0/dbhome_1) ... Successful
Following database(s) were stopped and will be restarted later during the session: cdb12c

The apply command has an analyze option, described as follows:
-analyze
              This option runs all the required prerequisite checks to confirm
              the patchability of the system without actually patching or
              affecting the system in any way.
Even though it says it "runs all the required prerequisite checks to confirm the patchability", this seems not to be the case: analyze could succeed while the actual patch apply fails.
[root@rhel12c2 patch]# /opt/app/12.1.0/grid/OPatch/opatchauto apply /usr/local/patch/17272829 -ocmrf ocm.rsp -analyze
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

OPatchauto version : 12.1.0.1.2
OUI version        : 12.1.0.1.0
Running from       : /opt/app/12.1.0/grid

opatchauto log file: /opt/app/12.1.0/grid/cfgtoollogs/opatchauto/17272829/opatch_gi_2013-10-17_11-28-37_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.

Parameter Validation: Successful

Grid Infrastructure home:
/opt/app/12.1.0/grid
RAC home(s):
/opt/app/oracle/product/12.1.0/dbhome_1

Configuration Validation: Successful

Patch Location: /usr/local/patch/17272829
Grid Infrastructure Patch(es): 17027533 17077442 17303297
RAC Patch(es): 17027533 17077442

Patch Validation: Successful

Analyzing patch(es) on "/opt/app/oracle/product/12.1.0/dbhome_1" ...
Patch "/usr/local/patch/17272829/17027533" successfully analyzed on "/opt/app/oracle/product/12.1.0/dbhome_1" for apply.
Patch "/usr/local/patch/17272829/17077442" successfully analyzed on "/opt/app/oracle/product/12.1.0/dbhome_1" for apply.

Analyzing patch(es) on "/opt/app/12.1.0/grid" ...
Patch "/usr/local/patch/17272829/17027533" successfully analyzed on "/opt/app/12.1.0/grid" for apply.
Patch "/usr/local/patch/17272829/17077442" successfully analyzed on "/opt/app/12.1.0/grid" for apply.
Patch "/usr/local/patch/17272829/17303297" successfully analyzed on "/opt/app/12.1.0/grid" for apply.

SQL changes, if any, are analyzed successfully on the following database(s): cdb12c

Apply Summary:
Following patch(es) are successfully analyzed:
GI Home: /opt/app/12.1.0/grid: 17027533, 17077442, 17303297
RAC Home: /opt/app/oracle/product/12.1.0/dbhome_1: 17027533, 17077442

opatchauto succeeded.

<<------ Running of actual patch command ----------->>
[root@rhel12c2 patch]# /opt/app/12.1.0/grid/OPatch/opatchauto apply /usr/local/patch/17272829 -ocmrf ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

OPatchauto version : 12.1.0.1.2
OUI version        : 12.1.0.1.0
Running from       : /opt/app/12.1.0/grid

opatchauto log file: /opt/app/12.1.0/grid/cfgtoollogs/opatchauto/17272829/opatch_gi_2013-10-17_11-32-12_deploy.log

Parameter Validation: Successful

Grid Infrastructure home:
/opt/app/12.1.0/grid
RAC home(s):
/opt/app/oracle/product/12.1.0/dbhome_1

Configuration Validation: Successful

Patch Location: /usr/local/patch/17272829
Grid Infrastructure Patch(es): 17027533 17077442 17303297
RAC Patch(es): 17027533 17077442

Patch Validation: Successful

Stopping RAC (/opt/app/oracle/product/12.1.0/dbhome_1) ... Successful
Following database(s) were stopped and will be restarted later during the session: -pdbsvc,cdb12c

Applying patch(es) to "/opt/app/oracle/product/12.1.0/dbhome_1" ...
Patch "/usr/local/patch/17272829/17027533" successfully applied to "/opt/app/oracle/product/12.1.0/dbhome_1".
Patch "/usr/local/patch/17272829/17077442" successfully applied to "/opt/app/oracle/product/12.1.0/dbhome_1".

Stopping CRS ... Successful

Applying patch(es) to "/opt/app/12.1.0/grid" ...
Command "/opt/app/12.1.0/grid/OPatch/opatch napply -phBaseFile /tmp/OraGI12Home1_patchList -local  -invPtrLoc /opt/app/12.1.0/grid/oraInst.loc -oh /opt/app/12.1.0/grid -silent -ocmrf /usr/local/patch/ocm.rsp" execution failed:
UtilSession failed:
Prerequisite check "CheckSystemSpace" failed.

Log file Location for the failed command: /opt/app/12.1.0/grid/cfgtoollogs/opatch/opatch2013-10-17_11-39-04AM_1.log

[WARNING] The local database instance 'cdb12c2' from '/opt/app/oracle/product/12.1.0/dbhome_1' is not running. SQL changes, if any,  will not be applied. Please refer to the log file for more details.
For more details, please refer to the log file "/opt/app/12.1.0/grid/cfgtoollogs/opatchauto/17272829/opatch_gi_2013-10-17_11-32-12_deploy.debug.log".

Apply Summary:
Following patch(es) are successfully installed:
RAC Home: /opt/app/oracle/product/12.1.0/dbhome_1: 17027533, 17077442

Following patch(es) failed to be installed:
GI Home: /opt/app/12.1.0/grid: 17027533, 17077442, 17303297

opatchauto failed with error code 2.
The log files list the failed steps and contain commands that could be executed manually:
-------------------Following steps still need to be executed-------------------

/opt/app/12.1.0/grid/OPatch/opatch napply -phBaseFile /tmp/OraGI12Home1_patchList -local  -invPtrLoc /opt/app/12.1.0/grid/oraInst.loc -oh /opt/app/12.1.0/grid -silent -ocmrf /usr/local/patch/ocm.rsp (TRIED BUT FAILED)

/opt/app/12.1.0/grid/rdbms/install/rootadd_rdbms.sh

/usr/bin/perl /opt/app/12.1.0/grid/crs/install/rootcrs.pl -postpatch
Executing the first command shows how much free disk space must be available before the patch apply:
[grid@rhel12c2 patch]$ /opt/app/12.1.0/grid/OPatch/opatch napply -phBaseFile /tmp/OraGI12Home1_patchList -local  -invPtrLoc /opt/app/12.1.0/grid/oraInst.loc -oh /opt/app/12.1.0/grid -silent -ocmrf /usr/local/patch/ocm.rsp
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation.  All rights reserved.


Oracle Home       : /opt/app/12.1.0/grid
Central Inventory : /opt/app/oraInventory
   from           : /opt/app/12.1.0/grid/oraInst.loc
OPatch version    : 12.1.0.1.2
OUI version       : 12.1.0.1.0
Log file location : /opt/app/12.1.0/grid/cfgtoollogs/opatch/opatch2013-10-17_11-42-52AM_1.log

Verifying environment and performing prerequisite checks...
Prerequisite check "CheckSystemSpace" failed.
The details are:
Required amount of space(10578.277MB) is not available.
UtilSession failed:
Prerequisite check "CheckSystemSpace" failed.
Log file location: /opt/app/12.1.0/grid/cfgtoollogs/opatch/opatch2013-10-17_11-42-52AM_1.log

OPatch failed with error code 73
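Since -analyze did not flag this shortfall, a simple free-space pre-check on the filesystem holding the GI home can save a failed apply. A sketch; the 10578 MB requirement comes from the error above, the space_ok helper is illustrative, and the home path is from this environment:

```shell
#!/bin/sh
# Check that the filesystem holding a given directory has at least the
# required number of megabytes free (df -Pm gives portable output in MB).
space_ok() {
  dir="$1"; required_mb="$2"
  avail_mb=$(df -Pm "$dir" | awk 'NR==2 {print $4}')
  [ "$avail_mb" -ge "$required_mb" ]
}

# Example based on the failure above, where OPatch reported
# "Required amount of space(10578.277MB) is not available":
#   space_ok /opt/app/12.1.0/grid 10579 || echo "free up space before patching"
```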

Unlike in the RAC environment, a single-instance database requires running the "loading modified SQL" step manually. 12c provides the datapatch tool for this purpose, unlike 11.2 where the catbundle script was run. All containers (CDB and PDBs) are updated.
[oracle@rhel6m1 OPatch]$ ./datapatch -verbose
SQL Patching tool version 12.1.0.1.0 on Mon Oct 21 16:58:15 2013
Copyright (c) 2013, Oracle.  All rights reserved.

Connecting to database...OK
Determining current state...
Currently installed SQL Patches:
  PDB CDB$ROOT:
  PDB PDB$SEED:
  PDB PDB12C:
  PDB PDB12CDI:
Currently installed C Patches: 17027533
For the following PDBs: CDB$ROOT
  Nothing to roll back
  The following patches will be applied: 17027533
For the following PDBs: PDB$SEED
  Nothing to roll back
  The following patches will be applied: 17027533
For the following PDBs: PDB12C
  Nothing to roll back
  The following patches will be applied: 17027533
For the following PDBs: PDB12CDI
  Nothing to roll back
  The following patches will be applied: 17027533
Adding patches to installation queue...
Installing patches...
Validating logfiles...
Patch 17027533 apply (pdb CDB$ROOT): SUCCESS
  logfile: /opt/app/oracle/product/12.1.0/dbhome_1/sqlpatch/17027533/17027533_apply_ENT12C_CDBROOT_2013Oct21_16_58_30.log (no errors)
Patch 17027533 apply (pdb PDB$SEED): SUCCESS
  logfile: /opt/app/oracle/product/12.1.0/dbhome_1/sqlpatch/17027533/17027533_apply_ENT12C_PDBSEED_2013Oct21_16_59_06.log (no errors)
Patch 17027533 apply (pdb PDB12C): SUCCESS
  logfile: /opt/app/oracle/product/12.1.0/dbhome_1/sqlpatch/17027533/17027533_apply_ENT12C_PDB12C_2013Oct21_16_59_32.log (no errors)
Patch 17027533 apply (pdb PDB12CDI): SUCCESS
  logfile: /opt/app/oracle/product/12.1.0/dbhome_1/sqlpatch/17027533/17027533_apply_ENT12C_PDB12CDI_2013Oct21_16_59_55.log (no errors)
SQL Patching tool complete on Mon Oct 21 17:00:30 2013
Each container could be queried to check the status of the apply.
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL> select * from dba_registry_sqlpatch;

  PATCH_ID ACTION          STATUS          ACTION_TIME                  DESCRIPTIO LOGFILE
---------- --------------- --------------- ---------------------------- ---------- ----------------------------------------------------------------------
  17027533 APPLY           SUCCESS         21-OCT-13 05.00.27.856979 PM sqlpatch   /opt/app/oracle/product/12.1.0/dbhome_1/sqlpatch/17027533/17027533_app
                                                                                   ly_ENT12C_CDBROOT_2013Oct21_16_58_30.log

SQL> ALTER SESSION SET container = pdb$seed;
Session altered.

SQL> show con_name

CON_NAME
------------------------------
PDB$SEED
SQL>  select * from dba_registry_sqlpatch;

  PATCH_ID ACTION          STATUS          ACTION_TIME                  DESCRIPTIO LOGFILE
---------- --------------- --------------- ---------------------------- ---------- ----------------------------------------------------------------------
  17027533 APPLY           SUCCESS         21-OCT-13 05.00.29.488402 PM sqlpatch   /opt/app/oracle/product/12.1.0/dbhome_1/sqlpatch/17027533/17027533_app
                                                                                   ly_ENT12C_PDBSEED_2013Oct21_16_59_06.log                       

SQL> ALTER SESSION SET container = pdb12c;
Session altered.
                       
SQL> show con_name

CON_NAME
------------------------------
PDB12C
SQL> select * from dba_registry_sqlpatch;

  PATCH_ID ACTION          STATUS          ACTION_TIME                  DESCRIPTIO LOGFILE
---------- --------------- --------------- ---------------------------- ---------- ----------------------------------------------------------------------
  17027533 APPLY           SUCCESS         21-OCT-13 05.00.30.823562 PM sqlpatch   /opt/app/oracle/product/12.1.0/dbhome_1/sqlpatch/17027533/17027533_app
                                                                                   ly_ENT12C_PDB12C_2013Oct21_16_59_32.log
                      
SQL> ALTER SESSION SET container = pdb12cdi;
Session altered.

SQL> show con_name

CON_NAME
------------------------------
PDB12CDI
SQL> select * from dba_registry_sqlpatch;

  PATCH_ID ACTION          STATUS          ACTION_TIME                  DESCRIPTIO LOGFILE
---------- --------------- --------------- ---------------------------- ---------- ----------------------------------------------------------------------
  17027533 APPLY           SUCCESS         21-OCT-13 05.00.30.996406 PM sqlpatch   /opt/app/oracle/product/12.1.0/dbhome_1/sqlpatch/17027533/17027533_app
                                                                                   ly_ENT12C_PDB12CDI_2013Oct21_16_59_55.log

Useful metalink notes
Known Patching Issues for the Oct 15 PSU, Oracle Database 12c R1 using opatchauto and EM [ID 1592252.1]

Update 17 January 2014
More Useful metalink notes
Supplemental Readme - Patch Installation and Deinstallation For 12.1.0.1.x GI PSU [ID 1591616.1]
Example: Manually Apply a 12c GI PSU in Cluster Environment [ID 1594184.1]
Example: Manually Apply a 12c GI PSU in Standalone Environment [ID 1595408.1]
Example: Applying a 12c GI PSU With opatchauto in GI Cluster or Standalone Environment [ID 1594183.1]
What's the sub-patches in 12c GI PSU [ID 1595371.1]

Wednesday, October 16, 2013

Upgrade from 11.1.0.7 to 11.2.0.4 (Clusterware, ASM & RAC)

This is not a step-by-step guide to upgrading from 11.1.0.7 to 11.2.0.4. There's an earlier post showing the upgrade from 11.1.0.7 to 11.2.0.3, and this post is a follow-up to that, highlighting mainly the differences. For Oracle documentation and useful metalink notes, refer to the previous post. The 11.1 environment used is a clone of the one used for the previous post.
The first difference encountered during the upgrade to 11.2.0.4 is with cluvfy:
./runcluvfy.sh stage -pre crsinst -upgrade -n rac1,rac2 -rolling -src_crshome /opt/crs/oracle/product/11.1.0/crs -dest_crshome /opt/app/11.2.0/grid -dest_version 11.2.0.4.0 -fixup  -verbose
Running the cluvfy that came with the installation media flagged several prerequisites as failed. This seems to be an issue/bug in the 11.2.0.4 installation's cluvfy, as the prerequisites flagged as failed were successful when evaluated with the 11.2.0.3 cluvfy, and the checked values haven't changed from 11.2.0.3 to 11.2.0.4. For the most part the failures occurred when evaluating the remote node. If the node running cluvfy was changed (the earlier remote node becomes the local node running cluvfy), the prerequisites previously flagged as failed now succeed, and the same prerequisites are flagged as failed on the new remote node. In short, the runcluvfy.sh that comes with the 11.2.0.4 installation media (in the file p13390677_112040_Linux-x86-64_3of7.zip) is not useful for evaluating whether the prerequisites for the upgrade are met. Following is the list of prerequisites that had issues; cluvfy was run from the node called rac1 (local node), with rac2 as the remote node.
Check: Free disk space for "rac2:/opt/app/11.2.0/grid,rac2:/tmp"
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /opt/app/11.2.0/grid  rac2          UNKNOWN       NOTAVAIL      7.5GB         failed
  /tmp              rac2          UNKNOWN       NOTAVAIL      7.5GB         failed
Result: Free disk space check failed for "rac2:/opt/app/11.2.0/grid,rac2:/tmp"
cluvfy seems unable to get the space usage from the remote node. When cluvfy was run from rac2, the space check on rac2 passed and the space check on rac1 failed.
Checking for Oracle patch "11724953" in home "/opt/crs/oracle/product/11.1.0/crs".
  Node Name     Applied                   Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac2          missing                   11724953                  failed
  rac1          11724953                  11724953                  passed
Result: Check for Oracle patch "11724953" in home "/opt/crs/oracle/product/11.1.0/crs" failed
Patch 11724953 (April 2011 CRS PSU) is required to be present in the 11.1 environment before the upgrade to 11.2.0.4, and cluvfy is unable to verify this on the remote node. This could be checked manually with OPatch.
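Since cluvfy can't verify the patch on the remote node, the opatch lsinventory output can be checked on each node instead. A sketch (the has_patch helper and the ssh invocation in the usage comment are illustrative; the patch number and CRS home are from this environment):

```shell
#!/bin/sh
# Check whether a patch ID appears in OPatch inventory output.
# Pipe in the output of "opatch lsinventory", run locally or via ssh
# for the remote nodes.
has_patch() {
  grep -qw "$1"
}

# Illustrative usage on each node:
#   ssh rac2 /opt/crs/oracle/product/11.1.0/crs/OPatch/opatch lsinventory \
#     | has_patch 11724953 && echo "rac2: patch present" || echo "rac2: patch missing"
```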
Check: TCP connectivity of subnet "192.168.0.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1:192.168.0.85               rac2:192.168.0.85               failed

ERROR:
PRVF-7617 : Node connectivity between "rac1 : 192.168.0.85" and "rac2 : 192.168.0.85" failed
  rac1:192.168.0.85               rac2:192.168.0.89               failed

ERROR:
PRVF-7617 : Node connectivity between "rac1 : 192.168.0.85" and "rac2 : 192.168.0.89" failed
  rac1:192.168.0.85               rac1:192.168.0.89               failed

ERROR:
PRVF-7617 : Node connectivity between "rac1 : 192.168.0.85" and "rac1 : 192.168.0.89" failed
Result: TCP connectivity check failed for subnet "192.168.0.0"
Some of the node connectivity checks also fail. Oddly enough, cluvfy's own nodereach and nodecon checks pass.
ERROR:
PRVF-5449 : Check of Voting Disk location "/dev/sdb2(/dev/sdb2)" failed on the following nodes:
        rac2
        rac2:GetFileInfo command failed.

PRVF-5431 : Oracle Cluster Voting Disk configuration check failed
Even though cvuqdisk-1.0.9-1.rpm is installed, the sharedness check for the vote disk fails on the remote node. (Update 2015/02/20: a workaround for this error is given in 1599025.1.)
Apart from cluvfy, raccheck could also be used to evaluate upgrade readiness:
raccheck -u -o pre
Even though cluvfy fails to evaluate certain prerequisites, OUI is able to evaluate all of them without any issue. Below is the output from the OUI.

Create additional user groups for ASM administration (refer to the previous post) and begin the clusterware upgrade. It is possible to upgrade ASM after the clusterware upgrade, but in this case ASM is upgraded at the same time as the clusterware. This is an out-of-place rolling upgrade. The clusterware stack stays up until rootupgrade.sh is run. Versions before the upgrade:
[oracle@rac1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.1.0.7.0]

[oracle@rac1 ~]$ crsctl query crs softwareversion rac1
Oracle Clusterware version on node [rac1] is [11.1.0.7.0]

[oracle@rac1 ~]$ crsctl query crs softwareversion rac2
Oracle Clusterware version on node [rac2] is [11.1.0.7.0]

[oracle@rac1 ~]$ crsctl query crs releaseversion
11.1.0.7.0
Summary page
rootupgrade.sh execution output from node rac1:
[root@rac1 ~]# /opt/app/11.2.0/grid/rootupgrade.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
Installing Trace File Analyzer
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Replacing Clusterware entries in inittab
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
This updates the software version, but the active version remains at the lower 11.1 version until all nodes are upgraded:
[oracle@rac1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.1.0.7.0]

[oracle@rac1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rac1] is [11.2.0.4.0]
rootupgrade.sh output from node rac2 (the last node):
[root@rac2 ~]# /opt/app/11.2.0/grid/rootupgrade.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
Installing Trace File Analyzer
OLR initialization - successful
Replacing Clusterware entries in inittab
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Start upgrade invoked..
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the OCR.
Started to upgrade the CSS.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 11.2.0.4.0
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
The active version is now updated to 11.2.0.4:
[oracle@rac2 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.4.0]
Once the OK button on the "Execute Configuration Script" dialog is clicked, a set of configuration assistants is run, ASMCA among them. The ASM upgrade is done in a rolling fashion, and the following could be seen in the ASM alert log:
Tue Oct 15 12:52:37 2013
ALTER SYSTEM START ROLLING MIGRATION TO 11.2.0.4.0
Once the configuration assistants have run, the clusterware upgrade is complete. It must also be noted that the upgraded environment had the OCR and vote disk on block devices:
[oracle@rac1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   9687be081c784f98bf9166d561d875b0 (/dev/sdb2) []
Located 1 voting disk(s).
If an upgrade to 12c (from 11.2) is planned, these must be moved to an ASM diskgroup.
Check and change the auto-start status of certain resources so they are always brought up:
[oracle@rac1 ~]$ crsctl stat res -p | egrep -w "NAME|TYPE|AUTO_START" | grep -v DEFAULT_TEMPLATE | awk \
>  'BEGIN { FS="="; state = 0;
>           printf "%-35s %-25s %-18s\n", "Resource Name", "Type", "Auto Start State";
>           printf "%-35s %-25s %-18s\n", "-----------", "------", "----------------";}
>   $1~/NAME/ {appname = $2; state=1};
>   state == 0 {next;}
>   $1~/TYPE/ && state == 1 {apptarget = $2; state=2;}
>   $1~/AUTO_START/ && state == 2 {appstate = $2; state=3;}
>   state == 3 {printf "%-35s %-25s %-18s\n", appname, apptarget, appstate; state=0;}'
Resource Name                       Type                      Auto Start State
-----------                         ------                    ----------------
ora.DATA.dg                         ora.diskgroup.type        never
ora.FLASH.dg                        ora.diskgroup.type        never
ora.LISTENER.lsnr                   ora.listener.type         restore
ora.LISTENER_SCAN1.lsnr             ora.scan_listener.type    restore
ora.asm                             ora.asm.type              never
ora.cvu                             ora.cvu.type              restore
ora.gsd                             ora.gsd.type              always
ora.net1.network                    ora.network.type          restore
ora.oc4j                            ora.oc4j.type             restore
ora.ons                             ora.ons.type              always
ora.rac1.vip                        ora.cluster_vip_net1.type restore
ora.rac11g1.db                      application               1
ora.rac11g1.rac11g11.inst           application               1
ora.rac11g1.rac11g12.inst           application               1
ora.rac11g1.bx.cs                  application               1
ora.rac11g1.bx.rac11g11.srv        application               restore
ora.rac11g1.bx.rac11g12.srv        application               restore
ora.rac2.vip                        ora.cluster_vip_net1.type restore
ora.registry.acfs                   ora.registry.acfs.type    restore
ora.scan1.vip                       ora.scan_vip.type         restore
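The awk state machine in the command above can be exercised standalone before running it against a live cluster. This is a sketch on a fabricated two-resource profile (real `crsctl stat res -p` output carries many more attributes; only the three filtered ones are shown here):

```shell
# Fabricated sample of "crsctl stat res -p" output (assumption: two resources,
# reduced to the NAME/TYPE/AUTO_START attributes the pipeline filters on).
cat > /tmp/res_profile.txt <<'EOF'
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
AUTO_START=never
NAME=ora.rac1.vip
TYPE=ora.cluster_vip_net1.type
AUTO_START=restore
EOF

# Same state machine as the crsctl pipeline: remember NAME, then TYPE,
# then print the row once AUTO_START arrives.
awk 'BEGIN { FS="="; state = 0 }
     $1~/NAME/       {appname = $2; state = 1}
     state == 0      {next}
     $1~/TYPE/ && state == 1       {apptarget = $2; state = 2}
     $1~/AUTO_START/ && state == 2 {printf "%-25s %-28s %s\n", appname, apptarget, $2; state = 0}' \
    /tmp/res_profile.txt
```

Resources reported as `never` (disk groups, ASM) are the candidates for an AUTO_START change.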
Remove the old 11.1 clusterware installation.
During the database shutdown (and before the DB was upgraded to the 11.2 version) the following was seen during the cluster resource stop.
CRS-5809: Failed to execute 'ACTION_SCRIPT' value of '/opt/crs/oracle/product/11.1.0/crs/bin/racgwrap' for 'ora.rac11g1.db'. Error information 'cmd /opt/crs/oracle/product/11.1.0/crs/bin/racgwrap not found', Category : -2, OS error : 2
CRS-2678: 'ora.rac11g1.db' on 'rac1' has experienced an unrecoverable failure
CRS-0267: Human intervention required to resume its availability.
This was due to the ACTION_SCRIPT attribute not being updated to reflect the new clusterware location. To fix it, start the clusterware stack and run the following
[root@rac1 oracle]# crsctl modify resource ora.rac11g1.db -attr "ACTION_SCRIPT=/opt/app/11.2.0/grid/bin/racgwrap"
"/opt/app/11.2.0/grid" is the new clusterware home. More on this is available in Pre 11.2 Database Issues in 11gR2 Grid Infrastructure Environment [948456.1]
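Rather than waiting for the next shutdown to surface the error, every pre-11.2 resource still carrying the old home in ACTION_SCRIPT can be found up front. A hedged sketch, assuming the resource profile has first been dumped with `crsctl stat res -p` to a file (the sample below fabricates that dump):

```shell
# Fabricated stand-in for a "crsctl stat res -p" dump (assumption: one
# pre-11.2 resource still points at the retired 11.1 home).
cat > /tmp/resprofile.txt <<'EOF'
NAME=ora.rac11g1.db
ACTION_SCRIPT=/opt/crs/oracle/product/11.1.0/crs/bin/racgwrap
NAME=ora.rac2.vip
ACTION_SCRIPT=
EOF

# Every resource printed here needs a
# "crsctl modify resource <name> -attr ACTION_SCRIPT=<new GI home>/bin/racgwrap".
awk -F= '$1 == "NAME" {name = $2}
         $1 == "ACTION_SCRIPT" && $2 ~ /11\.1\.0/ {print name, "->", $2}' /tmp/resprofile.txt
```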
The post-clusterware-installation state could be checked with cluvfy and raccheck
cluvfy stage -post crsinst -n rac1,rac2
./raccheck -u -o post
This concludes the upgrade of clusterware and ASM. The next step is the upgrade of the database.



The database software upgrade is an out-of-place upgrade. Though an in-place upgrade is possible for the database software, it is not recommended. Unlike in the previous post, in this upgrade the database software is installed first and then the database is upgraded. Check the database installation readiness with
cluvfy stage -pre dbinst -upgrade -src_dbhome /opt/app/oracle/product/11.1.0/db_1 -dbname racse11g1 -dest_dbhome /opt/app/oracle/product/11.2.0/dbhome_1 -dest_version 11.2.0.4.0 -verbose -fixup
Unset the ORACLE_HOME variable and run the installation.

Give a different location to the current ORACLE_HOME to proceed with the out-of-place upgrade.

Once the database software is installed, the next step is to upgrade the database. Copy utlu112i.sql from the 11.2 ORACLE_HOME/rdbms/admin to a location outside the ORACLE_HOME and run it. The pre-upgrade information tool utlu112i.sql will list any work needed before the upgrade.
SQL> @utlu112i.sql
SQL> SET SERVEROUTPUT ON FORMAT WRAPPED;
SQL> -- Linesize 100 for 'i' version 1000 for 'x' version
SQL> SET ECHO OFF FEEDBACK OFF PAGESIZE 0 LINESIZE 100;
Oracle Database 11.2 Pre-Upgrade Information Tool 10-15-2013 15:22:44
Script Version: 11.2.0.4.0 Build: 001
.
**********************************************************************
Database:
**********************************************************************
--> name:          RAC11G1
--> version:       11.1.0.7.0
--> compatible:    11.1.0.0.0
--> blocksize:     8192
--> platform:      Linux x86 64-bit
--> timezone file: V4
.
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 1100 MB
--> SYSAUX tablespace is adequate for the upgrade.
.... minimum required size: 1445 MB
--> UNDOTBS1 tablespace is adequate for the upgrade.
.... minimum required size: 400 MB
--> TEMP tablespace is adequate for the upgrade.
.... minimum required size: 60 MB
.
**********************************************************************
Flashback: OFF
**********************************************************************
**********************************************************************
Update Parameters: [Update Oracle Database 11.2 init.ora or spfile]
Note: Pre-upgrade tool was run on a lower version 64-bit database.
**********************************************************************
--> If Target Oracle is 32-Bit, refer here for Update Parameters:
-- No update parameter changes are required.
.

--> If Target Oracle is 64-Bit, refer here for Update Parameters:
-- No update parameter changes are required.
.
**********************************************************************
Renamed Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
-- No renamed parameters found. No changes are required.
.
**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
-- No obsolete parameters found. No changes are required
.

**********************************************************************
Components: [The following database components will be upgraded or installed]
**********************************************************************
--> Oracle Catalog Views         [upgrade]  VALID
--> Oracle Packages and Types    [upgrade]  VALID
--> JServer JAVA Virtual Machine [upgrade]  VALID
--> Oracle XDK for Java          [upgrade]  VALID
--> Real Application Clusters    [upgrade]  VALID
--> Oracle Workspace Manager     [upgrade]  VALID
--> OLAP Analytic Workspace      [upgrade]  VALID
--> OLAP Catalog                 [upgrade]  VALID
--> EM Repository                [upgrade]  VALID
--> Oracle Text                  [upgrade]  VALID
--> Oracle XML Database          [upgrade]  VALID
--> Oracle Java Packages         [upgrade]  VALID
--> Oracle interMedia            [upgrade]  VALID
--> Spatial                      [upgrade]  VALID
--> Oracle Ultra Search          [upgrade]  VALID
--> Expression Filter            [upgrade]  VALID
--> Rule Manager                 [upgrade]  VALID
--> Oracle Application Express   [upgrade]  VALID
... APEX will only be upgraded if the version of APEX in
... the target Oracle home is higher than the current one.
--> Oracle OLAP API              [upgrade]  VALID
.
**********************************************************************
Miscellaneous Warnings
**********************************************************************
WARNING: --> The "cluster_database" parameter is currently "TRUE"
.... and must be set to "FALSE" prior to running a manual upgrade.
WARNING: --> Database is using a timezone file older than version 14.
.... After the release migration, it is recommended that DBMS_DST package
.... be used to upgrade the 11.1.0.7.0 database timezone version
.... to the latest version which comes with the new release.
WARNING: --> Database contains INVALID objects prior to upgrade.
.... The list of invalid SYS/SYSTEM objects was written to
.... registry$sys_inv_objs.
.... The list of non-SYS/SYSTEM objects was written to
.... registry$nonsys_inv_objs.
.... Use utluiobj.sql after the upgrade to identify any new invalid
.... objects due to the upgrade.
.... USER ASANGA has 1 INVALID objects.
WARNING: --> EM Database Control Repository exists in the database.
.... Direct downgrade of EM Database Control is not supported. Refer to the
.... Upgrade Guide for instructions to save the EM data prior to upgrade.
WARNING: --> Ultra Search is not supported in 11.2 and must be removed
.... prior to upgrading by running rdbms/admin/wkremov.sql.
.... If you need to preserve Ultra Search data
.... please perform a manual cold backup prior to upgrade.
WARNING: --> Your recycle bin contains 4 object(s).
.... It is REQUIRED that the recycle bin is empty prior to upgrading
.... your database.  The command:
        PURGE DBA_RECYCLEBIN
.... must be executed immediately prior to executing your upgrade.
WARNING: --> Database contains schemas with objects dependent on DBMS_LDAP package.
.... Refer to the 11g Upgrade Guide for instructions to configure Network ACLs.
.... USER WKSYS has dependent objects.
.... USER FLOWS_030000 has dependent objects.
.
**********************************************************************
Recommendations
**********************************************************************
Oracle recommends gathering dictionary statistics prior to
upgrading the database.
To gather dictionary statistics execute the following command
while connected as SYSDBA:

    EXECUTE dbms_stats.gather_dictionary_stats;

**********************************************************************
Oracle recommends removing all hidden parameters prior to upgrading.

To view existing hidden parameters execute the following command
while connected AS SYSDBA:

    SELECT name,description from SYS.V$PARAMETER WHERE name
        LIKE '\_%' ESCAPE '\'

Changes will need to be made in the init.ora or spfile.

**********************************************************************
Oracle recommends reviewing any defined events prior to upgrading.

To view existing non-default events execute the following commands
while connected AS SYSDBA:
  Events:
    SELECT (translate(value,chr(13)||chr(10),' ')) FROM sys.v$parameter2
      WHERE  UPPER(name) ='EVENT' AND  isdefault='FALSE'

  Trace Events:
    SELECT (translate(value,chr(13)||chr(10),' ')) from sys.v$parameter2
      WHERE UPPER(name) = '_TRACE_EVENTS' AND isdefault='FALSE'

Changes will need to be made in the init.ora or spfile.

**********************************************************************
Elapsed: 00:00:01.26
For 11.1.0.7 to 11.2.0.4 upgrades, if the database timezone file version is lower than 14 there are no additional patches needed before the upgrade. But it's recommended to upgrade the database to the 11.2.0.4 timezone version once the upgrade is done. The timezone upgrade could also be done at the same time as the database upgrade using DBUA. More on 1562142.1
If there is a large number of records in the AUD$ and FGA_LOG$ tables, pre-processing these tables could speed up the database upgrade. More on 1329590.1
A large number of files in the $ORACLE_HOME/`hostname -s`_$ORACLE_SID/sysman/emd/upload location could also lengthen the upgrade time. Refer to 870814.1 and 837570.1 for more information.
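Whether this backlog applies can be checked with a simple file count before launching DBUA. A sketch, using a scratch directory in place of the real upload path:

```shell
# Scratch directory standing in for the real
# $ORACLE_HOME/`hostname -s`_$ORACLE_SID/sysman/emd/upload path.
upload_dir=/tmp/emd_upload_demo
rm -rf "$upload_dir" && mkdir -p "$upload_dir"
for i in 1 2 3 4 5; do touch "$upload_dir/agent$i.xml"; done   # fabricated upload files

# A count in the thousands here is the backlog that notes 870814.1 / 837570.1
# suggest clearing before the upgrade.
find "$upload_dir" -type f | wc -l
```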
Upgrade summary before DBUA is executed

Upgrade summary after the upgrade

Since in 11.2 versions _external_scn_rejection_threshold_hours is set to 24 by default, commenting out this parameter after the upgrade is not a problem.
Check the timezone values of the upgraded database.
SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
  2          FROM DATABASE_PROPERTIES
  3          WHERE PROPERTY_NAME LIKE 'DST_%'
  4          ORDER BY PROPERTY_NAME;

PROPERTY_NAME                  VALUE
------------------------------ ----------
DST_PRIMARY_TT_VERSION         14
DST_SECONDARY_TT_VERSION       0
DST_UPGRADE_STATE              NONE


SQL>  SELECT VERSION FROM v$timezone_file;

   VERSION
----------
        14

SQL> select TZ_VERSION from registry$database;

TZ_VERSION
----------
         4
If the timezone file value and the value shown in the database registry differ, the registry could be updated as per 1509653.1
SQL> update registry$database set TZ_VERSION = (select version FROM v$timezone_file);

1 row updated.

SQL> commit;

Commit complete.

SQL> select TZ_VERSION from registry$database;

TZ_VERSION
----------
        14
The remote_listener parameter will contain both the SCAN VIP and the pre-11.2 value. This could be reset to have only the SCAN IPs.
SQL> select name,value from v$parameter where name='remote_listener';

NAME            VALUE
--------------- ----------------------------------------
remote_listener LISTENERS_RAC11G1, rac-scan-vip:1521
Finally, update the compatible parameter to 11.2.0.4 once the upgrade is deemed satisfactory. This concludes the upgrade from 11.1.0.7 to 11.2.0.4.

Useful metalink notes
RACcheck Upgrade Readiness Assessment [ID 1457357.1]
Complete Checklist for Manual Upgrades to 11gR2 [ID 837570.1]
Complete Checklist to Upgrade the Database to 11gR2 using DBUA [ID 870814.1]
Pre 11.2 Database Issues in 11gR2 Grid Infrastructure Environment [ID 948456.1]
Things to Consider Before Upgrading to 11.2.0.3/11.2.0.4 Grid Infrastructure/ASM [ID 1363369.1]
Actions For DST Updates When Upgrading To Or Applying The 11.2.0.4 Patchset [ID 1579838.1]
How to Pre-Process SYS.AUD$ Records Pre-Upgrade From 10.1 or later to 11gR1 or later. [ID 1329590.1]
Things to Consider Before Upgrading to 11.2.0.3 to Avoid Poor Performance or Wrong Results [ID 1392633.1]
Things to Consider Before Upgrading to 11.2.0.4 to Avoid Poor Performance or Wrong Results [ID 1645862.1]
Things to Consider Before Upgrading to Avoid Poor Performance or Wrong Results (11.2.0.X) [ID 1904820.1]

Related Posts
Upgrading from 10.2.0.4 to 10.2.0.5 (Clusterware, RAC, ASM)
Upgrade from 10.2.0.5 to 11.2.0.3 (Clusterware, RAC, ASM)
Upgrade from 11.1.0.7 to 11.2.0.3 (Clusterware, ASM & RAC)
Upgrading from 11.1.0.7 to 11.2.0.3 with Transient Logical Standby
Upgrading from 11.2.0.1 to 11.2.0.3 with in-place upgrade for RAC
In-place upgrade from 11.2.0.2 to 11.2.0.3
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 1
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 2
Upgrading from 11gR2 (11.2.0.3) to 12c (12.1.0.1) Grid Infrastructure
Upgrading RAC from 11.2.0.4 to 12.1.0.2 - Grid Infrastructure

Update on 2016-01-05

On a recent upgrade, two pre-req checks failed which had been successful on some of the earlier environments.
The first one was related to the ASMLib check.
Checking ASMLib configuration.
  Node Name                             Status
  ------------------------------------  ------------------------
  abx-db2                               (failed) ASMLib configuration is incorrect.
  abx-db1                               (failed) ASMLib configuration is incorrect.
Result: Check for ASMLib configuration failed.
However according to Linux: cluvfy reports "ASMLib configuration is incorrect" (Doc ID 1541309.1) this is ignorable if the manual check of the ASMLib status returns OK.
The second one was OCR sharedness.
Checking OCR integrity...
Check for compatible storage device for OCR location "/dev/mapper/mpath1p1"...

Checking OCR device "/dev/mapper/mpath1p1" for sharedness...
ERROR:
PRVF-4172 : Check of OCR device "/dev/mapper/mpath1p1" for sharedness failed
Could not find the storage
Check for compatible storage device for OCR location "/dev/mapper/mpath2p1"...

Checking OCR device "/dev/mapper/mpath2p1" for sharedness...
ERROR:
PRVF-4172 : Check of OCR device "/dev/mapper/mpath2p1" for sharedness failed
Could not find the storage

OCR integrity check failed
Again this is ignorable according to PRVF-4172 Check Of OCR Device For Sharedness Failed (Doc ID 1600719.1) and INS-20802 PRVF-4172 Reported after Successful Upgrade to 11gR2 Grid Infrastructure (Doc ID 1051763.1). These MOS notes refer mostly to Solaris, but the environment in this case was 64-bit Linux (RHEL 5).

Update on 2016-01-15

During the ASM upgrade, the following error could be seen in the alert log of the ASM instance being upgraded.
ALTER SYSTEM STOP ROLLING MIGRATION
KSXP:RM:       ->
KSXP:RM:       ->arg:[hgTo6 (111070->112040) @ inrm3, pay 30203]
KSXP:RM:       ->rm:[cver 30203 nver 30204 cifv 0 swtch/ing 1/0 flux 3 lastunrdy 0/put4]
KSXP:RM:       ->pages:[cur 2 refs 25 tot 25]
KSXP:RM:       ->oob:[ia changed 0 sync 1 lw 0 sg 0 sg_a 0 ssg 0 parm 0]
KSXP:RM:       ->ia:[changed 0 compat 1 sg1 {[0/1]=192.168.1.87} sg2 {[0/1]=192.168.1.87}]
KSXP:RM:  RET hgTo [SKGXP] incompat3 [not-native] at 112040
KSXP:RM:       ->
Errors in file /opt/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_rbal_5838.trc:
ORA-15160: rolling migration internal fatal error in module SKGXP,hgTo:not-native
Private Interface 'eth1:1' configured from GPnP for use as a private interconnect.
  [name='eth1:1', type=1, ip=169.254.239.174, mac=08-00-27-79-49-de, net=169.254.0.0/16, mask=255.255.0.0, use=haip:cluster_interconnect/62]
  [name='eth0', type=1, ip=192.168.0.85, mac=08-00-27-c4-55-af, net=192.168.0.0/24, mask=255.255.255.0, use=public/1]
  [name='eth0:1', type=1, ip=192.168.0.89, mac=08-00-27-c4-55-af, net=192.168.0.0/24, mask=255.255.255.0, use=public/1]
  [name='eth0:2', type=1, ip=192.168.0.92, mac=08-00-27-c4-55-af, net=192.168.0.0/24, mask=255.255.255.0, use=public/1]
Cluster communication is configured to use the following interface(s) for this instance
  169.254.239.174
There are two MOS notes regarding this error.
ORA-15160: rolling migration internal fatal error in module SKGXP,valNorm:not-native (Doc ID 1682591.1)
Oracle Clusterware and RAC Support for RDS Over Infiniband (Doc ID 751343.1)
However, the solutions mentioned in the MOS notes are not needed if the ASM instances were using the UDP protocol before the upgrade. This could be verified by looking at the ASM alert log. Before the upgrade the ASM alert log shows
Starting up ORACLE RDBMS Version: 11.1.0.7.0.
Using parameter settings in server-side pfile /opt/app/oracle/product/11.1.0/asm/dbs/init+ASM1.ora
Cluster communication is configured to use the following interface(s) for this instance
  10.0.3.2
cluster interconnect IPC version:Oracle UDP/IP (generic)
After the upgrade the ASM alert log shows
ORACLE_HOME = /opt/app/11.2.0/grid
Using parameter settings in server-side spfile /dev/mapper/mpath6p1
Cluster communication is configured to use the following interface(s) for this instance
  169.254.213.90
cluster interconnect IPC version:Oracle UDP/IP (generic)
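The protocol check can be done mechanically rather than by eyeballing the log. A sketch, using a fabricated alert-log fragment in place of the real file:

```shell
# Fabricated fragment of an ASM alert log (assumption: the real pre-upgrade log
# lives under the 11.1 ASM home and the post-upgrade one under the diag trace dir).
cat > /tmp/asm_alert_sample.log <<'EOF'
Cluster communication is configured to use the following interface(s) for this instance
  10.0.3.2
cluster interconnect IPC version:Oracle UDP/IP (generic)
EOF

# "UDP/IP" here means the RDS/Infiniband workarounds in the MOS notes do not apply.
grep "cluster interconnect IPC version" /tmp/asm_alert_sample.log
```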
If the upgraded database edition is Standard, refer to the following posts to rectify expdp/impdp related issues.
DBMS_AW_EXP: SYS.AW$EXPRESS: OLAP not enabled After Upgrading to 11.2.0.4 Standard Edition
ORA-39127: unexpected error from call to export_string :=SYS.DBMS_CUBE_EXP.SCHEMA_INFO_EXP while Exporting

Wednesday, October 9, 2013

Role Separation and External Tables

Whether a select query on an external table succeeds in a system with role separation, single instance or RAC, depends on the type of connection (local or remote) and on the permissions of the directories/files used by the external table. This seems odd, as the output of a query shouldn't depend on the nature of the connection: if a query fails or returns data for a local connection, the same should happen for a remote connection, and if one set of permissions returns results for a locally connected session, the same permissions should also return data for a remote connection.
The most likely explanation is that, since under role separation the listener runs as the grid user, certain commands appear to get executed as the grid user. This muddies the "role separation" when the grid user is involved with database objects. Whether this is a bug or not is not confirmed yet; an SR is ongoing.
Below is the test case to recreate the behavior.
1. Create a directory to hold the external table related files in a location that could be made common to both the grid and oracle users. In this case /usr/local is chosen as the location.
[root@rhel6m1 local]# cd /usr/local/
[root@rhel6m1 local]# mkdir exdata
[root@rhel6m1 local]# chmod 770 exdata
[root@rhel6m1 local]# chown oracle:oinstall exdata
Directory permissions are set to 770 for the first set of test cases.
2. The external table is created using a shell script and the preprocessor clause (this external table example is available in Oracle Magazine 2012 Nov, if memory is correct!). Create the shell script file that will be the basis for the external table with the following content
[oracle@rhel6m1 exdata]$ more run_df.sh
#!/bin/sh
/bin/df -Pl
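The access parameters used later ("skip 1 fields terminated by whitespace") can be previewed outside the database by applying the same treatment to the script's output. This sketch shows the six fields the DF table will map:

```shell
# Mimic "skip 1 fields terminated by whitespace ldrtrim": drop the df header
# and print the six columns (fsname, blocks, used, avail, capacity, mount).
/bin/df -Pl | awk 'NR > 1 {print $1, $2, $3, $4, $5, $6}'
```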
3. Create a directory object and grant read,write permission on the directory object to the user that will create the external table
SQL> create or replace directory exec_dir as '/usr/local/exdata';

SQL> grant read,write on directory exec_dir to asanga;
Check the user has the privileges on the directory
SQL> select * from dba_tab_privs where table_name='EXEC_DIR';

GRANTEE  OWNER TABLE_NAME GRANTOR  PRIVILEGE  GRA HIE
-------- ----- ---------- -------- ---------- --- ---
ASANGA   SYS   EXEC_DIR   SYS      READ       NO  NO
ASANGA   SYS   EXEC_DIR   SYS      WRITE      NO  NO
4. Create the external table
SQL> conn asanga/asa

CREATE TABLE "DF"
    ( "FSNAME" VARCHAR2(100),
    "BLOCKS" NUMBER,
    "USED" NUMBER,
    "AVAIL" NUMBER,
    "CAPACITY" VARCHAR2(10),
    "MOUNT" VARCHAR2(100)
    )
    ORGANIZATION EXTERNAL
   ( TYPE ORACLE_LOADER
   DEFAULT DIRECTORY "EXEC_DIR"
   ACCESS PARAMETERS
   ( records delimited by newline
     preprocessor exec_dir:'run_df.sh'
     skip 1 fields terminated by whitespace ldrtrim
   )
   LOCATION
   ( "EXEC_DIR":'run_df.sh' ));
5. Case 1. Only oracle user having permission on the external table file (run_df.sh).
Set permissions on the external file so that only the oracle user has read and execute permissions, with no other permissions set on the file
[oracle@rhel6m1 exdata]$ chmod 500 run_df.sh
[oracle@rhel6m1 exdata]$ ls -l run_df.sh
-r-x------. 1 oracle oinstall 22 Oct  9 14:01 run_df.sh
Test 1.1 Run select query on the external table connecting with a local connection
[oracle@rhel6m1 exdata]$ sqlplus  asanga/asa

SQL> select  * from df;

FSNAME         BLOCKS       USED      AVAIL CAPACITY   MOUNT
---------- ---------- ---------- ---------- ---------- ----------
/dev/sda3    37054144   32706324    2465564 93%        /
tmpfs         1961580     228652    1732928 12%        /dev/shm
/dev/sda1       99150      27725      66305 30%        /boot
Test 1.2 Run the select query on the external table with a remote connection. In this case a remote SQL*Plus connection is used; the output is the same even if a JDBC connection is used.
[oracle@rhel6m1 exdata]$ sqlplus  asanga/asa@std11g21

SQL> select * from df;
select * from df
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-29400: data cartridge error
KUP-04095: preprocessor command /usr/local/exdata/run_df.sh encountered error
"/bin/sh: /usr/local/exdata/run_df.sh: Permission denied
As seen from the above output, no rows are returned and an error occurs. The interesting line is "/usr/local/exdata/run_df.sh: Permission denied"; there was no such permission issue when executing with a local connection! Same table, same query, and different outputs depending on whether the connection is local or remote.
6. Case 2 Oracle user having execute and group having read permission.
All else remaining the same, change the permissions of the external file to 140 so the oracle user has execute permission and the group (oinstall) has read permission
[oracle@rhel6m1 exdata]$ chmod 140 run_df.sh
[oracle@rhel6m1 exdata]$ ls -l run_df.sh
---xr-----. 1 oracle oinstall 22 Oct  9 14:01 run_df.sh
Test 2.1 Run select query with a local connection
[oracle@rhel6m1 exdata]$ sqlplus  asanga/asa

SQL> select * from df;
select * from df
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-29400: data cartridge error
KUP-04095: preprocessor command /usr/local/exdata/run_df.sh encountered error
"/bin/sh: /usr/local/exdata/run_df.sh: Permission denied
This time running with a local connection fails. Same error as before.
Test 2.2 Run select query with a remote connection
[oracle@rhel6m1 exdata]$ sqlplus  asanga/asa@std11g21

SQL> select  * from df;

FSNAME         BLOCKS       USED      AVAIL CAPACITY   MOUNT
---------- ---------- ---------- ---------- ---------- ----------
/dev/sda3    37054144   32697924    2473964 93%        /
tmpfs         1961580     228652    1732928 12%        /dev/shm
/dev/sda1       99150      27725      66305 30%        /boot
Running with a remote connection succeeds. Again, same table, same query, and different outputs depending on whether the connection is local or remote. Looking at the file permissions it is clear that, when the connection is remote, the file is read by a user that belongs to the oinstall group and executed by the oracle user. The only other user in the oinstall group besides oracle is grid. As remote connections come in via the listener, which runs as the grid user, remote connections are able to read the external file.
7. Case 3 Oracle user having read and execute permission and group having read permission.
This is an amalgamation of the permissions from the above cases. Change the external file permissions so the oracle user has read and execute permissions and the group has read permission.
[oracle@rhel6m1 exdata]$ chmod 540 run_df.sh
[oracle@rhel6m1 exdata]$ ls -l run_df.sh
-r-xr-----. 1 oracle dba 22 Oct  9 14:01 run_df.sh
Test 3.1 Run select query with a local connection
[oracle@rhel6m1 exdata]$ sqlplus  asanga/asa

SQL> select  * from df;

FSNAME         BLOCKS       USED      AVAIL CAPACITY   MOUNT
---------- ---------- ---------- ---------- ---------- ----------
/dev/sda3    37054144   32706324    2465564 93%        /
tmpfs         1961580     228652    1732928 12%        /dev/shm
/dev/sda1       99150      27725      66305 30%        /boot
Test 3.2 Run select query with a remote connection
[oracle@rhel6m1 exdata]$ sqlplus  asanga/asa@std11g21

SQL> select  * from df;

FSNAME         BLOCKS       USED      AVAIL CAPACITY   MOUNT
---------- ---------- ---------- ---------- ---------- ----------
/dev/sda3    37054144   32697924    2473964 93%        /
tmpfs         1961580     228652    1732928 12%        /dev/shm
/dev/sda1       99150      27725      66305 30%        /boot
This time both local and remote connections return rows. So when an external table is created with a preprocessor, to get the same behavior for both local and remote connections, permissions must be set for both the oracle user and the group.
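The cases so far reduce to plain Unix permission arithmetic: the dedicated server process runs as oracle (owner bits), while work routed through the grid-owned listener is also governed by the group bits via oinstall. A minimal sketch of that arithmetic, using the modes from the cases above (the helper function is illustrative, not part of any Oracle tooling):

```shell
# can_read MODE WHO: report whether WHO ("owner" or "group") has the read bit
# in a three-digit octal mode (read = 4, write = 2, execute = 1 per digit).
can_read() {
  mode=$1; who=$2
  case $who in
    owner) digit=$(echo "$mode" | cut -c1) ;;
    group) digit=$(echo "$mode" | cut -c2) ;;
  esac
  if [ $(( digit & 4 )) -ne 0 ]; then echo yes; else echo no; fi
}

can_read 500 owner   # case 1: oracle (local connection) can read
can_read 500 group   # case 1: oinstall (grid/listener path) cannot -> remote fails
can_read 140 owner   # case 2: oracle has execute only -> local fails
can_read 140 group   # case 2: oinstall can read -> remote works
can_read 540 group   # case 3: both can read -> both work
```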




If the external table is created without a preprocessor but using a static data file, the permissions of the file make no difference to the output whether the connection is local or remote. However, in this case it's the permissions of the directory that matter, and the directory permissions are also applicable to the preprocessor cases mentioned above.
8. A static file containing a comma separated list of values will be created with the following command, which extracts information about file permissions and ownership. Execute it in a folder with several files
for i in `ls`; do ls -l $i | awk '{print $1","$3","$4","$9}' >> permission.txt ; done

[oracle@rhel6m1 exdata]$ more permission.txt
-rw-r--r--.,oracle,asmadmin,DF_27539.log
-rw-r--r--.,oracle,asmadmin,DF_27545.log
-rw-r--r--.,oracle,asmadmin,FILE_PERMS_27818.log
-rw-r--r--.,oracle,asmadmin,FILE_PERMS_27842.log
-r-xr-----.,oracle,dba,run_df.sh
-rwxr-xr-x.,root,root,status.sh
-rwxr-xr-x.,root,root,t.sh
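As an aside, the loop runs `ls -l` once per file and mis-handles names containing spaces; a single `ls -l` pass produces the same csv. A sketch, using a throwaway demo directory (`NR > 1` skips the `total` line that `ls -l` prints first):

```shell
# Throwaway directory for the demo; the real run would be done in exdata.
rm -rf /tmp/perm_demo && mkdir -p /tmp/perm_demo
cd /tmp/perm_demo
touch status.sh t.sh

# One ls invocation instead of one per file; output kept outside the
# directory so ls does not list the half-written csv itself.
ls -l | awk 'NR > 1 {print $1","$3","$4","$9}' > /tmp/perm_demo.txt
cat /tmp/perm_demo.txt
```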
Copy the generated file to the exdata directory created in the previous cases. Set the permissions of the permission.txt file such that only the oracle user has read permission, with no other permissions set.
[oracle@rhel6m1 exdata]$ chmod 400 permission.txt
[oracle@rhel6m1 exdata]$ ls -l permission.txt
-r--------. 1 oracle oinstall 272 Oct  9 15:38 permission.txt
9. Create the external table using this csv file
create table file_perms (
permission varchar2(12),
owner varchar2(15),
groups varchar2(15),
file_name varchar2(40))
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY exec_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  ) LOCATION ('permission.txt'))
   PARALLEL 5
   REJECT LIMIT UNLIMITED;
10. Case 4 CSV file having only read permission for the oracle user
Test 4.1 Run select query with a local connection
[oracle@rhel6m1 exdata]$ sqlplus  asanga/asa

SQL> select * from file_perms;

PERMISSION   OWNER           GROUPS          FILE_NAME
------------ --------------- --------------- -------------------------
-rw-r--r--.  oracle          asmadmin        DF_27539.log
-rw-r--r--.  oracle          asmadmin        DF_27545.log
-rw-r--r--.  oracle          asmadmin        FILE_PERMS_27818.log
-rw-r--r--.  oracle          asmadmin        FILE_PERMS_27842.log
-r-xr-----.  oracle          dba             run_df.sh
-rwxr-xr-x.  root            root            status.sh
-rwxr-xr-x.  root            root            t.sh
Local connection returns rows.
Test 4.2 Run select with a remote connection
[oracle@rhel6m1 exdata]$ sqlplus  asanga/asa@std11g21

SQL> select * from file_perms;

PERMISSION   OWNER           GROUPS          FILE_NAME
------------ --------------- --------------- -------------------------
-rw-r--r--.  oracle          asmadmin        DF_27539.log
-rw-r--r--.  oracle          asmadmin        DF_27545.log
-rw-r--r--.  oracle          asmadmin        FILE_PERMS_27818.log
-rw-r--r--.  oracle          asmadmin        FILE_PERMS_27842.log
-r-xr-----.  oracle          dba             run_df.sh
-rwxr-xr-x.  root            root            status.sh
-rwxr-xr-x.  root            root            t.sh
The remote connection returns rows as well, and there's no difference in the output despite the oinstall group not having any permissions on the csv file.
11. Case 5 Change the permissions on the directory containing the files such that only oracle has the full set of permissions and no others. Only the directory's permissions are changed; the permissions of the files inside the directory are unchanged.
[oracle@rhel6m1 local]$ chmod 700 exdata
drwx------. 2 oracle oinstall 4096 Oct  9 15:48 exdata
Test 5.1 Run select query with a local connection
[oracle@rhel6m1 exdata]$ sqlplus  asanga/asa

SQL> select * from file_perms;

PERMISSION   OWNER           GROUPS          FILE_NAME
------------ --------------- --------------- -------------------------
-rw-r--r--.  oracle          asmadmin        DF_27539.log
-rw-r--r--.  oracle          asmadmin        DF_27545.log
-rw-r--r--.  oracle          asmadmin        FILE_PERMS_27818.log
-rw-r--r--.  oracle          asmadmin        FILE_PERMS_27842.log
-r-xr-----.  oracle          dba             run_df.sh
-rwxr-xr-x.  root            root            status.sh
-rwxr-xr-x.  root            root            t.sh
Local connection returns rows as before.
Test 5.2 Run select with a remote connection
[oracle@rhel6m1 exdata]$ sqlplus  asanga/asa@std11g21

SQL> select * from file_perms;
select * from file_perms
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04040: file permission.txt in EXEC_DIR not found
The remote connection fails to return any rows. The error says the file permission.txt is not found in the directory, which is not true. Again, same table, same query, two different outputs based on whether the connection is local or remote. The same behavior could be seen when running against the table that uses the shell script file as well.
Test 5.3 Run select query with a local connection
[oracle@rhel6m1 exdata]$ sqlplus  asanga/asa

SQL> select  * from df;

FSNAME         BLOCKS       USED      AVAIL CAPACITY   MOUNT
---------- ---------- ---------- ---------- ---------- ----------
/dev/sda3    37054144   32706324    2465564 93%        /
tmpfs         1961580     228652    1732928 12%        /dev/shm
/dev/sda1       99150      27725      66305 30%        /boot
Test 5.4 Run the select query with a remote connection
[oracle@rhel6m1 exdata]$ sqlplus  asanga/asa@std11g21

SQL> select * from df;
select * from df
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04040: file run_df.sh in /usr/local/exdata not found
12. Case 6: The oracle user has full permissions and the group has execute permission on the directory.
Change the permissions of the directory containing the CSV file as shown below.
[oracle@rhel6m1 local]$ chmod 710 exdata
drwx--x---. 2 oracle oinstall 4096 Oct  9 15:59 exdata
Test 6.1 Run select query with a local connection
[oracle@rhel6m1 exdata]$ sqlplus  asanga/asa

SQL> select * from file_perms;

PERMISSION   OWNER           GROUPS          FILE_NAME
------------ --------------- --------------- -------------------------
-rw-r--r--.  oracle          asmadmin        DF_27539.log
-rw-r--r--.  oracle          asmadmin        DF_27545.log
-rw-r--r--.  oracle          asmadmin        FILE_PERMS_27818.log
-rw-r--r--.  oracle          asmadmin        FILE_PERMS_27842.log
-r-xr-----.  oracle          dba             run_df.sh
-rwxr-xr-x.  root            root            status.sh
-rwxr-xr-x.  root            root            t.sh
Local connection returns rows.
Test 6.2 Run select with a remote connection
[oracle@rhel6m1 exdata]$ sqlplus  asanga/asa@std11g21

SQL> select * from file_perms;

PERMISSION   OWNER           GROUPS          FILE_NAME
------------ --------------- --------------- -------------------------
-rw-r--r--.  oracle          asmadmin        DF_27539.log
-rw-r--r--.  oracle          asmadmin        DF_27545.log
-rw-r--r--.  oracle          asmadmin        FILE_PERMS_27818.log
-rw-r--r--.  oracle          asmadmin        FILE_PERMS_27842.log
-r-xr-----.  oracle          dba             run_df.sh
-rwxr-xr-x.  root            root            status.sh
-rwxr-xr-x.  root            root            t.sh
The remote connection returns rows as well. Unlike before, the output is the same for both remote and local connections. This permission set (710) also works for the external table based on the shell script.
[oracle@rhel6m1 exdata]$ sqlplus  asanga/asa@std11g21

SQL> select  * from df;

FSNAME         BLOCKS       USED      AVAIL CAPACITY   MOUNT
---------- ---------- ---------- ---------- ---------- ----------
/dev/sda3    37054144   32697924    2473964 93%        /
tmpfs         1961580     228652    1732928 12%        /dev/shm
/dev/sda1       99150      27725      66305 30%        /boot
In summary, when external tables are created in a role-separated environment, group permissions must be set on the files and directories used to create the external tables. If not, the output may vary depending on whether the connection is local or remote.
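The working setup from case 6 can be sketched as follows. The directory here is a scratch one for illustration; the test system used /usr/local/exdata owned by oracle:oinstall.

```shell
# Grant the owner full access and the group traverse-only access (mode 710),
# the combination that made both local and remote connections work above.
dir=$(mktemp -d)
chmod 710 "$dir"
perms=$(stat -c '%a' "$dir")       # GNU stat; prints the octal mode
echo "$perms"
```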
Tested on 11.2.0.3, 11.2.0.4 and 12.1.0.1 (non-CDB)

Wednesday, October 2, 2013

Database Performance Monitor - oratop

oratop is a database performance monitoring tool available for download from MOS. It has an interface similar to the Unix/Linux top command. oratop can also be used to monitor remote databases. It is RAC aware and gathers data from internal views. Judging by the views that require select privilege, it may also be used to monitor Standard Edition databases, as none of the views require a diagnostic or tuning license.
The tool has many options; by default it lists cumulative wait events.

However, this can be changed to list real-time wait events as well.

There is nothing to install; simply run the downloaded binary (after renaming it to oratop). It is currently available for 11.2.0.3, 11.2.0.4 and 12c.
oratop is available for download from the following MOS note:
oratop - Utility for Near Real-time Monitoring of Databases, RAC and Single Instance [ID 1500864.1]


Tuesday, October 1, 2013

Duplicate Database Without Target Connection or Catalog Connection

There are many techniques for database duplication. An earlier post lists the steps for duplicating a RAC database to a single instance using active database duplication. This post lists the steps for duplicating a source database without connecting to the target or an RMAN catalog. For more information refer to Backup-Based Duplication Without a Target and a Recovery Catalog Connection in the Oracle documentation.
Since no connectivity to the source database is required for this duplication technique, there is no need to create a static listener configuration on the duplicate host, nor to add a TNS entry to the source tnsnames.ora file.
However, a default listener is still needed for the database.
The source database is called ent11g2 and the duplicated database will be named dupdb.
1. On the source database create a full database backup and copy the backup to the host where the duplicate database will be created. The backup location need not be the same as the source location. For this post the backups are copied to "/home/oracle/backup" on the host used for the duplicate database. The source database uses OMF names and the recovery destination for the backups. If control file auto backup is enabled, copy the control file auto backup along with the backup sets to the host where the duplicate database will be created.
cd /data/flash_recovery/ENT11G2
ls
archivelog  autobackup  backupset  controlfile  flashback  onlinelog
cd autobackup/2013_10_01/
ls
o1_mf_s_827667431_94o9gqww_.bkp  <==== auto backup of control file
scp -C o1_mf_s_827667431_94o9gqww_.bkp 192.168.0.99:/home/oracle/backup
o1_mf_s_827667431_94o9gqww_.bkp         100%   10MB  10.2MB/s
cd backupset/2013_10_01
$ scp -C * 192.168.0.99:/home/oracle/backup
o1_mf_annnn_TAG20131001T113643_94o9fw2g_.bkp                  100%   14MB   6.9MB/s   00:02
o1_mf_annnn_TAG20131001T113710_94o9gpnp_.bkp                  100%   11KB  10.5KB/s   00:00
o1_mf_nnndf_TAG20131001T113645_94o9fxjn_.bkp                  100% 2170MB  13.2MB/s   02:45
That is the only time the source database is involved. The rest of the steps happen on the host used for the duplication.
2. Create a password file for the duplicate database. This is not a must for the duplication (this method works even without a password file) but may be required later when the duplicate database is in use.
orapwd file=orapwdupdb password=password ignorecase=y
3. Create the directory for the audit files. Without this, the duplication would fail when the audit trail is set to OS.
mkdir -p /opt/app/oracle/admin/dupdb/adump
4. Create an init file with the db_name parameter set in it.
cat initdupdb.ora
db_name='dupdb'
5. Start the duplicate database in nomount mode.
export ORACLE_SID=dupdb
SQL> startup nomount;
6. Connect to the duplicate database using an RMAN auxiliary connection
[oracle@hpc5 dbs]$ rman auxiliary /
Recovery Manager: Release 11.2.0.3.0 - Production on Tue Oct 1 12:58:49 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
connected to auxiliary database: DUPDB (not mounted)
and run the duplicate command
DUPLICATE TARGET DATABASE
TO dupdb
SPFILE PARAMETER_VALUE_CONVERT 'ent11g2', 'dupdb', '/data/oradata', '/opt/app/oracle/oradata', 
'/data/flash_recovery','/opt/app/oracle/flash_recovery_area','ENT11G2','DUPDB'
set db_create_file_dest='/opt/app/oracle/oradata'
set db_recovery_file_dest='/opt/app/oracle/flash_recovery_area'
BACKUP LOCATION '/home/oracle/backup';
When data files use OMF names, rather than using the *_file_name_convert parameters, the duplication must use db_create_file_dest. Refer to the Specifying OMF or ASM Alternative Names for Duplicate Database Files section in the Oracle documentation.

Full output of the duplicate command is given below.
RMAN> DUPLICATE TARGET DATABASE
2> TO dupdb
3> SPFILE PARAMETER_VALUE_CONVERT 'ent11g2', 'dupdb', '/data/oradata', '/opt/app/oracle/oradata', '/data/flash_recovery','/opt/app/oracle/flash_recovery_area','ENT11G2','DUPDB'
4> set db_create_file_dest='/opt/app/oracle/oradata'
5> set db_recovery_file_dest='/opt/app/oracle/flash_recovery_area'
6> BACKUP LOCATION '/home/oracle/backup';

Starting Duplicate Db at 01-OCT-13

contents of Memory Script:
{
   restore clone spfile to  '/opt/app/oracle/product/11.2.0/ent11.2.0.3/dbs/spfiledupdb.ora' from
 '/home/oracle/backup/o1_mf_s_827667431_94o9gqww_.bkp';
   sql clone "alter system set spfile= ''/opt/app/oracle/product/11.2.0/ent11.2.0.3/dbs/spfiledupdb.ora''";
}
executing Memory Script

Starting restore at 01-OCT-13
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=216 device type=DISK

channel ORA_AUX_DISK_1: restoring spfile from AUTOBACKUP /home/oracle/backup/o1_mf_s_827667431_94o9gqww_.bkp
channel ORA_AUX_DISK_1: SPFILE restore from AUTOBACKUP complete
Finished restore at 01-OCT-13

sql statement: alter system set spfile= ''/opt/app/oracle/product/11.2.0/ent11.2.0.3/dbs/spfiledupdb.ora''

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''DUPDB'' comment=
 ''duplicate'' scope=spfile";
   sql clone "alter system set  audit_file_dest =
 ''/opt/app/oracle/admin/dupdb/adump'' comment=
 '''' scope=spfile";
   sql clone "alter system set  control_files =
 ''/opt/app/oracle/oradata/DUPDB/controlfile/o1_mf_696xsob5_.ctl'', ''/opt/app/oracle/flash_recovery_area/DUPDB/controlfile/o1_mf_696xsoc9_.ctl'' comment=
 '''' scope=spfile";
   sql clone "alter system set  dispatchers =
 ''(PROTOCOL=TCP) (SERVICE=dupdbXDB)'' comment=
 '''' scope=spfile";
   sql clone "alter system set  db_create_file_dest =
 ''/opt/app/oracle/oradata'' comment=
 '''' scope=spfile";
   sql clone "alter system set  db_recovery_file_dest =
 ''/opt/app/oracle/flash_recovery_area'' comment=
 '''' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

sql statement: alter system set  db_name =  ''DUPDB'' comment= ''duplicate'' scope=spfile

sql statement: alter system set  audit_file_dest =  ''/opt/app/oracle/admin/dupdb/adump'' comment= '''' scope=spfile

sql statement: alter system set  control_files =  ''/opt/app/oracle/oradata/DUPDB/controlfile/o1_mf_696xsob5_.ctl'', ''/opt/app/oracle/flash_recovery_area/DUPDB/controlfile/o1_mf_696xsoc9_.ctl'' comment= '''' scope=spfile

sql statement: alter system set  dispatchers =  ''(PROTOCOL=TCP) (SERVICE=dupdbXDB)'' comment= '''' scope=spfile

sql statement: alter system set  db_create_file_dest =  ''/opt/app/oracle/oradata'' comment= '''' scope=spfile

sql statement: alter system set  db_recovery_file_dest =  ''/opt/app/oracle/flash_recovery_area'' comment= '''' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area    3758010368 bytes

Fixed Size                     2233960 bytes
Variable Size                738199960 bytes
Database Buffers            2986344448 bytes
Redo Buffers                  31232000 bytes

contents of Memory Script:
{
   sql clone "alter system set  control_files =
  ''/opt/app/oracle/oradata/DUPDB/controlfile/o1_mf_696xsob5_.ctl'', ''/opt/app/oracle/flash_recovery_area/DUPDB/controlfile/o1_mf_696xsoc9_.ctl'' comment=
 ''Set by RMAN'' scope=spfile";
   sql clone "alter system set  db_name =
 ''ENT11G2'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   sql clone "alter system set  db_unique_name =
 ''DUPDB'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   shutdown clone immediate;
   startup clone force nomount
   restore clone primary controlfile from  '/home/oracle/backup/o1_mf_s_827667431_94o9gqww_.bkp';
   alter clone database mount;
}
executing Memory Script

sql statement: alter system set  control_files =   ''/opt/app/oracle/oradata/DUPDB/controlfile/o1_mf_696xsob5_.ctl'', ''/opt/app/oracle/flash_recovery_area/DUPDB/controlfile/o1_mf_696xsoc9_.ctl'' comment= ''Set by RMAN'' scope=spfile

sql statement: alter system set  db_name =  ''ENT11G2'' comment= ''Modified by RMAN duplicate'' scope=spfile

sql statement: alter system set  db_unique_name =  ''DUPDB'' comment= ''Modified by RMAN duplicate'' scope=spfile

Oracle instance shut down

Oracle instance started

Total System Global Area    3758010368 bytes

Fixed Size                     2233960 bytes
Variable Size                738199960 bytes
Database Buffers            2986344448 bytes
Redo Buffers                  31232000 bytes

Starting restore at 01-OCT-13
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=958 device type=DISK

channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:02
output file name=/opt/app/oracle/oradata/DUPDB/controlfile/o1_mf_696xsob5_.ctl
output file name=/opt/app/oracle/flash_recovery_area/DUPDB/controlfile/o1_mf_696xsoc9_.ctl
Finished restore at 01-OCT-13

database mounted
released channel: ORA_AUX_DISK_1
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=958 device type=DISK

contents of Memory Script:
{
   set until scn  51111288;
   set newname for clone datafile  1 to new;
   set newname for clone datafile  2 to new;
   set newname for clone datafile  3 to new;
   set newname for clone datafile  4 to new;
   set newname for clone datafile  5 to new;
   set newname for clone datafile  6 to new;
   set newname for clone datafile  7 to new;
   set newname for clone datafile  8 to new;
   set newname for clone datafile  9 to new;
   set newname for clone datafile  10 to new;
   set newname for clone datafile  11 to new;
   set newname for clone datafile  12 to new;
   set newname for clone datafile  13 to new;
   set newname for clone datafile  14 to new;
   restore
   clone database
   ;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 01-OCT-13
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /opt/app/oracle/oradata/DUPDB/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00002 to /opt/app/oracle/oradata/DUPDB/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /opt/app/oracle/oradata/DUPDB/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00004 to /opt/app/oracle/oradata/DUPDB/datafile/o1_mf_users_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00005 to /opt/app/oracle/oradata/DUPDB/datafile/o1_mf_example_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00006 to /opt/app/oracle/oradata/DUPDB/datafile/o1_mf_velbo_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00007 to /opt/app/oracle/oradata/DUPDB/datafile/o1_mf_datatbs_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00008 to /opt/app/oracle/oradata/DUPDB/datafile/o1_mf_index_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00009 to /opt/app/oracle/oradata/DUPDB/datafile/o1_mf_lobs_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00010 to /opt/app/oracle/oradata/DUPDB/datafile/o1_mf_test_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00011 to /opt/app/oracle/oradata/DUPDB/datafile/o1_mf_test_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00012 to /opt/app/oracle/oradata/DUPDB/datafile/o1_mf_lobtbs_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00013 to /opt/app/oracle/oradata/DUPDB/datafile/o1_mf_lobtbs11_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00014 to /opt/app/oracle/oradata/DUPDB/datafile/o1_mf_2ktbs_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /home/oracle/backup/o1_mf_nnndf_TAG20131001T113645_94o9fxjn_.bkp
channel ORA_AUX_DISK_1: piece handle=/home/oracle/backup/o1_mf_nnndf_TAG20131001T113645_94o9fxjn_.bkp tag=TAG20131001T113645
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:02:15
Finished restore at 01-OCT-13

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=15 STAMP=827672497 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_system_94og8tfx_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=16 STAMP=827672497 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_sysaux_94og8tf6_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=17 STAMP=827672497 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_undotbs1_94og8tdb_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=18 STAMP=827672497 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_users_94og8tgf_.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=19 STAMP=827672497 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_example_94og8tgo_.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=20 STAMP=827672497 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_velbo_94og8tfg_.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=21 STAMP=827672497 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_datatbs_94og939j_.dbf
datafile 8 switched to datafile copy
input datafile copy RECID=22 STAMP=827672497 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_index_94og8tfp_.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=23 STAMP=827672497 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_lobs_94og8wxx_.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=24 STAMP=827672497 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_test_94og91sh_.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=25 STAMP=827672497 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_test_94og93d0_.dbf
datafile 12 switched to datafile copy
input datafile copy RECID=26 STAMP=827672497 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_lobtbs_94og8yhj_.dbf
datafile 13 switched to datafile copy
input datafile copy RECID=27 STAMP=827672497 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_lobtbs11_94og8zw0_.dbf
datafile 14 switched to datafile copy
input datafile copy RECID=28 STAMP=827672497 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_2ktbs_94og8tg5_.dbf

contents of Memory Script:
{
   set until scn  51111288;
   recover
   clone database
    delete archivelog
   ;
}
executing Memory Script

executing command: SET until clause

Starting recover at 01-OCT-13
using channel ORA_AUX_DISK_1

starting media recovery

channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=503
channel ORA_AUX_DISK_1: reading from backup piece /home/oracle/backup/o1_mf_annnn_TAG20131001T113710_94o9gpnp_.bkp
channel ORA_AUX_DISK_1: piece handle=/home/oracle/backup/o1_mf_annnn_TAG20131001T113710_94o9gpnp_.bkp tag=TAG20131001T113710
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/opt/app/oracle/flash_recovery_area/DUPDB/archivelog/2013_10_01/o1_mf_1_503_94ogf2sd_.arc thread=1 sequence=503
channel clone_default: deleting archived log(s)
archived log file name=/opt/app/oracle/flash_recovery_area/DUPDB/archivelog/2013_10_01/o1_mf_1_503_94ogf2sd_.arc RECID=1 STAMP=827672498
media recovery complete, elapsed time: 00:00:01
Finished recover at 01-OCT-13
Oracle instance started

Total System Global Area    3758010368 bytes

Fixed Size                     2233960 bytes
Variable Size                738199960 bytes
Database Buffers            2986344448 bytes
Redo Buffers                  31232000 bytes

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''DUPDB'' comment=
 ''Reset to original value by RMAN'' scope=spfile";
   sql clone "alter system reset  db_unique_name scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

sql statement: alter system set  db_name =  ''DUPDB'' comment= ''Reset to original value by RMAN'' scope=spfile

sql statement: alter system reset  db_unique_name scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area    3758010368 bytes

Fixed Size                     2233960 bytes
Variable Size                738199960 bytes
Database Buffers            2986344448 bytes
Redo Buffers                  31232000 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "DUPDB" RESETLOGS ARCHIVELOG
  MAXLOGFILES     16
  MAXLOGMEMBERS      3
  MAXDATAFILES      100
  MAXINSTANCES     8
  MAXLOGHISTORY      584
 LOGFILE
  GROUP   1  SIZE 50 M ,
  GROUP   2  SIZE 50 M ,
  GROUP   3  SIZE 50 M
 DATAFILE
  '/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_system_94og8tfx_.dbf'
 CHARACTER SET AL32UTF8


contents of Memory Script:
{
   set newname for clone tempfile  1 to new;
   set newname for clone tempfile  2 to new;
   set newname for clone tempfile  3 to new;
   switch clone tempfile all;
   catalog clone datafilecopy  "/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_sysaux_94og8tf6_.dbf",
 "/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_undotbs1_94og8tdb_.dbf",
 "/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_users_94og8tgf_.dbf",
 "/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_example_94og8tgo_.dbf",
 "/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_velbo_94og8tfg_.dbf",
 "/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_datatbs_94og939j_.dbf",
 "/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_index_94og8tfp_.dbf",
 "/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_lobs_94og8wxx_.dbf",
 "/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_test_94og91sh_.dbf",
 "/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_test_94og93d0_.dbf",
 "/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_lobtbs_94og8yhj_.dbf",
 "/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_lobtbs11_94og8zw0_.dbf",
 "/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_2ktbs_94og8tg5_.dbf";
   switch clone datafile all;
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /opt/app/oracle/oradata/DUPDB/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 2 to /opt/app/oracle/oradata/DUPDB/datafile/o1_mf_txtemp_%u_.tmp in control file
renamed tempfile 3 to /opt/app/oracle/oradata/DUPDB/datafile/o1_mf_temp5m_%u_.tmp in control file

cataloged datafile copy
datafile copy file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_sysaux_94og8tf6_.dbf RECID=1 STAMP=827672513
cataloged datafile copy
datafile copy file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_undotbs1_94og8tdb_.dbf RECID=2 STAMP=827672513
cataloged datafile copy
datafile copy file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_users_94og8tgf_.dbf RECID=3 STAMP=827672513
cataloged datafile copy
datafile copy file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_example_94og8tgo_.dbf RECID=4 STAMP=827672513
cataloged datafile copy
datafile copy file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_velbo_94og8tfg_.dbf RECID=5 STAMP=827672513
cataloged datafile copy
datafile copy file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_datatbs_94og939j_.dbf RECID=6 STAMP=827672513
cataloged datafile copy
datafile copy file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_index_94og8tfp_.dbf RECID=7 STAMP=827672513
cataloged datafile copy
datafile copy file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_lobs_94og8wxx_.dbf RECID=8 STAMP=827672513
cataloged datafile copy
datafile copy file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_test_94og91sh_.dbf RECID=9 STAMP=827672513
cataloged datafile copy
datafile copy file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_test_94og93d0_.dbf RECID=10 STAMP=827672513
cataloged datafile copy
datafile copy file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_lobtbs_94og8yhj_.dbf RECID=11 STAMP=827672513
cataloged datafile copy
datafile copy file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_lobtbs11_94og8zw0_.dbf RECID=12 STAMP=827672513
cataloged datafile copy
datafile copy file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_2ktbs_94og8tg5_.dbf RECID=13 STAMP=827672513

datafile 2 switched to datafile copy
input datafile copy RECID=1 STAMP=827672513 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_sysaux_94og8tf6_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=2 STAMP=827672513 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_undotbs1_94og8tdb_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=3 STAMP=827672513 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_users_94og8tgf_.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=4 STAMP=827672513 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_example_94og8tgo_.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=5 STAMP=827672513 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_velbo_94og8tfg_.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=6 STAMP=827672513 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_datatbs_94og939j_.dbf
datafile 8 switched to datafile copy
input datafile copy RECID=7 STAMP=827672513 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_index_94og8tfp_.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=8 STAMP=827672513 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_lobs_94og8wxx_.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=9 STAMP=827672513 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_test_94og91sh_.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=10 STAMP=827672513 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_test_94og93d0_.dbf
datafile 12 switched to datafile copy
input datafile copy RECID=11 STAMP=827672513 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_lobtbs_94og8yhj_.dbf
datafile 13 switched to datafile copy
input datafile copy RECID=12 STAMP=827672513 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_lobtbs11_94og8zw0_.dbf
datafile 14 switched to datafile copy
input datafile copy RECID=13 STAMP=827672513 file name=/opt/app/oracle/oradata/DUPDB/datafile/o1_mf_2ktbs_94og8tg5_.dbf

contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

database opened
Finished Duplicate Db at 01-OCT-13
7. Verify the database open mode and role
SQL> select name,created,open_mode,database_role from v$database;

NAME      CREATED   OPEN_MODE            DATABASE_ROLE
--------- --------- -------------------- ----------------
DUPDB     01-OCT-13 READ WRITE           PRIMARY
This concludes the database duplication without target or catalog connections.

Related Post
RAC to Single Instance Active Database Duplication

Useful Metalink Notes
Oracle10G RMAN Database Duplication [ID 259694.1]
Creating a Duplicate Database on a New Host. [ID 388431.1]
RMAN 'Duplicate Database' Feature in Oracle9i / 10G and 11.1 [ID 228257.1]
Performing duplicate database with ASM/OMF/RMAN [ID 340848.1]
RMAN 11GR2 : DUPLICATE WITHOUT CONNECTING TO TARGET DATABASE [ID 874352.1]
RMAN Duplicate Database From RAC ASM To RAC ASM [ID 461479.1]


Update 17 August 2015
Duplicating a database from ASM to a file system with OMF. The db_name is asangapu and the db_unique_name is asangauat. A db_name can only be 8 characters long and the duplication fails if db_name is set to anything longer. Set ORACLE_SID to the db_unique_name value; the directory paths will also contain the db_unique_name.
cat initasangauat.ora
db_name='asangapu'

export ORACLE_SID=asangauat

mkdir -p /opt/app/oracle/admin/asangauat/adump
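The 8-character db_name limit can be checked in the shell before attempting the duplication; the names below are the ones used in this update:

```shell
# db_name is limited to 8 characters; db_unique_name may be longer.
db_name=asangapu
db_unique_name=asangauat
echo "db_name: ${#db_name} chars"               # 8, within the limit
echo "db_unique_name: ${#db_unique_name} chars" # 9, too long for a db_name
```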
Creation of the control files fails if the directory path doesn't exist.
RMAN-03015: error occurred in stored script Memory Script
ORA-19870: error while restoring backup piece /opt/app/backup/full_c-4271488948-20150817-17.ctl
ORA-19504: failed to create file "/opt/app/oracle/oradata/ASANGAUAT/controlfile/current.256.867683577"
ORA-27040: file create error, unable to create file
Once the directory paths are created there is no issue.
mkdir -p /opt/app/oracle/oradata/ASANGAUAT/controlfile
mkdir -p /opt/app/oracle/fast_recovery_area/ASANGAUAT/controlfile


channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/opt/app/oracle/oradata/ASANGAUAT/controlfile/current.256.867683577
output file name=/opt/app/oracle/fast_recovery_area/ASANGAUAT/controlfile/current.256.867683577
Finished restore at 17-AUG-15
However, no such problems occur for the datafile restore.
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /opt/app/oracle/oradata/ASANGAUAT/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00002 to /opt/app/oracle/oradata/ASANGAUAT/datafile/o1_mf_sysaux_%u_.dbf
The full duplication command is given below, where the source database is travelu.
DUPLICATE TARGET DATABASE
TO asangapu
SPFILE PARAMETER_VALUE_CONVERT 'travelu', 'ASANGAUAT', '+DATA', '/opt/app/oracle/oradata', 
'+FLASH','/opt/app/oracle/fast_recovery_area','TRAVELU','ASANGAUAT'
set db_create_file_dest='/opt/app/oracle/oradata'
set db_recovery_file_dest='/opt/app/oracle/fast_recovery_area'
set diagnostic_dest='/opt/app/oracle'
set db_unique_name='asangauat'
set audit_file_dest='/opt/app/oracle/admin/asangapuat/adump'
BACKUP LOCATION '/opt/app/backup';