Monday, January 21, 2013

ORA-39127: unexpected error from call to export_string :=WMSYS.LT_EXPORT_PKG.SCHEMA_INFO_EXP

The following errors were seen while doing an export (expdp) on an 11gR2 Standard Edition database.
ORA-39127: unexpected error from call to export_string :=WMSYS.LT_EXPORT_PKG.SCHEMA_INFO_EXP('REPOS',0,1,'11.02.00.00.00',newblock) 
ORA-04063: package body "WMSYS.LT_EXPORT_PKG" has errors
ORA-06508: PL/SQL: could not find program unit being called: "WMSYS.LT_EXPORT_PKG"
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_METADATA", line 9427
Querying dba_registry showed the OWM (Oracle Workspace Manager) component with status INVALID.
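The component status can be checked with a simple dba_registry query along these lines (run as a DBA user):
select comp_id, comp_name, status, version from dba_registry where comp_id = 'OWM';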
In this database, execute on utl_file had been revoked from public for security reasons, and 1331906.1 suggests granting execute on utl_file to wmsys, but this did not resolve the issue.
Running the call below (from 1263697.1)
dbms_registry.loading('OWM', 'Oracle Workspace Manager', 'VALIDATE_OWM', 'WMSYS');
didn't help either.
Finally, reinstalling OWM (731576.1) by running
$ORACLE_HOME/rdbms/admin/owminst.plb
fixed the problem: OWM status became VALID in dba_registry and expdp continued without any errors.



Useful metalink notes
ORA-39705 When Removing Oracle Workspace Manager [ID 1263697.1]
Errors During Full DataPump Export (EXPDP) In 11gR2 [ID 1331906.1]
How do you manually install/deinstall Oracle Workspace Manager [ID 731576.1]

Related Posts
ORA-39127: unexpected error from call to export_string :=SYS.DBMS_CUBE_EXP.SCHEMA_INFO_EXP while Exporting
Dictionary Scripts

Sunday, January 20, 2013

January 2013 PSU (11.2.0.3) Manual Steps for Apply/Rollback Patch vs OPatch Auto

It seems the future of patch applying is "auto", if the January 2013 PSU for 11.2.0.3 is anything to go by. First, the manual steps have been moved out of the main readme.html that comes with the patch into a supplemental document (1494646.1), so there are more documents to go through instead of having all the necessary information in one place.
Secondly, from GI PSU 11.2.0.3.5 onwards, the GI PSU has separate GI and DB portions, which means there is a patch number for the GI component and another for the DB component. These component numbers must be substituted into the commands used for manually applying the patch, and these commands now come in generic form, such as
<GI_HOME>/OPatch/opatch napply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/<GI_components_number>
<GI_HOME>/OPatch/opatch apply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/<DB_PSU_number>
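Rollback follows a similar pattern. With opatch auto it is a single command, while the manual route rolls back each patch number individually; a generic sketch (check the supplemental note for the exact steps for each home) would be:
<GI_HOME>/OPatch/opatch auto <UNZIPPED_PATCH_LOCATION> -rollback -ocmrf ocm.rsp
<GI_HOME>/OPatch/opatch rollback -local -id <patch_number> -oh <GI_HOME>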
Manually applying the patch gives the DBA control and an opportunity to closely examine what goes on.
"opatch auto" has its merits as well: one command patches both the GI and DB homes, and it brings down the clusterware stack and starts it back up once finished. One drawback is that even if one component fails, the rest of the patching continues, and the failure has to be revisited at the end. Manual patching is more interactive, allowing any errors/failures to be corrected before continuing with the rest of the patching.
Auto patching is run as root and switches (via su) to the appropriate user (grid or oracle) for the relevant home as it applies the patch. opatch auto also needs an OCM response file: How To Create An OCM Response File For Opatch Silent Installation [ID 966023.1]
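The response file can be generated beforehand with the emocmrsp utility that ships under the OPatch directory, run as the home owner, with something like (it prompts for MOS details, which can be left blank):
$ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output ocm.rsp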
Output on the shell where opatch auto is run shows only the outcome of each patch's apply process (pass or fail).
[root@rhel6m1 patches]# /opt/app/11.2.0/grid/OPatch/opatch auto /usr/local/patches -ocmrf ocm.rsp
Executing /usr/bin/perl /opt/app/11.2.0/grid/OPatch/crs/patch112.pl -patchdir /usr/local -patchn patches -ocmrf ocm.rsp -paramfile /opt/app/11.2.0/grid/crs/install/crsconfig_params
opatch auto log file location is /opt/app/11.2.0/grid/OPatch/crs/../../cfgtoollogs/opatchauto2013-01-17_11-33-35.log
Detected Oracle Clusterware install
Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
patch /usr/local/patches/15876003/custom/server/15876003  apply successful for home  /opt/app/oracle/product/11.2.0/dbhome_1
patch /usr/local/patches/14727310  apply successful for home  /opt/app/oracle/product/11.2.0/dbhome_1
Successfully unlock /opt/app/11.2.0/grid
patch /usr/local/patches/15876003  apply successful for home  /opt/app/11.2.0/grid
patch /usr/local/patches/14727310  apply successful for home  /opt/app/11.2.0/grid
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9312: Existing ADVM/ACFS installation detected.
ACFS-9314: Removing previous ADVM/ACFS installation.
ACFS-9315: Previous ADVM/ACFS components successfully removed.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9154: Loading 'oracleoks.ko' driver.
ACFS-9154: Loading 'oracleadvm.ko' driver.
ACFS-9154: Loading 'oracleacfs.ko' driver.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4123: Oracle High Availability Services has been started.
After the apply, query the inventory for the PSU
$GI_HOME/OPatch/opatch lsinventory -local -bugs_fixed | grep "GRID INFRASTRUCTURE PATCH SET UPDATE"
$ORACLE_HOME/OPatch/opatch lsinventory -local -bugs_fixed | grep "DATABASE PATCH SET UPDATE"
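lsinventory only reflects the binary side of the patch; whether the PSU's SQL changes (catbundle) have also been applied to a given database can be checked from the registry history with a query along these lines:
select action_time, action, version, id, comments from dba_registry_history order by action_time;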
It all comes down to personal preference, manual or auto, but it won't be a surprise if eventually "auto" becomes the "recommended" way while manual is left for "troubleshooting".




Below is a full log of applying GI PSU 11.2.0.3.5 on a RAC system running on RHEL6, which shows the behind-the-scenes work of opatch auto, such as alternating between the grid and oracle users.
2013-01-17 11:33:35: Using Oracle CRS home /opt/app/11.2.0/grid
2013-01-17 11:33:35: Checking parameters from paramfile /opt/app/11.2.0/grid/crs/install/crsconfig_params to validate installer variables
2013-01-17 11:33:35: The configuration parameter file /opt/app/11.2.0/grid/crs/install/crsconfig_params is valid
2013-01-17 11:33:35: ### Printing the configuration values from files:
2013-01-17 11:33:35:    /opt/app/11.2.0/grid/crs/install/crsconfig_params
2013-01-17 11:33:35:    /opt/app/11.2.0/grid/OPatch/crs/s_crsconfig_defs
2013-01-17 11:33:35: ASM_AU_SIZE=1
2013-01-17 11:33:35: ASM_DISCOVERY_STRING=/dev/sd*
2013-01-17 11:33:35: ASM_DISKS=/dev/sdb1,/dev/sdc1,/dev/sdd1
2013-01-17 11:33:35: ASM_DISK_GROUP=CLUSTER_DG
2013-01-17 11:33:35: ASM_REDUNDANCY=NORMAL
2013-01-17 11:33:35: ASM_SPFILE=
2013-01-17 11:33:35: ASM_UPGRADE=false
2013-01-17 11:33:35: CLSCFG_MISSCOUNT=
2013-01-17 11:33:35: CLUSTER_GUID=
2013-01-17 11:33:35: CLUSTER_NAME=rhel6m-cluster
2013-01-17 11:33:35: CRFHOME="/opt/app/11.2.0/grid"
2013-01-17 11:33:35: CRS_LIMIT_CORE=unlimited
2013-01-17 11:33:35: CRS_LIMIT_MEMLOCK=unlimited
2013-01-17 11:33:35: CRS_LIMIT_OPENFILE=65536
2013-01-17 11:33:35: CRS_LIMIT_STACK=2048
2013-01-17 11:33:35: CRS_NODEVIPS='rhel6m1-vip/255.255.255.0/eth0,rhel6m2-vip/255.255.255.0/eth0'
2013-01-17 11:33:35: CRS_STORAGE_OPTION=1
2013-01-17 11:33:35: CSS_LEASEDURATION=400
2013-01-17 11:33:35: DIRPREFIX=
2013-01-17 11:33:35: DISABLE_OPROCD=0
2013-01-17 11:33:35: EXTERNAL_ORACLE=/opt/oracle
2013-01-17 11:33:35: EXTERNAL_ORACLE_BIN=/opt/oracle/bin
2013-01-17 11:33:35: GNS_ADDR_LIST=
2013-01-17 11:33:35: GNS_ALLOW_NET_LIST=
2013-01-17 11:33:35: GNS_CONF=false
2013-01-17 11:33:35: GNS_DENY_ITF_LIST=
2013-01-17 11:33:35: GNS_DENY_NET_LIST=
2013-01-17 11:33:35: GNS_DOMAIN_LIST=
2013-01-17 11:33:35: GPNPCONFIGDIR=/opt/app/11.2.0/grid
2013-01-17 11:33:35: GPNPGCONFIGDIR=/opt/app/11.2.0/grid
2013-01-17 11:33:35: GPNP_PA=
2013-01-17 11:33:35: HOST_NAME_LIST=rhel6m1,rhel6m2
2013-01-17 11:33:35: ID=/etc/init.d
2013-01-17 11:33:35: INIT=/sbin/init
2013-01-17 11:33:35: ISROLLING=true
2013-01-17 11:33:35: IT=/etc/inittab
2013-01-17 11:33:35: JLIBDIR=/opt/app/11.2.0/grid/jlib
2013-01-17 11:33:35: JREDIR=/opt/app/11.2.0/grid/jdk/jre/
2013-01-17 11:33:35: LANGUAGE_ID=AMERICAN_AMERICA.AL32UTF8
2013-01-17 11:33:35: MSGFILE=/var/adm/messages
2013-01-17 11:33:35: NETWORKS="eth0"/192.168.0.0:public,"eth1"/192.168.1.0:cluster_interconnect
2013-01-17 11:33:35: NEW_HOST_NAME_LIST=
2013-01-17 11:33:35: NEW_NODEVIPS='rhel6m1-vip/255.255.255.0/eth0,rhel6m2-vip/255.255.255.0/eth0'
2013-01-17 11:33:35: NEW_NODE_NAME_LIST=
2013-01-17 11:33:35: NEW_PRIVATE_NAME_LIST=
2013-01-17 11:33:35: NODELIST=rhel6m1,rhel6m2
2013-01-17 11:33:35: NODE_NAME_LIST=rhel6m1,rhel6m2
2013-01-17 11:33:35: OCFS_CONFIG=
2013-01-17 11:33:35: OCRCONFIG=/etc/oracle/ocr.loc
2013-01-17 11:33:35: OCRCONFIGDIR=/etc/oracle
2013-01-17 11:33:35: OCRID=
2013-01-17 11:33:35: OCRLOC=ocr.loc
2013-01-17 11:33:35: OCR_LOCATIONS=NO_VAL
2013-01-17 11:33:35: OLASTGASPDIR=/etc/oracle/lastgasp
2013-01-17 11:33:35: OLD_CRS_HOME=
2013-01-17 11:33:35: OLRCONFIG=/etc/oracle/olr.loc
2013-01-17 11:33:35: OLRCONFIGDIR=/etc/oracle
2013-01-17 11:33:35: OLRLOC=olr.loc
2013-01-17 11:33:35: OPROCDCHECKDIR=/etc/oracle/oprocd/check
2013-01-17 11:33:35: OPROCDDIR=/etc/oracle/oprocd
2013-01-17 11:33:35: OPROCDFATALDIR=/etc/oracle/oprocd/fatal
2013-01-17 11:33:35: OPROCDSTOPDIR=/etc/oracle/oprocd/stop
2013-01-17 11:33:35: ORACLE_BASE=/opt/app/oracle
2013-01-17 11:33:35: ORACLE_HOME=/opt/app/11.2.0/grid
2013-01-17 11:33:35: ORACLE_OWNER=grid
2013-01-17 11:33:35: ORA_ASM_GROUP=asmadmin
2013-01-17 11:33:35: ORA_DBA_GROUP=oinstall
2013-01-17 11:33:35: PRIVATE_NAME_LIST=
2013-01-17 11:33:35: RCALLDIR=/etc/rc.d/rc0.d /etc/rc.d/rc1.d /etc/rc.d/rc2.d /etc/rc.d/rc3.d /etc/rc.d/rc4.d /etc/rc.d/rc5.d /etc/rc.d/rc6.d
2013-01-17 11:33:35: RCKDIR=/etc/rc.d/rc0.d /etc/rc.d/rc1.d /etc/rc.d/rc2.d /etc/rc.d/rc3.d /etc/rc.d/rc4.d /etc/rc.d/rc6.d
2013-01-17 11:33:35: RCSDIR=/etc/rc.d/rc3.d /etc/rc.d/rc5.d
2013-01-17 11:33:35: RC_KILL=K15
2013-01-17 11:33:35: RC_KILL_OLD=K96
2013-01-17 11:33:35: RC_KILL_OLD2=K19
2013-01-17 11:33:35: RC_START=S96
2013-01-17 11:33:35: REUSEDG=false
2013-01-17 11:33:35: SCAN_NAME=rhel6m-scan
2013-01-17 11:33:35: SCAN_PORT=1521
2013-01-17 11:33:35: SCRBASE=/etc/oracle/scls_scr
2013-01-17 11:33:35: SILENT=false
2013-01-17 11:33:35: SO_EXT=so
2013-01-17 11:33:35: SRVCFGLOC=srvConfig.loc
2013-01-17 11:33:35: SRVCONFIG=/var/opt/oracle/srvConfig.loc
2013-01-17 11:33:35: SRVCONFIGDIR=/var/opt/oracle
2013-01-17 11:33:35: TZ=Europe/London
2013-01-17 11:33:35: USER_IGNORED_PREREQ=false
2013-01-17 11:33:35: VNDR_CLUSTER=false
2013-01-17 11:33:35: VOTING_DISKS=NO_VAL
2013-01-17 11:33:35: ### Printing other configuration values ###
2013-01-17 11:33:35: CLSCFG_EXTRA_PARMS=
2013-01-17 11:33:35: HAS_GROUP=oinstall
2013-01-17 11:33:35: HAS_USER=root
2013-01-17 11:33:35: HOST=rhel6m1
2013-01-17 11:33:35: OLR_DIRECTORY=/opt/app/11.2.0/grid/cdata
2013-01-17 11:33:35: OLR_LOCATION=/opt/app/11.2.0/grid/cdata/rhel6m1.olr
2013-01-17 11:33:35: ORA_CRS_HOME=/opt/app/11.2.0/grid
2013-01-17 11:33:35: SUPERUSER=root
2013-01-17 11:33:35: UNLOCK=0
2013-01-17 11:33:35: VF_DISCOVERY_STRING=
2013-01-17 11:33:35: crscfg_trace=1
2013-01-17 11:33:35: crscfg_trace_file=/opt/app/11.2.0/grid/OPatch/crs/../../cfgtoollogs/opatchauto2013-01-17_11-33-35.log
2013-01-17 11:33:35: hosts=
2013-01-17 11:33:35: osdfile=/opt/app/11.2.0/grid/OPatch/crs/s_crsconfig_defs
2013-01-17 11:33:35: parameters_valid=1
2013-01-17 11:33:35: paramfile=/opt/app/11.2.0/grid/crs/install/crsconfig_params
2013-01-17 11:33:35: platform_family=unix
2013-01-17 11:33:35: srvctl_trc_suff=0
2013-01-17 11:33:35: user_is_superuser=1
2013-01-17 11:33:35: ### Printing of configuration values complete ###
2013-01-17 11:33:35: No -patchfile specified, assuming the patch is already uncompressed
2013-01-17 11:33:35: Oracle user for /opt/app/11.2.0/grid is grid
2013-01-17 11:33:35: silent mode option is -silent -ocmrf ocm.rsp
2013-01-17 11:33:35: Bundle.xml content is <bundle type = "GI_BUNDLE">
  <entities>
     <entity location="15876003">
       <target type="crs"/>
       <target type="siha"/>
     </entity>
     <entity location="15876003/custom/server/15876003">
       <target type="rac" />
       <target type="sidb"/>
     </entity>
     <entity location="14727310">
       <target type="crs"/>
       <target type="rac"/>
       <target type="sidb"/>
       <target type="siha"/>
     </entity>
   </entities>
 </bundle>

2013-01-17 11:33:35: The patch ids are 15876003 14727310
2013-01-17 11:33:35: The patch ids are 15876003 14727310
2013-01-17 11:33:35: Running as user grid: /opt/app/11.2.0/grid/OPatch/opatch query -get_patch_type /usr/local/patches/15876003 -oh /opt/app/11.2.0/grid
2013-01-17 11:33:35: s_run_as_user2: Running /bin/su grid -c ' /opt/app/11.2.0/grid/OPatch/opatch query -get_patch_type /usr/local/patches/15876003 -oh /opt/app/11.2.0/grid '
2013-01-17 11:33:39: Removing file /tmp/fileSKWEXl
2013-01-17 11:33:39: Successfully removed file: /tmp/fileSKWEXl
2013-01-17 11:33:39: /bin/su successfully executed

2013-01-17 11:33:39: output is  This patch is a "legacy_bundle_top" patch.

2013-01-17 11:33:39: Patch type is "legacy_bundle_top"
2013-01-17 11:33:39: Running as user grid: /opt/app/11.2.0/grid/OPatch/opatch query -get_patch_type /usr/local/patches/14727310 -oh /opt/app/11.2.0/grid
2013-01-17 11:33:39: s_run_as_user2: Running /bin/su grid -c ' /opt/app/11.2.0/grid/OPatch/opatch query -get_patch_type /usr/local/patches/14727310 -oh /opt/app/11.2.0/grid '
2013-01-17 11:33:40: Removing file /tmp/file1oyqPq
2013-01-17 11:33:40: Successfully removed file: /tmp/file1oyqPq
2013-01-17 11:33:40: /bin/su exited with rc=0
 75
2013-01-17 11:33:40: output is 
2013-01-17 11:33:40: Patch type is 
2013-01-17 11:33:40: GI patches are /usr/local/patches/15876003 /usr/local/patches/14727310
2013-01-17 11:33:40: DB patches are /usr/local/patches/15876003/custom/server/15876003 /usr/local/patches/14727310
2013-01-17 11:33:40: Running /opt/app/11.2.0/grid/bin/crsctl check cluster -n rhel6m1
2013-01-17 11:33:40: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl check cluster -n rhel6m1
2013-01-17 11:33:40: Command output:
>  **************************************************************
>  rhel6m1:
>  CRS-4537: Cluster Ready Services is online
>  CRS-4529: Cluster Synchronization Services is online
>  CRS-4533: Event Manager is online
>  ************************************************************** 
>End Command output
2013-01-17 11:33:40: Looking for configured databases on node rhel6m1
2013-01-17 11:33:41: Databases configured on node rhel6m1 are: std11g2
2013-01-17 11:33:41: Determining ORACLE_HOME paths for configured databases
2013-01-17 11:33:41: Executing cmd: /opt/app/11.2.0/grid/bin/srvctl config database -d std11g2
2013-01-17 11:33:43: Command output:
>  Database unique name: std11g2
>  Database name: std11g2
>  Oracle home: /opt/app/oracle/product/11.2.0/dbhome_1
>  Oracle user: oracle
>  Spfile: +DATA/std11g2/spfilestd11g2.ora
>  Domain: 
>  Start options: open
>  Stop options: immediate
>  Database role: PRIMARY
>  Management policy: AUTOMATIC
>  Server pools: std11g2
>  Database instances: std11g21,std11g22
>  Disk Groups: DATA,FLASH
>  Mount point paths: 
>  Services: myservice
>  Type: RAC
>  Database is administrator managed 
>End Command output
2013-01-17 11:33:43: output is Oracle home: /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:33:43: Oracle home for database std11g2 is /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:33:43: Oracle Home /opt/app/oracle/product/11.2.0/dbhome_1 is configured with Database(s)-> std11g2
2013-01-17 11:33:43: Oracle user for /opt/app/oracle/product/11.2.0/dbhome_1 is oracle
2013-01-17 11:33:43: oracle home list is /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:33:43: Processing oracle home /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:33:43: Opening file /etc/oracle/ocr.loc
2013-01-17 11:33:43: Value (FALSE) is set for key=local_only
2013-01-17 11:33:43: Home type of /opt/app/oracle/product/11.2.0/dbhome_1 is DB
2013-01-17 11:33:43: Oracle user for /opt/app/oracle/product/11.2.0/dbhome_1 is oracle
2013-01-17 11:33:43: Oracle user for /opt/app/oracle/product/11.2.0/dbhome_1 is oracle
2013-01-17 11:33:43: Running as user oracle: /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch version -oh /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:33:43: s_run_as_user2: Running /bin/su oracle -c ' /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch version -oh /opt/app/oracle/product/11.2.0/dbhome_1 '
2013-01-17 11:33:44: Removing file /tmp/fileTZwL3G
2013-01-17 11:33:44: Successfully removed file: /tmp/fileTZwL3G
2013-01-17 11:33:44: /bin/su successfully executed

2013-01-17 11:33:44: opatch version in oracle home /opt/app/oracle/product/11.2.0/dbhome_1  is 11.2.0.3.0

2013-01-17 11:33:44: Running as user oracle: /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch util checkMinimumOPatchVersion -ph /usr/local/patches/15876003/custom/server/15876003 -version 11.2.0.3.0 -oh /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:33:44: s_run_as_user2: Running /bin/su oracle -c ' /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch util checkMinimumOPatchVersion -ph /usr/local/patches/15876003/custom/server/15876003 -version 11.2.0.3.0 -oh /opt/app/oracle/product/11.2.0/dbhome_1 '
2013-01-17 11:33:49: Removing file /tmp/filei5JgK0
2013-01-17 11:33:49: Successfully removed file: /tmp/filei5JgK0
2013-01-17 11:33:49: /bin/su successfully executed

2013-01-17 11:33:49: Running as user oracle: /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch util checkMinimumOPatchVersion -ph /usr/local/patches/14727310 -version 11.2.0.3.0 -oh /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:33:49: s_run_as_user2: Running /bin/su oracle -c ' /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch util checkMinimumOPatchVersion -ph /usr/local/patches/14727310 -version 11.2.0.3.0 -oh /opt/app/oracle/product/11.2.0/dbhome_1 '
2013-01-17 11:33:52: Removing file /tmp/filekjM3wv
2013-01-17 11:33:52: Successfully removed file: /tmp/filekjM3wv
2013-01-17 11:33:52: /bin/su successfully executed

2013-01-17 11:33:52: Status of opatch version check  for /opt/app/oracle/product/11.2.0/dbhome_1 is 1
2013-01-17 11:33:52: Opatch version check passed for oracle home  /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:33:52: Processing oracle home /opt/app/11.2.0/grid
2013-01-17 11:33:52: Opening file /etc/oracle/ocr.loc
2013-01-17 11:33:52: Value (FALSE) is set for key=local_only
2013-01-17 11:33:52: Home type of /opt/app/11.2.0/grid is CRS
2013-01-17 11:33:52: Oracle user for /opt/app/11.2.0/grid is grid
2013-01-17 11:33:52: Oracle user for /opt/app/11.2.0/grid is grid
2013-01-17 11:33:52: Running as user grid: /opt/app/11.2.0/grid/OPatch/opatch version -oh /opt/app/11.2.0/grid
2013-01-17 11:33:52: s_run_as_user2: Running /bin/su grid -c ' /opt/app/11.2.0/grid/OPatch/opatch version -oh /opt/app/11.2.0/grid '
2013-01-17 11:33:53: Removing file /tmp/fileQxFgZ8
2013-01-17 11:33:53: Successfully removed file: /tmp/fileQxFgZ8
2013-01-17 11:33:53: /bin/su successfully executed

2013-01-17 11:33:53: opatch version in oracle home /opt/app/11.2.0/grid  is 11.2.0.3.0

2013-01-17 11:33:53: Running as user grid: /opt/app/11.2.0/grid/OPatch/opatch util checkMinimumOPatchVersion -ph /usr/local/patches/15876003 -version 11.2.0.3.0 -oh /opt/app/11.2.0/grid
2013-01-17 11:33:53: s_run_as_user2: Running /bin/su grid -c ' /opt/app/11.2.0/grid/OPatch/opatch util checkMinimumOPatchVersion -ph /usr/local/patches/15876003 -version 11.2.0.3.0 -oh /opt/app/11.2.0/grid '
2013-01-17 11:33:57: Removing file /tmp/file1noRAO
2013-01-17 11:33:57: Successfully removed file: /tmp/file1noRAO
2013-01-17 11:33:57: /bin/su successfully executed

2013-01-17 11:33:57: Running as user grid: /opt/app/11.2.0/grid/OPatch/opatch util checkMinimumOPatchVersion -ph /usr/local/patches/14727310 -version 11.2.0.3.0 -oh /opt/app/11.2.0/grid
2013-01-17 11:33:57: s_run_as_user2: Running /bin/su grid -c ' /opt/app/11.2.0/grid/OPatch/opatch util checkMinimumOPatchVersion -ph /usr/local/patches/14727310 -version 11.2.0.3.0 -oh /opt/app/11.2.0/grid '
2013-01-17 11:34:00: Removing file /tmp/file3hDQCF
2013-01-17 11:34:00: Successfully removed file: /tmp/file3hDQCF
2013-01-17 11:34:00: /bin/su successfully executed

2013-01-17 11:34:00: Status of opatch version check  for /opt/app/11.2.0/grid is 1
2013-01-17 11:34:00: Opatch version check passed for oracle home  /opt/app/11.2.0/grid
2013-01-17 11:34:00: Opatch version check passed  for all oracle homes
2013-01-17 11:34:00: Opening file /etc/oracle/ocr.loc
2013-01-17 11:34:00: Value (FALSE) is set for key=local_only
2013-01-17 11:34:00: The cluster nodes are rhel6m1 rhel6m2
2013-01-17 11:34:00: checking if path /opt/app/oracle/product/11.2.0/dbhome_1 is shared
2013-01-17 11:34:00: Running as user grid: /opt/app/11.2.0/grid/bin/cluvfy comp ssa -t software -s /opt/app/oracle/product/11.2.0/dbhome_1 -n rhel6m1,rhel6m2 -display_status
2013-01-17 11:34:00: s_run_as_user2: Running /bin/su grid -c ' /opt/app/11.2.0/grid/bin/cluvfy comp ssa -t software -s /opt/app/oracle/product/11.2.0/dbhome_1 -n rhel6m1,rhel6m2 -display_status '
2013-01-17 11:35:02: Removing file /tmp/filejzzDOD
2013-01-17 11:35:02: Successfully removed file: /tmp/filejzzDOD
2013-01-17 11:35:02: /bin/su exited with rc=0
 1
2013-01-17 11:35:02: return code for shared check is 256
2013-01-17 11:35:02: output of sharedness check is 
 Verifying shared storage accessibility 
 
 Checking shared storage accessibility...
 
 "/opt/app/oracle/product/11.2.0/dbhome_1" is not shared
 
 
 Shared storage check failed on nodes "rhel6m2,rhel6m1"
 
 Verification of shared storage accessibility was unsuccessful on all the specified nodes. 
 NODE_STATUS::rhel6m2:VFAIL
 NODE_STATUS::rhel6m1:VFAIL
 OVERALL_STATUS::VFAIL

2013-01-17 11:35:02: the ishared value is 0
2013-01-17 11:35:02: The oracle home /opt/app/oracle/product/11.2.0/dbhome_1 is not shared
2013-01-17 11:35:02: Opening file /etc/oracle/ocr.loc
2013-01-17 11:35:02: Value (FALSE) is set for key=local_only
2013-01-17 11:35:02: The cluster nodes are rhel6m1 rhel6m2
2013-01-17 11:35:02: checking if path /opt/app/11.2.0/grid/crs/install is shared
2013-01-17 11:35:02: Running as user grid: /opt/app/11.2.0/grid/bin/cluvfy comp ssa -t software -s /opt/app/11.2.0/grid/crs/install -n rhel6m1,rhel6m2 -display_status
2013-01-17 11:35:02: s_run_as_user2: Running /bin/su grid -c ' /opt/app/11.2.0/grid/bin/cluvfy comp ssa -t software -s /opt/app/11.2.0/grid/crs/install -n rhel6m1,rhel6m2 -display_status '
2013-01-17 11:35:21: Removing file /tmp/file4Xaqdc
2013-01-17 11:35:21: Successfully removed file: /tmp/file4Xaqdc
2013-01-17 11:35:21: /bin/su exited with rc=0
 1
2013-01-17 11:35:21: return code for shared check is 256
2013-01-17 11:35:21: output of sharedness check is 
 Verifying shared storage accessibility 
 
 Checking shared storage accessibility...
 
 "/opt/app/11.2.0/grid/crs/install" is not shared
 
 
 Shared storage check failed on nodes "rhel6m2,rhel6m1"
 
 Verification of shared storage accessibility was unsuccessful on all the specified nodes. 
 NODE_STATUS::rhel6m2:VFAIL
 NODE_STATUS::rhel6m1:VFAIL
 OVERALL_STATUS::VFAIL

2013-01-17 11:35:21: the ishared value is 0
2013-01-17 11:35:21: The oracle home /opt/app/11.2.0/grid is not shared
2013-01-17 11:35:21: Processing oracle home /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:35:21: Opening file /etc/oracle/ocr.loc
2013-01-17 11:35:21: Value (FALSE) is set for key=local_only
2013-01-17 11:35:21: Home type of /opt/app/oracle/product/11.2.0/dbhome_1 is DB
2013-01-17 11:35:21: Oracle user for /opt/app/oracle/product/11.2.0/dbhome_1 is oracle
2013-01-17 11:35:21: Running as user oracle: /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch prereq CheckComponents -ph /usr/local/patches/15876003/custom/server/15876003 -oh /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:35:21: s_run_as_user2: Running /bin/su oracle -c ' /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch prereq CheckComponents -ph /usr/local/patches/15876003/custom/server/15876003 -oh /opt/app/oracle/product/11.2.0/dbhome_1 '
2013-01-17 11:35:24: Removing file /tmp/filedswEgx
2013-01-17 11:35:24: Successfully removed file: /tmp/filedswEgx
2013-01-17 11:35:24: /bin/su successfully executed

2013-01-17 11:35:24: Running as user oracle: /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch prereq CheckComponents -ph /usr/local/patches/14727310 -oh /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:35:24: s_run_as_user2: Running /bin/su oracle -c ' /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch prereq CheckComponents -ph /usr/local/patches/14727310 -oh /opt/app/oracle/product/11.2.0/dbhome_1 '
2013-01-17 11:35:28: Removing file /tmp/filehlvsO1
2013-01-17 11:35:28: Successfully removed file: /tmp/filehlvsO1
2013-01-17 11:35:28: /bin/su successfully executed

2013-01-17 11:35:28: Oracle user for /opt/app/oracle/product/11.2.0/dbhome_1 is oracle
2013-01-17 11:35:28: Running as user oracle: /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch prereq CheckConflictAgainstOH -ph /usr/local/patches/15876003/custom/server/15876003 -oh /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:35:28: s_run_as_user2: Running /bin/su oracle -c ' /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch prereq CheckConflictAgainstOH -ph /usr/local/patches/15876003/custom/server/15876003 -oh /opt/app/oracle/product/11.2.0/dbhome_1 '
2013-01-17 11:35:35: Removing file /tmp/fileBBS2qF
2013-01-17 11:35:35: Successfully removed file: /tmp/fileBBS2qF
2013-01-17 11:35:35: /bin/su successfully executed

2013-01-17 11:35:35: Running as user oracle: /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch prereq CheckConflictAgainstOH -ph /usr/local/patches/14727310 -oh /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:35:35: s_run_as_user2: Running /bin/su oracle -c ' /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch prereq CheckConflictAgainstOH -ph /usr/local/patches/14727310 -oh /opt/app/oracle/product/11.2.0/dbhome_1 '
2013-01-17 11:35:42: Removing file /tmp/file1i2Z7B
2013-01-17 11:35:42: Successfully removed file: /tmp/file1i2Z7B
2013-01-17 11:35:42: /bin/su successfully executed

2013-01-17 11:35:42: Status of component/conflict check  for /opt/app/oracle/product/11.2.0/dbhome_1 is 1
2013-01-17 11:35:42:  Conflict check passes for oracle home  /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:35:42: Processing oracle home /opt/app/11.2.0/grid
2013-01-17 11:35:42: Opening file /etc/oracle/ocr.loc
2013-01-17 11:35:42: Value (FALSE) is set for key=local_only
2013-01-17 11:35:42: Home type of /opt/app/11.2.0/grid is CRS
2013-01-17 11:35:42: Oracle user for /opt/app/11.2.0/grid is grid
2013-01-17 11:35:42: Running as user grid: /opt/app/11.2.0/grid/OPatch/opatch prereq CheckComponents -ph /usr/local/patches/15876003 -oh /opt/app/11.2.0/grid
2013-01-17 11:35:42: s_run_as_user2: Running /bin/su grid -c ' /opt/app/11.2.0/grid/OPatch/opatch prereq CheckComponents -ph /usr/local/patches/15876003 -oh /opt/app/11.2.0/grid '
2013-01-17 11:35:45: Removing file /tmp/filekTThuP
2013-01-17 11:35:45: Successfully removed file: /tmp/filekTThuP
2013-01-17 11:35:45: /bin/su successfully executed

2013-01-17 11:35:45: Running as user grid: /opt/app/11.2.0/grid/OPatch/opatch prereq CheckComponents -ph /usr/local/patches/14727310 -oh /opt/app/11.2.0/grid
2013-01-17 11:35:45: s_run_as_user2: Running /bin/su grid -c ' /opt/app/11.2.0/grid/OPatch/opatch prereq CheckComponents -ph /usr/local/patches/14727310 -oh /opt/app/11.2.0/grid '
2013-01-17 11:35:48: Removing file /tmp/filersHlyc
2013-01-17 11:35:48: Successfully removed file: /tmp/filersHlyc
2013-01-17 11:35:48: /bin/su successfully executed

2013-01-17 11:35:48: Oracle user for /opt/app/11.2.0/grid is grid
2013-01-17 11:35:48: Running as user grid: /opt/app/11.2.0/grid/OPatch/opatch prereq CheckConflictAgainstOH -ph /usr/local/patches/15876003 -oh /opt/app/11.2.0/grid
2013-01-17 11:35:48: s_run_as_user2: Running /bin/su grid -c ' /opt/app/11.2.0/grid/OPatch/opatch prereq CheckConflictAgainstOH -ph /usr/local/patches/15876003 -oh /opt/app/11.2.0/grid '
2013-01-17 11:35:56: Removing file /tmp/fileMd0bDH
2013-01-17 11:35:56: Successfully removed file: /tmp/fileMd0bDH
2013-01-17 11:35:56: /bin/su successfully executed

2013-01-17 11:35:56: Running as user grid: /opt/app/11.2.0/grid/OPatch/opatch prereq CheckConflictAgainstOH -ph /usr/local/patches/14727310 -oh /opt/app/11.2.0/grid
2013-01-17 11:35:56: s_run_as_user2: Running /bin/su grid -c ' /opt/app/11.2.0/grid/OPatch/opatch prereq CheckConflictAgainstOH -ph /usr/local/patches/14727310 -oh /opt/app/11.2.0/grid '
2013-01-17 11:36:03: Removing file /tmp/fileoLmZhv
2013-01-17 11:36:03: Successfully removed file: /tmp/fileoLmZhv
2013-01-17 11:36:03: /bin/su successfully executed

2013-01-17 11:36:03: Status of component/conflict check  for /opt/app/11.2.0/grid is 1
2013-01-17 11:36:03:  Conflict check passes for oracle home  /opt/app/11.2.0/grid
2013-01-17 11:36:03: Conflict check passed  for all oracle homes
2013-01-17 11:36:03: Processing oracle home /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:36:03: Opening file /etc/oracle/ocr.loc
2013-01-17 11:36:03: Value (FALSE) is set for key=local_only
2013-01-17 11:36:03: Home type of /opt/app/oracle/product/11.2.0/dbhome_1 is DB
2013-01-17 11:36:03: Performing DB patch
2013-01-17 11:36:03: Oracle user for /opt/app/oracle/product/11.2.0/dbhome_1 is oracle
2013-01-17 11:36:03: Running as user oracle: /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch prereq CheckApplicable -ph /usr/local/patches/15876003/custom/server/15876003 -oh /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:36:03: s_run_as_user2: Running /bin/su oracle -c ' /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch prereq CheckApplicable -ph /usr/local/patches/15876003/custom/server/15876003 -oh /opt/app/oracle/product/11.2.0/dbhome_1 '
2013-01-17 11:36:07: Removing file /tmp/file82jfJB
2013-01-17 11:36:07: Successfully removed file: /tmp/file82jfJB
2013-01-17 11:36:07: /bin/su successfully executed

2013-01-17 11:36:07: Running as user oracle: /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch prereq CheckApplicable -ph /usr/local/patches/14727310 -oh /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:36:07: s_run_as_user2: Running /bin/su oracle -c ' /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch prereq CheckApplicable -ph /usr/local/patches/14727310 -oh /opt/app/oracle/product/11.2.0/dbhome_1 '
2013-01-17 11:36:11: Removing file /tmp/filegeYkOS
2013-01-17 11:36:11: Successfully removed file: /tmp/filegeYkOS
2013-01-17 11:36:11: /bin/su successfully executed

2013-01-17 11:36:11: Status of Applicable  check  for /opt/app/oracle/product/11.2.0/dbhome_1 is 1
2013-01-17 11:36:11: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl query crs activeversion
2013-01-17 11:36:11: Command output:
>  Oracle Clusterware active version on the cluster is [11.2.0.3.0] 
>End Command output
2013-01-17 11:36:11: crs version is 11 2 0 3 0
2013-01-17 11:36:11: Running as user oracle: /opt/app/oracle/product/11.2.0/dbhome_1/bin/srvctl stop home -o /opt/app/oracle/product/11.2.0/dbhome_1 -s /opt/app/oracle/product/11.2.0/dbhome_1/srvm/admin/stophome.txt -n rhel6m1
2013-01-17 11:36:11: s_run_as_user2: Running /bin/su oracle -c ' /opt/app/oracle/product/11.2.0/dbhome_1/bin/srvctl stop home -o /opt/app/oracle/product/11.2.0/dbhome_1 -s /opt/app/oracle/product/11.2.0/dbhome_1/srvm/admin/stophome.txt -n rhel6m1 '
2013-01-17 11:36:29: Removing file /tmp/filegXdp4j
2013-01-17 11:36:29: Successfully removed file: /tmp/filegXdp4j
2013-01-17 11:36:29: /bin/su successfully executed

2013-01-17 11:36:29: /opt/app/oracle/product/11.2.0/dbhome_1/bin/srvctl stop home -o /opt/app/oracle/product/11.2.0/dbhome_1 -s /opt/app/oracle/product/11.2.0/dbhome_1/srvm/admin/stophome.txt -n rhel6m1 output is 
2013-01-17 11:36:29: Stopped resources from datbase home /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:36:29: Running as user oracle: /usr/local/patches/15876003/custom/server/15876003/custom/scripts/prepatch.sh -dbhome /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:36:29: s_run_as_user2: Running /bin/su oracle -c ' /usr/local/patches/15876003/custom/server/15876003/custom/scripts/prepatch.sh -dbhome /opt/app/oracle/product/11.2.0/dbhome_1 '
2013-01-17 11:36:29: Removing file /tmp/filePZl0Xw
2013-01-17 11:36:29: Successfully removed file: /tmp/filePZl0Xw
2013-01-17 11:36:29: /bin/su successfully executed

2013-01-17 11:36:29: Running as user oracle: true
2013-01-17 11:36:29: s_run_as_user2: Running /bin/su oracle -c ' true '
2013-01-17 11:36:29: Removing file /tmp/fileLQZBdK
2013-01-17 11:36:29: Successfully removed file: /tmp/fileLQZBdK
2013-01-17 11:36:29: /bin/su successfully executed

2013-01-17 11:36:29: prepatch execution for DB home ... success
2013-01-17 11:36:29: Oracle user for /opt/app/oracle/product/11.2.0/dbhome_1 is oracle
2013-01-17 11:36:29: Executing command /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch napply /usr/local/patches/15876003/custom/server/15876003 -local -silent -ocmrf ocm.rsp -oh /opt/app/oracle/product/11.2.0/dbhome_1 as oracle
2013-01-17 11:36:29: Running as user oracle: /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch napply /usr/local/patches/15876003/custom/server/15876003 -local -silent -ocmrf ocm.rsp -oh /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:36:29: s_run_as_user2: Running /bin/su oracle -c ' /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch napply /usr/local/patches/15876003/custom/server/15876003 -local -silent -ocmrf ocm.rsp -oh /opt/app/oracle/product/11.2.0/dbhome_1 '
2013-01-17 11:37:17: Removing file /tmp/file2D3exX
2013-01-17 11:37:17: Successfully removed file: /tmp/file2D3exX
2013-01-17 11:37:17: /bin/su successfully executed

2013-01-17 11:37:17: status of apply patch is 0
2013-01-17 11:37:17: The apply patch output is Oracle Interim Patch Installer version 11.2.0.3.0
 Copyright (c) 2012, Oracle Corporation.  All rights reserved.
 
 
 Oracle Home       : /opt/app/oracle/product/11.2.0/dbhome_1
 Central Inventory : /opt/app/oraInventory
    from           : /opt/app/oracle/product/11.2.0/dbhome_1/oraInst.loc
 OPatch version    : 11.2.0.3.0
 OUI version       : 11.2.0.3.0
 Log file location : /opt/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2013-01-17_11-36-30AM_1.log
 
 Verifying environment and performing prerequisite checks...
 
 Conflicts/Supersets for each patch are:
 
 Patch : 15876003
 
  Bug Superset of 14275572
  Super set bugs are:
  14275572,  13919095,  13696251,  13348650,  12659561,  13039908,  13825231,  13036424,  12794268,  13011520,  13569812,  12758736,  13000491,  13498267,  13077654,  13001901,  13550689,  13430715,  13806545,  11675721,  14082976,  12771830,  12538907,  13947200,  13066371,  13483672,  12594616,  13540563,  12897651,  12897902,  13241779,  12896850,  12726222,  12829429,  12728585,  13079948,  12876314,  13090686,  12925041,  12995950,  13251796,  12650672,  12398492,  12848480,  13582411,  13652088,  12990582,  13857364,  12975811,  12917897,  13082238,  12947871,  13037709,  13371153,  12878750,  10114953,  11772838,  13058611,  13001955,  11836951,  12965049,  13440962,  12765467,  13727853,  13425727,  12885323,  13965075,  13339443,  12784559,  13332363,  13074261,  12971251,  13811209,  12709476,  13460353,  13523527,  12857064,  13719731,  13396284,  12899169,  13111013,  13323698,  12867511,  12639013,  12959140,  13085732,  12829917,  10317921,  13843080,  12934171,  12849377,  12349553,  13924431,  13869978,  12680491,  12914824,  13789135,  12730342,  13334158,  12950823,  10418841,  13355963,  13531373,  13776758,  12720728,  13620816,  13002015,  13023609,  13024624,  12791719
 
 Patches [   14275572 ] will be rolled back.
 
 
 Do you want to proceed? [y|n]
 Y (auto-answered by -silent)
 User Responded with: Y
 OPatch continues with these patches:   15876003  
 
 Do you want to proceed? [y|n]
 Y (auto-answered by -silent)
 User Responded with: Y
 All checks passed.
 
 Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
 (Oracle Home = '/opt/app/oracle/product/11.2.0/dbhome_1')
 
 
 Is the local system ready for patching? [y|n]
 Y (auto-answered by -silent)
 User Responded with: Y
 Backing up files...
 Applying interim patch '15876003' to OH '/opt/app/oracle/product/11.2.0/dbhome_1'
 Rolling back interim patch '14275572' from OH '/opt/app/oracle/product/11.2.0/dbhome_1'
 
 Patching component oracle.rdbms, 11.2.0.3.0...
 RollbackSession removing interim patch '14275572' from inventory
 
 
 OPatch back to application of the patch '15876003' after auto-rollback.
 
 
 Patching component oracle.rdbms, 11.2.0.3.0...
 
 Verifying the update...
 Patch 15876003 successfully applied.
 OPatch Session completed with warnings.
 Log file location: /opt/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2013-01-17_11-36-30AM_1.log
 
 OPatch completed with warnings.

2013-01-17 11:37:17: patch /usr/local/patches/15876003/custom/server/15876003  apply successful for home  /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:37:17: Executing command /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch napply /usr/local/patches/14727310 -local -silent -ocmrf ocm.rsp -oh /opt/app/oracle/product/11.2.0/dbhome_1 as oracle
2013-01-17 11:37:17: Running as user oracle: /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch napply /usr/local/patches/14727310 -local -silent -ocmrf ocm.rsp -oh /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:37:17: s_run_as_user2: Running /bin/su oracle -c ' /opt/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch napply /usr/local/patches/14727310 -local -silent -ocmrf ocm.rsp -oh /opt/app/oracle/product/11.2.0/dbhome_1 '
2013-01-17 11:39:59: Removing file /tmp/file5tU7W9
2013-01-17 11:39:59: Successfully removed file: /tmp/file5tU7W9
2013-01-17 11:39:59: /bin/su successfully executed

2013-01-17 11:39:59: status of apply patch is 0
2013-01-17 11:39:59: The apply patch output is Oracle Interim Patch Installer version 11.2.0.3.0
 Copyright (c) 2012, Oracle Corporation.  All rights reserved.
 
 
 Oracle Home       : /opt/app/oracle/product/11.2.0/dbhome_1
 Central Inventory : /opt/app/oraInventory
    from           : /opt/app/oracle/product/11.2.0/dbhome_1/oraInst.loc
 OPatch version    : 11.2.0.3.0
 OUI version       : 11.2.0.3.0
 Log file location : /opt/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2013-01-17_11-37-18AM_1.log
 
 Verifying environment and performing prerequisite checks...
 OPatch continues with these patches:   14727310  
 
 Do you want to proceed? [y|n]
 Y (auto-answered by -silent)
 User Responded with: Y
 All checks passed.
 
 Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
 (Oracle Home = '/opt/app/oracle/product/11.2.0/dbhome_1')
 
 
 Is the local system ready for patching? [y|n]
 Y (auto-answered by -silent)
 User Responded with: Y
 Backing up files...
 Applying sub-patch '14727310' to OH '/opt/app/oracle/product/11.2.0/dbhome_1'
 
 Patching component oracle.rdbms, 11.2.0.3.0...
 
 Patching component oracle.rdbms.dbscripts, 11.2.0.3.0...
 
 Patching component oracle.rdbms.deconfig, 11.2.0.3.0...
 
 Patching component oracle.rdbms.rsf, 11.2.0.3.0...
 
 Patching component oracle.sdo.locator, 11.2.0.3.0...
 
 Patching component oracle.sysman.console.db, 11.2.0.3.0...
 
 Patching component oracle.sysman.oms.core, 10.2.0.4.4...
 
 Verifying the update...
 Composite patch 14727310 successfully applied.
 Log file location: /opt/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2013-01-17_11-37-18AM_1.log
 
 OPatch succeeded.

2013-01-17 11:39:59: patch /usr/local/patches/14727310  apply successful for home  /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:39:59: Running as user oracle: /usr/local/patches/15876003/custom/server/15876003/custom/scripts/postpatch.sh -dbhome /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:39:59: s_run_as_user2: Running /bin/su oracle -c ' /usr/local/patches/15876003/custom/server/15876003/custom/scripts/postpatch.sh -dbhome /opt/app/oracle/product/11.2.0/dbhome_1 '
2013-01-17 11:39:59: Removing file /tmp/fileeOwh17
2013-01-17 11:39:59: Successfully removed file: /tmp/fileeOwh17
2013-01-17 11:39:59: /bin/su successfully executed

2013-01-17 11:39:59: Running as user oracle: true
2013-01-17 11:39:59: s_run_as_user2: Running /bin/su oracle -c ' true '
2013-01-17 11:39:59: Removing file /tmp/file3qIa16
2013-01-17 11:39:59: Successfully removed file: /tmp/file3qIa16
2013-01-17 11:39:59: /bin/su successfully executed

2013-01-17 11:39:59: postpatch execution for DB home ... success
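At this point opatch auto has finished the DB home. The manual equivalent of the phase above (per the supplemental note 1494646.1) is roughly: stop the home's resources, run the GI component's prepatch script, napply the GI component's `custom/server` portion and then the DB (composite) component, run postpatch, and restart the home. A minimal dry-run sketch, using the paths and patch numbers from this log (the `run` wrapper and `stophome.txt` path are illustrative, and the final restart falls outside this excerpt):

```shell
DRY_RUN=true                                    # set to false to actually execute
DB_HOME=/opt/app/oracle/product/11.2.0/dbhome_1
UNZIPPED=/usr/local/patches
GI_COMPONENT=15876003                           # GI portion of the 11.2.0.3.5 PSU
DB_COMPONENT=14727310                           # DB portion (composite patch)

# Hypothetical helper: echo the command in dry-run mode instead of executing it
run() { if $DRY_RUN; then echo "WOULD RUN: $*"; else "$@"; fi; }

# 1. As oracle: stop resources running from the DB home
run $DB_HOME/bin/srvctl stop home -o $DB_HOME -s /tmp/stophome.txt -n rhel6m1
# 2. Prepatch script shipped in the GI component's custom/server area
run $UNZIPPED/$GI_COMPONENT/custom/server/$GI_COMPONENT/custom/scripts/prepatch.sh -dbhome $DB_HOME
# 3. Apply the DB portion of the GI component, then the DB component
run $DB_HOME/OPatch/opatch napply $UNZIPPED/$GI_COMPONENT/custom/server/$GI_COMPONENT -local -silent -ocmrf ocm.rsp -oh $DB_HOME
run $DB_HOME/OPatch/opatch napply $UNZIPPED/$DB_COMPONENT -local -silent -ocmrf ocm.rsp -oh $DB_HOME
# 4. Postpatch script, then restart the home's resources
run $UNZIPPED/$GI_COMPONENT/custom/server/$GI_COMPONENT/custom/scripts/postpatch.sh -dbhome $DB_HOME
run $DB_HOME/bin/srvctl start home -o $DB_HOME -s /tmp/stophome.txt -n rhel6m1
```

Note how the two-component structure shows up: the prepatch/postpatch scripts and the first napply all come out of the GI component (15876003), even though they are run against the DB home.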
2013-01-17 11:39:59: Processing oracle home /opt/app/11.2.0/grid
2013-01-17 11:39:59: Opening file /etc/oracle/ocr.loc
2013-01-17 11:39:59: Value (FALSE) is set for key=local_only
2013-01-17 11:39:59: Home type of /opt/app/11.2.0/grid is CRS
2013-01-17 11:39:59: Unlock crshome...
2013-01-17 11:39:59: Exclude file used is /opt/app/11.2.0/grid/OPatch/crs/installPatch.excl
2013-01-17 11:39:59: Home location in olr.loc is 
2013-01-17 11:39:59: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl stop crs -f
2013-01-17 11:41:59: Command output:
>  CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel6m1'
>  CRS-2673: Attempting to stop 'ora.crsd' on 'rhel6m1'
>  CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rhel6m1'
>  CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rhel6m1'
>  CRS-2673: Attempting to stop 'ora.CLUSTER_DG.dg' on 'rhel6m1'
>  CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rhel6m1'
>  CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rhel6m1'
>  CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'rhel6m1'
>  CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rhel6m1' succeeded
>  CRS-2673: Attempting to stop 'ora.rhel6m1.vip' on 'rhel6m1'
>  CRS-2677: Stop of 'ora.rhel6m1.vip' on 'rhel6m1' succeeded
>  CRS-2672: Attempting to start 'ora.rhel6m1.vip' on 'rhel6m2'
>  CRS-2677: Stop of 'ora.DATA.dg' on 'rhel6m1' succeeded
>  CRS-2677: Stop of 'ora.registry.acfs' on 'rhel6m1' succeeded
>  CRS-2676: Start of 'ora.rhel6m1.vip' on 'rhel6m2' succeeded
>  CRS-2677: Stop of 'ora.FLASH.dg' on 'rhel6m1' succeeded
>  CRS-2677: Stop of 'ora.CLUSTER_DG.dg' on 'rhel6m1' succeeded
>  CRS-2673: Attempting to stop 'ora.asm' on 'rhel6m1'
>  CRS-2677: Stop of 'ora.asm' on 'rhel6m1' succeeded
>  CRS-2673: Attempting to stop 'ora.ons' on 'rhel6m1'
>  CRS-2677: Stop of 'ora.ons' on 'rhel6m1' succeeded
>  CRS-2673: Attempting to stop 'ora.net1.network' on 'rhel6m1'
>  CRS-2677: Stop of 'ora.net1.network' on 'rhel6m1' succeeded
>  CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rhel6m1' has completed
>  CRS-2677: Stop of 'ora.crsd' on 'rhel6m1' succeeded
>  CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rhel6m1'
>  CRS-2673: Attempting to stop 'ora.mdnsd' on 'rhel6m1'
>  CRS-2673: Attempting to stop 'ora.crf' on 'rhel6m1'
>  CRS-2673: Attempting to stop 'ora.ctssd' on 'rhel6m1'
>  CRS-2673: Attempting to stop 'ora.evmd' on 'rhel6m1'
>  CRS-2673: Attempting to stop 'ora.asm' on 'rhel6m1'
>  CRS-2677: Stop of 'ora.crf' on 'rhel6m1' succeeded
>  CRS-2677: Stop of 'ora.mdnsd' on 'rhel6m1' succeeded
>  CRS-2677: Stop of 'ora.evmd' on 'rhel6m1' succeeded
>  CRS-2677: Stop of 'ora.asm' on 'rhel6m1' succeeded
>  CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rhel6m1'
>  CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rhel6m1' succeeded
>  CRS-2677: Stop of 'ora.ctssd' on 'rhel6m1' succeeded
>  CRS-2673: Attempting to stop 'ora.cssd' on 'rhel6m1'
>  CRS-2677: Stop of 'ora.drivers.acfs' on 'rhel6m1' succeeded
>  CRS-2677: Stop of 'ora.cssd' on 'rhel6m1' succeeded
>  CRS-2673: Attempting to stop 'ora.gipcd' on 'rhel6m1'
>  CRS-2677: Stop of 'ora.gipcd' on 'rhel6m1' succeeded
>  CRS-2673: Attempting to stop 'ora.gpnpd' on 'rhel6m1'
>  CRS-2677: Stop of 'ora.gpnpd' on 'rhel6m1' succeeded
>  CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel6m1' has completed
>  CRS-4133: Oracle High Availability Services has been stopped. 
>End Command output
2013-01-17 11:41:59: /opt/app/11.2.0/grid/bin/crsctl stop crs -f
2013-01-17 11:41:59: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl check cluster -n rhel6m1
2013-01-17 11:42:00: Command output:
>  CRS-4639: Could not contact Oracle High Availability Services
>  CRS-4000: Command Check failed, or completed with errors. 
>End Command output
2013-01-17 11:42:01: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl check has
2013-01-17 11:42:02: Command output:
>  CRS-4639: Could not contact Oracle High Availability Services 
>End Command output
2013-01-17 11:44:10: Waiting for complete CRS stack to stop
2013-01-17 11:44:10: Invoking removeproc to clean oracle client procs
2013-01-17 11:44:10: Executing cmd: /sbin/fuser -k /opt/app/11.2.0/grid/bin/crsctl.bin
2013-01-17 11:44:10: fuser command output for /opt/app/11.2.0/grid/bin/crsctl.bin is 
2013-01-17 11:44:10: Oracle user for /opt/app/11.2.0/grid is grid
2013-01-17 11:44:10: Running as user grid: /opt/app/11.2.0/grid/OPatch/opatch prereq CheckApplicable -ph /usr/local/patches/15876003 -oh /opt/app/11.2.0/grid
2013-01-17 11:44:10: s_run_as_user2: Running /bin/su grid -c ' /opt/app/11.2.0/grid/OPatch/opatch prereq CheckApplicable -ph /usr/local/patches/15876003 -oh /opt/app/11.2.0/grid '
2013-01-17 11:44:14: Removing file /tmp/fileCAOkny
2013-01-17 11:44:14: Successfully removed file: /tmp/fileCAOkny
2013-01-17 11:44:14: /bin/su successfully executed

2013-01-17 11:44:14: Running as user grid: /opt/app/11.2.0/grid/OPatch/opatch prereq CheckApplicable -ph /usr/local/patches/14727310 -oh /opt/app/11.2.0/grid
2013-01-17 11:44:14: s_run_as_user2: Running /bin/su grid -c ' /opt/app/11.2.0/grid/OPatch/opatch prereq CheckApplicable -ph /usr/local/patches/14727310 -oh /opt/app/11.2.0/grid '
2013-01-17 11:44:17: Removing file /tmp/filepYiTfa
2013-01-17 11:44:17: Successfully removed file: /tmp/filepYiTfa
2013-01-17 11:44:17: /bin/su successfully executed

2013-01-17 11:44:17: Status of Applicable  check  for /opt/app/11.2.0/grid is 1
2013-01-17 11:44:17: Oracle user for /opt/app/11.2.0/grid is grid
2013-01-17 11:44:17: Executing command /opt/app/11.2.0/grid/OPatch/opatch napply /usr/local/patches/15876003 -local -silent -ocmrf ocm.rsp -oh /opt/app/11.2.0/grid as grid
2013-01-17 11:44:17: Running as user grid: /opt/app/11.2.0/grid/OPatch/opatch napply /usr/local/patches/15876003 -local -silent -ocmrf ocm.rsp -oh /opt/app/11.2.0/grid
2013-01-17 11:44:17: s_run_as_user2: Running /bin/su grid -c ' /opt/app/11.2.0/grid/OPatch/opatch napply /usr/local/patches/15876003 -local -silent -ocmrf ocm.rsp -oh /opt/app/11.2.0/grid '
2013-01-17 11:48:18: Removing file /tmp/fileftFTxU
2013-01-17 11:48:18: Successfully removed file: /tmp/fileftFTxU
2013-01-17 11:48:18: /bin/su successfully executed

2013-01-17 11:48:18: status of apply patch is 0
2013-01-17 11:48:18: The apply patch output is Oracle Interim Patch Installer version 11.2.0.3.0
 Copyright (c) 2012, Oracle Corporation.  All rights reserved.
 
 
 Oracle Home       : /opt/app/11.2.0/grid
 Central Inventory : /opt/app/oraInventory
    from           : /opt/app/11.2.0/grid/oraInst.loc
 OPatch version    : 11.2.0.3.0
 OUI version       : 11.2.0.3.0
 Log file location : /opt/app/11.2.0/grid/cfgtoollogs/opatch/opatch2013-01-17_11-44-18AM_1.log
 
 Verifying environment and performing prerequisite checks...
 
 Conflicts/Supersets for each patch are:
 
 Patch : 15876003
 
  Bug Superset of 14275572
  Super set bugs are:
  14275572,  13919095,  13696251,  13348650,  12659561,  13039908,  13825231,  13036424,  12794268,  13011520,  13569812,  12758736,  13000491,  13498267,  13077654,  13001901,  13550689,  13430715,  13806545,  11675721,  14082976,  12771830,  12538907,  13947200,  13066371,  13483672,  12594616,  13540563,  12897651,  12897902,  13241779,  12896850,  12726222,  12829429,  12728585,  13079948,  12876314,  13090686,  12925041,  12995950,  13251796,  12650672,  12398492,  12848480,  13582411,  13652088,  12990582,  13857364,  12975811,  12917897,  13082238,  12947871,  13037709,  13371153,  12878750,  10114953,  11772838,  13058611,  13001955,  11836951,  12965049,  13440962,  12765467,  13727853,  13425727,  12885323,  13965075,  13339443,  12784559,  13332363,  13074261,  12971251,  13811209,  12709476,  13460353,  13523527,  12857064,  13719731,  13396284,  12899169,  13111013,  13323698,  12867511,  12639013,  12959140,  13085732,  12829917,  10317921,  13843080,  12934171,  12849377,  12349553,  13924431,  13869978,  12680491,  12914824,  13789135,  12730342,  13334158,  12950823,  10418841,  13355963,  13531373,  13776758,  12720728,  13620816,  13002015,  13023609,  13024624,  12791719,  13886023,  13255295,  13821454,  12782756,  14625969,  14152875,  14186070,  12873909,  14214257,  12914722,  13243172,  12842804,  13045518,  12765868,  12772345,  12663376,  13345868,  14059576,  13683090,  12932852,  13889047,  12695029,  13146560,  13038806,  14251904,  14070200,  13820621,  14304758,  13396356,  13697828,  13258062,  12834777,  12996572,  13941934,  13657366,  13019958,  12810890,  13888719,  13502441,  13726162,  13880925,  14153867,  13506114,  12820045,  13604057,  12823838,  13877508,  12823042,  14494305,  13582706,  13617861,  12825835,  13263435,  13025879,  13853089,  14009845,  13410987,  13570879,  13637590,  12827493,  13247273,  13068077
 
 Patches [   14275572 ] will be rolled back.
 
 
 Do you want to proceed? [y|n]
 Y (auto-answered by -silent)
 User Responded with: Y
 OPatch continues with these patches:   15876003  
 
 Do you want to proceed? [y|n]
 Y (auto-answered by -silent)
 User Responded with: Y
 All checks passed.
 
 Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
 (Oracle Home = '/opt/app/11.2.0/grid')
 
 
 Is the local system ready for patching? [y|n]
 Y (auto-answered by -silent)
 User Responded with: Y
 Backing up files...
 Applying interim patch '15876003' to OH '/opt/app/11.2.0/grid'
 Rolling back interim patch '14275572' from OH '/opt/app/11.2.0/grid'
 
 Patching component oracle.crs, 11.2.0.3.0...
 
 Patching component oracle.usm, 11.2.0.3.0...
 RollbackSession removing interim patch '14275572' from inventory
 
 
 OPatch back to application of the patch '15876003' after auto-rollback.
 
 
 Patching component oracle.crs, 11.2.0.3.0...
 
 Patching component oracle.usm, 11.2.0.3.0...
 
 Verifying the update...
 Patch 15876003 successfully applied.
 OPatch Session completed with warnings.
 Log file location: /opt/app/11.2.0/grid/cfgtoollogs/opatch/opatch2013-01-17_11-44-18AM_1.log
 
 OPatch completed with warnings.

2013-01-17 11:48:18: patch /usr/local/patches/15876003  apply successful for home  /opt/app/11.2.0/grid
2013-01-17 11:48:18: Executing command /opt/app/11.2.0/grid/OPatch/opatch napply /usr/local/patches/14727310 -local -silent -ocmrf ocm.rsp -oh /opt/app/11.2.0/grid as grid
2013-01-17 11:48:18: Running as user grid: /opt/app/11.2.0/grid/OPatch/opatch napply /usr/local/patches/14727310 -local -silent -ocmrf ocm.rsp -oh /opt/app/11.2.0/grid
2013-01-17 11:48:18: s_run_as_user2: Running /bin/su grid -c ' /opt/app/11.2.0/grid/OPatch/opatch napply /usr/local/patches/14727310 -local -silent -ocmrf ocm.rsp -oh /opt/app/11.2.0/grid '
2013-01-17 11:49:39: Removing file /tmp/file5fgSjJ
2013-01-17 11:49:39: Successfully removed file: /tmp/file5fgSjJ
2013-01-17 11:49:39: /bin/su successfully executed

2013-01-17 11:49:39: status of apply patch is 0
2013-01-17 11:49:39: The apply patch output is Oracle Interim Patch Installer version 11.2.0.3.0
 Copyright (c) 2012, Oracle Corporation.  All rights reserved.
 
 
 Oracle Home       : /opt/app/11.2.0/grid
 Central Inventory : /opt/app/oraInventory
    from           : /opt/app/11.2.0/grid/oraInst.loc
 OPatch version    : 11.2.0.3.0
 OUI version       : 11.2.0.3.0
 Log file location : /opt/app/11.2.0/grid/cfgtoollogs/opatch/opatch2013-01-17_11-48-19AM_1.log
 
 Verifying environment and performing prerequisite checks...
 OPatch continues with these patches:   14727310  
 
 Do you want to proceed? [y|n]
 Y (auto-answered by -silent)
 User Responded with: Y
 All checks passed.
 
 Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
 (Oracle Home = '/opt/app/11.2.0/grid')
 
 
 Is the local system ready for patching? [y|n]
 Y (auto-answered by -silent)
 User Responded with: Y
 Backing up files...
 Applying sub-patch '14727310' to OH '/opt/app/11.2.0/grid'
 ApplySession: Optional component(s) [ oracle.sysman.console.db, 11.2.0.3.0 ] , [ oracle.sysman.oms.core, 10.2.0.4.4 ]  not present in the Oracle Home or a higher version is found.
 
 Patching component oracle.rdbms, 11.2.0.3.0...
 
 Patching component oracle.rdbms.dbscripts, 11.2.0.3.0...
 
 Patching component oracle.rdbms.deconfig, 11.2.0.3.0...
 
 Patching component oracle.rdbms.rsf, 11.2.0.3.0...
 
 Patching component oracle.sdo.locator, 11.2.0.3.0...
 
 Verifying the update...
 find: `./crf/admin/run/crfmond': Permission denied
 find: `./crf/admin/run/crflogd': Permission denied
 Composite patch 14727310 successfully applied.
 Log file location: /opt/app/11.2.0/grid/cfgtoollogs/opatch/opatch2013-01-17_11-48-19AM_1.log
 
 OPatch succeeded.

2013-01-17 11:49:39: patch /usr/local/patches/14727310  apply successful for home  /opt/app/11.2.0/grid
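The Grid home is now patched too. Manually, the same phase splits by user: root unlocks the grid home (which stops the CRS stack, as seen above), grid applies both patch components, then root fixes file permissions and re-locks/restarts the stack. A minimal dry-run sketch under those assumptions (the `run` wrapper is illustrative; `rootcrs.pl -unlock`/`-patch` are the documented manual counterparts of the unlock and post-patch actions the auto script performs here):

```shell
DRY_RUN=true                       # set to false to actually execute
GRID_HOME=/opt/app/11.2.0/grid
UNZIPPED=/usr/local/patches

# Hypothetical helper: echo the command in dry-run mode instead of executing it
run() { if $DRY_RUN; then echo "WOULD RUN: $*"; else "$@"; fi; }

# 1. As root: unlock the grid home (stops CRS on this node)
run perl $GRID_HOME/crs/install/rootcrs.pl -unlock
# 2. As grid: apply the GI component, then the DB component, to the grid home
run $GRID_HOME/OPatch/opatch napply $UNZIPPED/15876003 -local -silent -ocmrf ocm.rsp -oh $GRID_HOME
run $GRID_HOME/OPatch/opatch napply $UNZIPPED/14727310 -local -silent -ocmrf ocm.rsp -oh $GRID_HOME
# 3. As root: fix rdbms file permissions, then re-lock and restart the stack
run $GRID_HOME/rdbms/install/rootadd_rdbms.sh
run perl $GRID_HOME/crs/install/rootcrs.pl -patch
```

The `rootadd_rdbms.sh` call and the clusterware restart correspond directly to the "Performing Post patch actions" section that follows in the log.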
2013-01-17 11:49:39: Performing Post patch actions
2013-01-17 11:49:39: norestart flag is set to 
2013-01-17 11:49:39: Opening file /etc/oracle/ocr.loc
2013-01-17 11:49:39: Value (FALSE) is set for key=local_only
2013-01-17 11:49:39: Performing Post patch actions for Grid Home /opt/app/11.2.0/grid
2013-01-17 11:49:39: Executing cmd: /opt/app/11.2.0/grid/rdbms/install/rootadd_rdbms.sh
2013-01-17 11:49:39: setrdbmsfileperms succeeded
2013-01-17 11:49:39: Patching Oracle Clusterware
2013-01-17 11:49:39: norestart flag is set to 
2013-01-17 11:49:39: Executing cmd: /bin/rpm -q sles-release
2013-01-17 11:49:39: Command output:
>  package sles-release is not installed 
>End Command output
2013-01-17 11:49:39: init file = /opt/app/11.2.0/grid/crs/init/init.ohasd
2013-01-17 11:49:39: Copying file /opt/app/11.2.0/grid/crs/init/init.ohasd to /etc/init.d directory
2013-01-17 11:49:39: Setting init.ohasd permission in /etc/init.d directory
2013-01-17 11:49:39: init file = /opt/app/11.2.0/grid/crs/init/ohasd
2013-01-17 11:49:39: Copying file /opt/app/11.2.0/grid/crs/init/ohasd to /etc/init.d directory
2013-01-17 11:49:39: Setting ohasd permission in /etc/init.d directory
2013-01-17 11:49:39: Executing cmd: /bin/rpm -q sles-release
2013-01-17 11:49:39: Command output:
>  package sles-release is not installed 
>End Command output
2013-01-17 11:49:39: Removing "/etc/rc.d/rc3.d/S96ohasd"
2013-01-17 11:49:39: Removing file /etc/rc.d/rc3.d/S96ohasd
2013-01-17 11:49:39: Successfully removed file: /etc/rc.d/rc3.d/S96ohasd
2013-01-17 11:49:39: Creating a link "/etc/rc.d/rc3.d/S96ohasd" pointing to /etc/init.d/ohasd
2013-01-17 11:49:39: Removing "/etc/rc.d/rc5.d/S96ohasd"
2013-01-17 11:49:39: Removing file /etc/rc.d/rc5.d/S96ohasd
2013-01-17 11:49:39: Successfully removed file: /etc/rc.d/rc5.d/S96ohasd
2013-01-17 11:49:39: Creating a link "/etc/rc.d/rc5.d/S96ohasd" pointing to /etc/init.d/ohasd
2013-01-17 11:49:39: Removing "/etc/rc.d/rc0.d/K15ohasd"
2013-01-17 11:49:39: Removing file /etc/rc.d/rc0.d/K15ohasd
2013-01-17 11:49:39: Successfully removed file: /etc/rc.d/rc0.d/K15ohasd
2013-01-17 11:49:39: Creating a link "/etc/rc.d/rc0.d/K15ohasd" pointing to /etc/init.d/ohasd
2013-01-17 11:49:39: Removing "/etc/rc.d/rc1.d/K15ohasd"
2013-01-17 11:49:39: Removing file /etc/rc.d/rc1.d/K15ohasd
2013-01-17 11:49:39: Successfully removed file: /etc/rc.d/rc1.d/K15ohasd
2013-01-17 11:49:39: Creating a link "/etc/rc.d/rc1.d/K15ohasd" pointing to /etc/init.d/ohasd
2013-01-17 11:49:39: Removing "/etc/rc.d/rc2.d/K15ohasd"
2013-01-17 11:49:39: Removing file /etc/rc.d/rc2.d/K15ohasd
2013-01-17 11:49:39: Successfully removed file: /etc/rc.d/rc2.d/K15ohasd
2013-01-17 11:49:39: Creating a link "/etc/rc.d/rc2.d/K15ohasd" pointing to /etc/init.d/ohasd
2013-01-17 11:49:39: Removing "/etc/rc.d/rc3.d/K15ohasd"
2013-01-17 11:49:39: Removing file /etc/rc.d/rc3.d/K15ohasd
2013-01-17 11:49:39: Successfully removed file: /etc/rc.d/rc3.d/K15ohasd
2013-01-17 11:49:39: Creating a link "/etc/rc.d/rc3.d/K15ohasd" pointing to /etc/init.d/ohasd
2013-01-17 11:49:39: Removing "/etc/rc.d/rc4.d/K15ohasd"
2013-01-17 11:49:39: Removing file /etc/rc.d/rc4.d/K15ohasd
2013-01-17 11:49:39: Successfully removed file: /etc/rc.d/rc4.d/K15ohasd
2013-01-17 11:49:39: Creating a link "/etc/rc.d/rc4.d/K15ohasd" pointing to /etc/init.d/ohasd
2013-01-17 11:49:39: Removing "/etc/rc.d/rc6.d/K15ohasd"
2013-01-17 11:49:39: Removing file /etc/rc.d/rc6.d/K15ohasd
2013-01-17 11:49:39: Successfully removed file: /etc/rc.d/rc6.d/K15ohasd
2013-01-17 11:49:39: Creating a link "/etc/rc.d/rc6.d/K15ohasd" pointing to /etc/init.d/ohasd
2013-01-17 11:49:39: The file ohasd has been successfully linked to the RC directories
2013-01-17 11:49:39: Executing /opt/app/11.2.0/grid/bin/acfsroot install
2013-01-17 11:49:39: Executing cmd: /opt/app/11.2.0/grid/bin/acfsroot install
2013-01-17 11:51:21: Command output:
>  ACFS-9300: ADVM/ACFS distribution files found.
>  ACFS-9312: Existing ADVM/ACFS installation detected.
>  ACFS-9314: Removing previous ADVM/ACFS installation.
>  ACFS-9315: Previous ADVM/ACFS components successfully removed.
>  ACFS-9307: Installing requested ADVM/ACFS software.
>  ACFS-9308: Loading installed ADVM/ACFS drivers.
>  ACFS-9321: Creating udev for ADVM/ACFS.
>  ACFS-9323: Creating module dependencies - this may take some time.
>  ACFS-9154: Loading 'oracleoks.ko' driver.
>  ACFS-9154: Loading 'oracleadvm.ko' driver.
>  ACFS-9154: Loading 'oracleacfs.ko' driver.
>  ACFS-9327: Verifying ADVM/ACFS devices.
>  ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
>  ACFS-9156: Detecting control device '/dev/ofsctl'.
>  ACFS-9309: ADVM/ACFS installation correctness verified. 
>End Command output
2013-01-17 11:51:21: /opt/app/11.2.0/grid/bin/acfsroot install ... success
2013-01-17 11:51:21: USM driver install status is 1
2013-01-17 11:51:21: Validate crsctl command
2013-01-17 11:51:21: Validating /opt/app/11.2.0/grid/bin/crsctl
2013-01-17 11:51:21: Starting Oracle Clusterware
2013-01-17 11:51:21: Executing /opt/app/11.2.0/grid/bin/crsctl start crs
2013-01-17 11:51:21: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl start crs
2013-01-17 11:51:28: Command output:
>  CRS-4123: Oracle High Availability Services has been started. 
>End Command output
2013-01-17 11:51:28: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl stat resource
2013-01-17 11:51:30: Command output:
>  CRS-4535: Cannot communicate with Cluster Ready Services
>  CRS-4000: Command Status failed, or completed with errors. 
>End Command output
2013-01-17 11:51:30: Waiting for Oracle CRSD and EVMD to start
2013-01-17 11:51:35: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl stat resource
2013-01-17 11:51:36: Command output:
>  CRS-4535: Cannot communicate with Cluster Ready Services
>  CRS-4000: Command Status failed, or completed with errors. 
>End Command output
2013-01-17 11:51:36: Waiting for Oracle CRSD and EVMD to start
2013-01-17 11:51:41: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl stat resource
2013-01-17 11:51:43: Command output:
>  CRS-4535: Cannot communicate with Cluster Ready Services
>  CRS-4000: Command Status failed, or completed with errors. 
>End Command output
2013-01-17 11:51:43: Waiting for Oracle CRSD and EVMD to start
2013-01-17 11:51:48: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl stat resource
2013-01-17 11:51:50: Command output:
>  CRS-4535: Cannot communicate with Cluster Ready Services
>  CRS-4000: Command Status failed, or completed with errors. 
>End Command output
2013-01-17 11:51:50: Waiting for Oracle CRSD and EVMD to start
2013-01-17 11:51:55: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl stat resource
2013-01-17 11:51:57: Command output:
>  CRS-4535: Cannot communicate with Cluster Ready Services
>  CRS-4000: Command Status failed, or completed with errors. 
>End Command output
2013-01-17 11:51:57: Waiting for Oracle CRSD and EVMD to start
2013-01-17 11:52:02: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl stat resource
2013-01-17 11:52:04: Command output:
>  CRS-4535: Cannot communicate with Cluster Ready Services
>  CRS-4000: Command Status failed, or completed with errors. 
>End Command output
2013-01-17 11:52:04: Waiting for Oracle CRSD and EVMD to start
2013-01-17 11:52:09: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl stat resource
2013-01-17 11:52:10: Command output:
>  CRS-4535: Cannot communicate with Cluster Ready Services
>  CRS-4000: Command Status failed, or completed with errors. 
>End Command output
2013-01-17 11:52:10: Waiting for Oracle CRSD and EVMD to start
2013-01-17 11:52:15: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl stat resource
2013-01-17 11:52:17: Command output:
>  CRS-4535: Cannot communicate with Cluster Ready Services
>  CRS-4000: Command Status failed, or completed with errors. 
>End Command output
2013-01-17 11:52:17: Waiting for Oracle CRSD and EVMD to start
2013-01-17 11:52:22: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl stat resource
2013-01-17 11:52:24: Command output:
>  CRS-4535: Cannot communicate with Cluster Ready Services
>  CRS-4000: Command Status failed, or completed with errors. 
>End Command output
2013-01-17 11:52:24: Waiting for Oracle CRSD and EVMD to start
2013-01-17 11:52:29: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl stat resource
2013-01-17 11:52:29: Command output:
>  CRS-4535: Cannot communicate with Cluster Ready Services
>  CRS-4000: Command Status failed, or completed with errors. 
>End Command output
2013-01-17 11:52:29: Waiting for Oracle CRSD and EVMD to start
2013-01-17 11:52:34: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl stat resource
2013-01-17 11:52:34: Command output:
>  CRS-4535: Cannot communicate with Cluster Ready Services
>  CRS-4000: Command Status failed, or completed with errors. 
>End Command output
2013-01-17 11:52:34: Waiting for Oracle CRSD and EVMD to start
2013-01-17 11:52:39: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl stat resource
2013-01-17 11:52:40: Command output:
>  NAME=ora.CLUSTER_DG.dg
>  TYPE=ora.diskgroup.type
>  TARGET=ONLINE           , ONLINE
>  STATE=ONLINE on rhel6m1, ONLINE on rhel6m2
>  
>  NAME=ora.DATA.dg
>  TYPE=ora.diskgroup.type
>  TARGET=ONLINE           , ONLINE
>  STATE=ONLINE on rhel6m1, ONLINE on rhel6m2
>  
>  NAME=ora.FLASH.dg
>  TYPE=ora.diskgroup.type
>  TARGET=ONLINE           , ONLINE
>  STATE=ONLINE on rhel6m1, ONLINE on rhel6m2
>  
>  NAME=ora.LISTENER.lsnr
>  TYPE=ora.listener.type
>  TARGET=ONLINE , ONLINE
>  STATE=OFFLINE, ONLINE on rhel6m2
>  
>  NAME=ora.LISTENER_SCAN1.lsnr
>  TYPE=ora.scan_listener.type
>  TARGET=ONLINE
>  STATE=ONLINE on rhel6m2
>  
>  NAME=ora.asm
>  TYPE=ora.asm.type
>  TARGET=ONLINE           , ONLINE
>  STATE=ONLINE on rhel6m1, ONLINE on rhel6m2
>  
>  NAME=ora.cvu
>  TYPE=ora.cvu.type
>  TARGET=ONLINE
>  STATE=ONLINE on rhel6m2
>  
>  NAME=ora.gsd
>  TYPE=ora.gsd.type
>  TARGET=OFFLINE, OFFLINE
>  STATE=OFFLINE, OFFLINE
>  
>  NAME=ora.net1.network
>  TYPE=ora.network.type
>  TARGET=ONLINE           , ONLINE
>  STATE=ONLINE on rhel6m1, ONLINE on rhel6m2
>  
>  NAME=ora.oc4j
>  TYPE=ora.oc4j.type
>  TARGET=ONLINE
>  STATE=ONLINE on rhel6m2
>  
>  NAME=ora.ons
>  TYPE=ora.ons.type
>  TARGET=ONLINE           , ONLINE
>  STATE=ONLINE on rhel6m1, ONLINE on rhel6m2
>  
>  NAME=ora.registry.acfs
>  TYPE=ora.registry.acfs.type
>  TARGET=ONLINE           , ONLINE
>  STATE=ONLINE on rhel6m1, ONLINE on rhel6m2
>  
>  NAME=ora.rhel6m1.vip
>  TYPE=ora.cluster_vip_net1.type
>  TARGET=ONLINE
>  STATE=ONLINE on rhel6m1
>  
>  NAME=ora.rhel6m2.vip
>  TYPE=ora.cluster_vip_net1.type
>  TARGET=ONLINE
>  STATE=ONLINE on rhel6m2
>  
>  NAME=ora.scan1.vip
>  TYPE=ora.scan_vip.type
>  TARGET=ONLINE
>  STATE=ONLINE on rhel6m2
>  
>  NAME=ora.std11g2.db
>  TYPE=ora.database.type
>  TARGET=OFFLINE, ONLINE
>  STATE=OFFLINE, ONLINE on rhel6m2
>  
>  NAME=ora.std11g2.myservice.svc
>  TYPE=ora.service.type
>  TARGET=OFFLINE, ONLINE
>  STATE=OFFLINE, ONLINE on rhel6m2
>   
>End Command output
2013-01-17 11:52:40: Running /opt/app/11.2.0/grid/bin/crsctl check cluster -n rhel6m1
2013-01-17 11:52:40: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl check cluster -n rhel6m1
2013-01-17 11:52:40: Command output:
>  **************************************************************
>  rhel6m1:
>  CRS-4535: Cannot communicate with Cluster Ready Services
>  CRS-4529: Cluster Synchronization Services is online
>  CRS-4533: Event Manager is online
>  ************************************************************** 
>End Command output
2013-01-17 11:52:40: Checking the status of cluster
2013-01-17 11:52:45: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl check cluster -n rhel6m1
2013-01-17 11:52:45: Command output:
>  **************************************************************
>  rhel6m1:
>  CRS-4535: Cannot communicate with Cluster Ready Services
>  CRS-4529: Cluster Synchronization Services is online
>  CRS-4533: Event Manager is online
>  ************************************************************** 
>End Command output
2013-01-17 11:52:45: Checking the status of cluster
2013-01-17 11:52:50: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl check cluster -n rhel6m1
2013-01-17 11:52:50: Command output:
>  **************************************************************
>  rhel6m1:
>  CRS-4535: Cannot communicate with Cluster Ready Services
>  CRS-4529: Cluster Synchronization Services is online
>  CRS-4533: Event Manager is online
>  ************************************************************** 
>End Command output
2013-01-17 11:52:50: Checking the status of cluster
2013-01-17 11:52:55: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl check cluster -n rhel6m1
2013-01-17 11:52:55: Command output:
>  **************************************************************
>  rhel6m1:
>  CRS-4537: Cluster Ready Services is online
>  CRS-4529: Cluster Synchronization Services is online
>  CRS-4533: Event Manager is online
>  ************************************************************** 
>End Command output
2013-01-17 11:52:55: Oracle CRS stack installed and running
2013-01-17 11:52:55: Performing Post patch actions
2013-01-17 11:52:55: norestart flag is set to 
2013-01-17 11:52:55: Opening file /etc/oracle/ocr.loc
2013-01-17 11:52:55: Value (FALSE) is set for key=local_only
2013-01-17 11:52:55: Performing Post Patch start  action for DB Home /opt/app/oracle/product/11.2.0/dbhome_1
2013-01-17 11:52:55: Executing cmd: /opt/app/11.2.0/grid/bin/crsctl stat resource -c rhel6m1
2013-01-17 11:52:56: Command output:
>  NAME=ora.CLUSTER_DG.dg
>  TYPE=ora.diskgroup.type
>  TARGET=ONLINE
>  STATE=ONLINE
>  
>  NAME=ora.DATA.dg
>  TYPE=ora.diskgroup.type
>  TARGET=ONLINE
>  STATE=ONLINE
>  
>  NAME=ora.FLASH.dg
>  TYPE=ora.diskgroup.type
>  TARGET=ONLINE
>  STATE=ONLINE
>  
>  NAME=ora.LISTENER.lsnr
>  TYPE=ora.listener.type
>  TARGET=ONLINE
>  STATE=ONLINE
>  
>  NAME=ora.asm
>  TYPE=ora.asm.type
>  TARGET=ONLINE
>  STATE=ONLINE
>  
>  NAME=ora.gsd
>  TYPE=ora.gsd.type
>  TARGET=OFFLINE
>  STATE=OFFLINE
>  
>  NAME=ora.net1.network
>  TYPE=ora.network.type
>  TARGET=ONLINE
>  STATE=ONLINE
>  
>  NAME=ora.ons
>  TYPE=ora.ons.type
>  TARGET=ONLINE
>  STATE=ONLINE
>  
>  NAME=ora.registry.acfs
>  TYPE=ora.registry.acfs.type
>  TARGET=ONLINE
>  STATE=ONLINE
>  
>  NAME=ora.rhel6m1.vip
>  TYPE=ora.cluster_vip_net1.type
>  CARDINALITY_ID=1
>  TARGET=ONLINE
>  STATE=ONLINE
>   
>End Command output
2013-01-17 11:52:56: Server assignments completed. Ready to start databases
2013-01-17 11:52:56: Running as user oracle: /opt/app/oracle/product/11.2.0/dbhome_1/bin/srvctl start home -o /opt/app/oracle/product/11.2.0/dbhome_1 -s /opt/app/oracle/product/11.2.0/dbhome_1/srvm/admin/stophome.txt -n rhel6m1
2013-01-17 11:52:56: s_run_as_user2: Running /bin/su oracle -c ' /opt/app/oracle/product/11.2.0/dbhome_1/bin/srvctl start home -o /opt/app/oracle/product/11.2.0/dbhome_1 -s /opt/app/oracle/product/11.2.0/dbhome_1/srvm/admin/stophome.txt -n rhel6m1 '
2013-01-17 11:53:16: Removing file /tmp/filePCaf67
2013-01-17 11:53:16: Successfully removed file: /tmp/filePCaf67
2013-01-17 11:53:16: /bin/su successfully executed

2013-01-17 11:53:16: /opt/app/oracle/product/11.2.0/dbhome_1/bin/srvctl start home -o /opt/app/oracle/product/11.2.0/dbhome_1 -s /opt/app/oracle/product/11.2.0/dbhome_1/srvm/admin/stophome.txt -n rhel6m1 output is 
2013-01-17 11:53:16: Started resources from datbase home /opt/app/oracle/product/11.2.0/dbhome_1
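The repeated "Checking the status of cluster" entries above are a simple poll-and-retry: the script re-runs the crsctl check every five seconds until CRSD comes up. The pattern can be sketched generically; the command, retry count and interval below are illustrative stand-ins, not the actual rootcrs implementation.

```shell
#!/bin/sh
# Sketch of the polling pattern visible in the log above: retry a check
# command at a fixed interval until it succeeds or the retry budget runs
# out, the way rootcrs waits for "crsctl check cluster -n <node>".
wait_until_up() {
    cmd=$1 retries=${2:-60} interval=${3:-5}
    i=0
    while [ "$i" -lt "$retries" ]; do
        if $cmd; then
            return 0        # check succeeded -- stack is up
        fi
        i=$((i + 1))
        sleep "$interval"
    done
    return 1                # retry budget exhausted
}

# Example: wait_until_up "/opt/app/11.2.0/grid/bin/crsctl check crs" 60 5
```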

Saturday, January 19, 2013

Changing Listener and SCAN Listener Port in 11gR2 RAC

Unlike in previous releases, changing the listener port in RAC doesn't require any database parameter modification. According to the Real Application Clusters Installation Guide: during Oracle Database creation, the LOCAL_LISTENER parameter is automatically configured to point to the local listener for the database. The Database Agent sets LOCAL_LISTENER to a connect descriptor that does not require a TNS alias. You can set a value manually for LOCAL_LISTENER; however, Oracle recommends that you leave the parameter unset so that the Database Agent can maintain it automatically. If you set LOCAL_LISTENER, the Agent does not automatically update this value. If you do not set LOCAL_LISTENER, the Database Agent automatically keeps the database associated with the Grid home's node listener updated, even as the ports or IP of that listener change.
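If LOCAL_LISTENER was ever set manually and the agent should take over again, the parameter can be cleared. This is a hedged sketch only, and not something the port change below requires:

```sql
-- Illustrative only: clear a manually set LOCAL_LISTENER so the
-- Database Agent resumes maintaining it automatically (11gR2 RAC).
-- Takes effect after the instances are restarted.
ALTER SYSTEM RESET local_listener SCOPE=SPFILE SID='*';
```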
Steps below will change the port from the default 1521 to 9120. The configuration is a two node 11gR2 Standard Edition RAC with role separation and the solution for Oracle Security Alert for CVE-2012-1675 applied.
1. Current listener and scan listener configurations (run as grid user)
srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): db-02,db-01

srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: 
End points: TCP:1521

$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node db-02
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node db-01
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node db-01

$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521/TCPS:2992
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521/TCPS:2992
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521/TCPS:2992
The TCPS on 2992 is due to the COST setup and not part of the port change mentioned here.
2. As mentioned earlier (in the RAC installation guide), local_listener is set automatically, and its current configuration uses port 1521.
SQL> show parameter local

NAME            TYPE    VALUE
--------------- ------- ------------------------------------------------------------------------------------
local_listener  string  (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.171)(PORT=1521))))
The remote listener is restricted to TCPS on the SCAN VIPs (again due to the COST setup).
SQL> show parameter remote

NAME            TYPE    VALUE
--------------- ------- -----------------------------------------------------------------------
remote_listener  string  (ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCPS)(HOST=192.168.100.181)(PORT=2992))
                                      (ADDRESS=(PROTOCOL=TCPS)(HOST=192.168.100.182)(PORT=2992))
                                      (ADDRESS=(PROTOCOL=TCPS)(HOST=192.168.100.183)(PORT=2992)))
3. The listener.ora in the GI_HOME does not contain any port information. There is an endpoints_listener.ora with port information, but it exists for backward compatibility with database versions below 11.2 and is not applicable in this case, as the DB is 11.2.
4. Make a note of the listener status information, which shows where the default port is being used. Run this as the grid user with the ORACLE_HOME variable set (i.e. . oraenv to +ASM*); otherwise the command will fail.
lsnrctl status listener # on db1

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                18-JAN-2013 12:55:04
Uptime                    0 days 3 hr. 57 min. 32 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /opt/app/11.2.0/grid/network/admin/listener.ora
Listener Log File         /opt/app/oracle/diag/tnslsnr/db-01/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.171)(PORT=1521))) <---- vip
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.170)(PORT=1521))) <---- ip
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...

lsnrctl status listener  # on db2

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                18-JAN-2013 12:54:52
Uptime                    0 days 4 hr. 1 min. 31 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /opt/app/11.2.0/grid/network/admin/listener.ora
Listener Log File         /opt/app/oracle/diag/tnslsnr/db-02/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.172)(PORT=1521))) <-- vip
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.173)(PORT=1521))) <-- ip
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
SCAN listener status: it's important to check each SCAN listener's status on the node where it is currently active. Use srvctl status scan_listener to find out which node that is.
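The active node can be extracted from the srvctl output with a small filter; a sketch that parses the status lines shown above:

```shell
#!/bin/sh
# Sketch: pull the hosting node for one SCAN listener out of
# "srvctl status scan_listener" output (read from stdin), by matching
# the "... is running on node <node>" line and printing the last field.
scan_node() {
    awk -v l="$1" '$0 ~ l" is running on node" { print $NF }'
}

# Typical use (as the grid user):
#   srvctl status scan_listener | scan_node LISTENER_SCAN1
```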
lsnrctl status listener_scan1

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                18-JAN-2013 12:55:06
Uptime                    0 days 3 hr. 58 min. 11 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /opt/app/11.2.0/grid/network/admin/listener.ora
Listener Log File         /opt/app/11.2.0/grid/log/diag/tnslsnr/db-02/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=192.168.100.181)(PORT=2992)))<-- scan ip with TCPS due to COST
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.181)(PORT=1521))) <-- scan ip with TCP
Services Summary...

lsnrctl status listener_scan2

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN2
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                18-JAN-2013 12:55:04
Uptime                    0 days 3 hr. 57 min. 57 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /opt/app/11.2.0/grid/network/admin/listener.ora
Listener Log File         /opt/app/11.2.0/grid/log/diag/tnslsnr/db-01/listener_scan2/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN2)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=192.168.100.182)(PORT=2992))) <-- scan ip with TCPS due to COST
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.182)(PORT=1521))) <-- scan ip with TCP
Services Summary...

lsnrctl status listener_scan3

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN3
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                18-JAN-2013 12:55:04
Uptime                    0 days 3 hr. 57 min. 59 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /opt/app/11.2.0/grid/network/admin/listener.ora
Listener Log File         /opt/app/11.2.0/grid/log/diag/tnslsnr/db-01/listener_scan3/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN3)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=192.168.100.183)(PORT=2992))) <-- scan ip with TCPS due to COST
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.183)(PORT=1521))) <-- scan ip with TCP
Services Summary...



5. To change the port run srvctl as grid user.
srvctl modify listener -l LISTENER -p 9120

srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: 
End points: TCP:9120

srvctl modify scan_listener -p TCP:9120/TCPS:2992

srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:9120/TCPS:2992
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:9120/TCPS:2992
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:9120/TCPS:2992
Changes are not effective until the listeners are restarted.
srvctl stop listener
srvctl start listener
srvctl stop scan_listener
srvctl start scan_listener
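After the restart, reachability of the new port can be sanity-checked from any node before re-running lsnrctl. A bash sketch using the /dev/tcp pseudo-device; the host and port in the example are illustrative:

```shell
#!/bin/bash
# Sketch: probe a TCP endpoint to confirm a restarted listener is
# reachable on the new port. bash translates an open of
# /dev/tcp/<host>/<port> into a connect() call; success means
# something is accepting connections there.
port_open() {
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# Example: port_open db-01 9120 && echo "listener reachable on 9120"
```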
6. Verify the listeners have picked up the new port.
lsnrctl status listener_scan1

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                18-JAN-2013 17:57:47
Uptime                    0 days 0 hr. 1 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /opt/app/11.2.0/grid/network/admin/listener.ora
Listener Log File         /opt/app/11.2.0/grid/log/diag/tnslsnr/db-02/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=192.168.100.181)(PORT=2992)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.181)(PORT=9120)))
Services Summary...

lsnrctl status listener_scan2

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN2
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                18-JAN-2013 17:57:47
Uptime                    0 days 0 hr. 0 min. 45 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /opt/app/11.2.0/grid/network/admin/listener.ora
Listener Log File         /opt/app/11.2.0/grid/log/diag/tnslsnr/db-01/listener_scan2/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN2)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=192.168.100.182)(PORT=2992)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.182)(PORT=9120)))
Services Summary...

lsnrctl status listener_scan3

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN3
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                18-JAN-2013 17:57:49
Uptime                    0 days 0 hr. 0 min. 45 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /opt/app/11.2.0/grid/network/admin/listener.ora
Listener Log File         /opt/app/11.2.0/grid/log/diag/tnslsnr/db-01/listener_scan3/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN3)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=192.168.100.183)(PORT=2992)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.183)(PORT=9120)))
Services Summary...

lsnrctl status # on db1 and db2

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                18-JAN-2013 17:57:31
Uptime                    0 days 0 hr. 1 min. 54 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /opt/app/11.2.0/grid/network/admin/listener.ora
Listener Log File         /opt/app/oracle/diag/tnslsnr/db-02/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.171/2)(PORT=9120)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.170/3)(PORT=9120)))
7. The DB's local_listener is still registered with the earlier port (only one instance shown below).
SQL> show parameter local

NAME            TYPE    VALUE
--------------- ------- ------------------------------------------------------------------------------------
local_listener  string  (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.171)(PORT=1521))))
Restart the database so that local_listener is registered with the new port (run as the oracle user).
srvctl stop database -d std11g2
srvctl start database -d std11g2

SQL> show parameter local

NAME            TYPE    VALUE
--------------- ------- ------------------------------------------------------------------------------------
local_listener  string  (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.171)(PORT=9120))))
8. If COST is not used (refer to the 11gR2 Grid Infrastructure Single Client Access Name (SCAN) Explained note, 887522.1, section "Is it recommended to use COST feature?"), then the remote_listener value, which by default is set to scan-name:port, must also be updated with the new port. Since COST is used here (1340831.1), this step is omitted.
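For reference, if COST were not in use, the remote_listener change would be an ALTER SYSTEM along these lines. A hedged sketch only, with "cluster-scan" standing in for the actual SCAN name:

```sql
-- Illustrative only: point remote_listener at the SCAN on the new port
-- when COST is not in place. "cluster-scan" is a placeholder.
ALTER SYSTEM SET remote_listener='cluster-scan:9120' SCOPE=BOTH SID='*';
```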
9. The port information in endpoints_listener.ora is edited automatically by the agents and will reflect the new values.
10. Change the port in any tnsnames.ora files used for connectivity to reflect the new port.
11. The EM repository may need recreating with the new port information; alternatively, manually edit config/emoms.properties and emd/targets.xml with the new port.
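A tnsnames.ora client entry updated for the new port could look like this; the alias, SCAN name and service name are illustrative:

```
MYSERVICE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster-scan)(PORT = 9120))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = myservice)
    )
  )
```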

Useful metalink notes
Changing Default Listener Port Number [ID 359277.1]
Listener port changed after 11.2 upgrade [ID 1269679.1]
Changing Listener Ports On RAC/EXADATA [ID 1473035.1]
11.2 Scan and Node TNS Listener Setup Examples [ID 1070607.1]
How To Configure Scan Listeners With A TCPS Port? [ID 1092753.1]
How to Modify SCAN Setting or SCAN Listener Port after Installation [ID 972500.1]
How to Configure A Second Listener on a Separate Network in 11.2 Grid Infrastructure [ID 1063571.1]
Using the TNS_ADMIN variable and changing the default port number of all Listeners in an 11.2 RAC for an 11.2, 11.1, and 10.2 Database [ID 1306927.1]

Thursday, January 17, 2013

I/O Elevator Comparison Using CALIBRATE_IO

This is a comparison of three I/O elevators: Completely Fair Queuing (CFQ), the default on Linux; Deadline; and NOOP. The Anticipatory elevator wasn't tested, though it is also one of the I/O elevators on RHEL. The current elevator can be found (on RHEL) with
cat /sys/block/sda/queue/scheduler
noop anticipatory [deadline] cfq
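The elevator can also be switched per device at runtime by writing to the same sysfs file (the grub.conf edit used in this test is what makes the choice persistent across reboots). A sketch:

```shell
#!/bin/sh
# Sketch: switch the I/O elevator for one block device at runtime by
# writing the scheduler name into its sysfs queue/scheduler file.
# Must be run as root against a real device directory, e.g. /sys/block/sda.
set_elevator() {
    dev_dir=$1 sched=$2
    echo "$sched" > "$dev_dir/queue/scheduler"
}

# Example: set_elevator /sys/block/sda deadline
```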
The load injector in this case is DBMS_RESOURCE_MANAGER.CALIBRATE_IO. The system is a two node RAC (with a Dell EqualLogic SAN) running on RHEL 5 (2.6.18-308.24.1.el5). The test was simply to set the elevator in /etc/grub.conf (no changes to the elevator tuning parameters), reboot the servers after each elevator change, and, once all nodes and DB instances were up, run the CALIBRATE_IO function (the same set of inputs was used in all cases). This was a new build and nothing application-related was running on the system. This is by no means an extensive test; the aim was just to see the effect of changing the elevator alone, all else remaining the same.
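For reference, a minimal CALIBRATE_IO invocation might look like the following; num_physical_disks and max_latency are assumptions that should be adjusted to match the actual storage:

```sql
-- Illustrative CALIBRATE_IO run (requires SYSDBA or the
-- ADMINISTER RESOURCE MANAGER privilege, timed_statistics=true
-- and asynchronous I/O enabled on the datafiles).
SET SERVEROUTPUT ON
DECLARE
  l_iops    PLS_INTEGER;
  l_mbps    PLS_INTEGER;
  l_latency PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 2,    -- assumption: match the real spindle count
    max_latency        => 20,   -- assumption: tolerated latency in ms
    max_iops           => l_iops,
    max_mbps           => l_mbps,
    actual_latency     => l_latency);
  DBMS_OUTPUT.PUT_LINE('max_iops='||l_iops||' max_mbps='||l_mbps||
                       ' latency='||l_latency);
END;
/
```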
Below is the output from the three runs.
ELEVATOR     MAX_IOPS   MAX_MBPS  MAX_PMBPS    LATENCY DURATION     START_TIME                     END_TIME
---------- ---------- ---------- ---------- ---------- ------------ ------------------------------ ------------------------------
cfq              4970        206        171         11 7:58.585988  17-JAN-13 10.35.25.114131 AM   17-JAN-13 10.43.23.700119 AM
deadline         4988        224        144         11 5:32.195376  17-JAN-13 01.34.14.094488 PM   17-JAN-13 01.39.46.289864 PM
noop             5016        206        145         11 7:14.051605  17-JAN-13 02.05.07.657428 PM   17-JAN-13 02.12.21.709033 PM



It seems each elevator is a winner in some category:
MAX IOPS (winner: noop)

MAX MBPS (winner: deadline)

MAX PMBPS (winner: cfq)

Wednesday, January 16, 2013

ORA-15081: failed to submit an I/O operation to a disk

DBCA creation fails with ORA-15081.

This is a new 11gR2 (11.2.0.3) RAC installation with role separation. The oracle and grid users both have the correct group setup, and the permissions on the Oracle binaries are also correct.
This RAC setup does not use ASMLib; it uses direct block devices instead. When block devices are used, their permissions must be 660 and ownership must be grid:asmadmin. The storage vendor set up the block devices as below
ls -l /dev/eql/*
lrwxrwxrwx 1 root root 7 Jan 15 12:00 /dev/eql/data-01 -> ../dm-6
lrwxrwxrwx 1 root root 7 Jan 15 12:00 /dev/eql/data-02 -> ../dm-8
lrwxrwxrwx 1 root root 8 Jan 15 12:00 /dev/eql/clus-01 -> ../dm-16
lrwxrwxrwx 1 root root 8 Jan 15 12:00 /dev/eql/clus-02 -> ../dm-18
lrwxrwxrwx 1 root root 8 Jan 15 12:00 /dev/eql/clus-03 -> ../dm-20
lrwxrwxrwx 1 root root 8 Jan 15 12:00 /dev/eql/data-03 -> ../dm-22
lrwxrwxrwx 1 root root 8 Jan 15 12:00 /dev/eql/flash-01 -> ../dm-10
lrwxrwxrwx 1 root root 8 Jan 15 12:00 /dev/eql/flash-02 -> ../dm-14
lrwxrwxrwx 1 root root 8 Jan 15 12:00 /dev/eql/flash-03 -> ../dm-12
and asked that /dev/eql/* be used to create the ASM diskgroups. This was a Dell EqualLogic iSCSI SAN. Even though the soft links had permission 777, the block devices they pointed to only had 640, with grid:asmadmin set (the storage vendor was asked to set 660 on these). Permission 640, though wrong, was sufficient for installing the clusterware; the erroneous setup was only noticed during the dbca run as the oracle user.
brw-r----- 1 grid asmadmin 253,  6 Jan 15 14:22 /dev/dm-6
brw-r----- 1 grid asmadmin 253,  8 Jan 15 14:11 /dev/dm-8
brw-r----- 1 grid asmadmin 253, 10 Jan 15 14:22 /dev/dm-10
brw-r----- 1 grid asmadmin 253, 12 Jan 15 14:22 /dev/dm-12
brw-r----- 1 grid asmadmin 253, 14 Jan 15 13:12 /dev/dm-14
brw-r----- 1 grid asmadmin 253, 16 Jan 15 14:24 /dev/dm-16
brw-r----- 1 grid asmadmin 253, 18 Jan 15 14:24 /dev/dm-18
brw-r----- 1 grid asmadmin 253, 20 Jan 15 14:24 /dev/dm-20
brw-r----- 1 grid asmadmin 253, 22 Jan 15 14:22 /dev/dm-22
After the storage vendor set the permissions correctly (to 660), the database could be created.
brw-rw---- 1 grid asmadmin 253,  6 Jan 16 13:55 /dev/dm-6
brw-rw---- 1 grid asmadmin 253,  8 Jan 16 13:38 /dev/dm-8
brw-rw---- 1 grid asmadmin 253, 10 Jan 16 13:55 /dev/dm-10
brw-rw---- 1 grid asmadmin 253, 12 Jan 16 13:38 /dev/dm-12
brw-rw---- 1 grid asmadmin 253, 14 Jan 16 13:38 /dev/dm-14
brw-rw---- 1 grid asmadmin 253, 16 Jan 16 13:55 /dev/dm-16
brw-rw---- 1 grid asmadmin 253, 18 Jan 16 13:55 /dev/dm-18
brw-rw---- 1 grid asmadmin 253, 20 Jan 16 13:55 /dev/dm-20
brw-rw---- 1 grid asmadmin 253, 22 Jan 16 13:38 /dev/dm-22

On a separate note: 605828.1, 470913.1 and 602952.1 all say not to use /dev/dm-*.
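Relatedly, manually set permissions on block devices do not survive a reboot by default; on RHEL 5 the usual persistent fix is a udev rule. A sketch, where the rule file name and the blanket dm-* match are assumptions (a real rule should match only the devices backing ASM, not every multipath device):

```
# /etc/udev/rules.d/99-oracle-asm.rules (illustrative)
# Force grid:asmadmin ownership and 660 permissions on the multipath
# devices backing the /dev/eql/* aliases. Narrow the KERNEL match
# to the actual ASM devices before using anything like this.
KERNEL=="dm-*", OWNER="grid", GROUP="asmadmin", MODE="0660"
```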



Useful metalink notes
ORA-15081: failed to submit an I/O operation to a disk [ID 1297099.1]
Bug 11695285 - ORA-15081 I/O write error but disk is online [ID 11695285.8]
ORA-17502, OSD-4002 and ORA-15081 when creating a datafile on a ASM diskgroup [ID 369898.1]
RAC Database Terminates: ORA-27041, ORA-15080, ORA-63999, ORA-01114, ORA-15081 [ID 1476707.1]
10gR2 Database Creation Fails with 11gR2 ASM storage: ORA-15045, ORA-17502, ORA-15081 [ID 1384180.1]
Database Creation on 11.2 Grid Infrastructure with Role Separation ( ORA-15025, KFSG-00312, ORA-15081 ) [ID 1084186.1]

Wednesday, January 9, 2013

Installing Oracle RAC When SSH Is Listening On A Non-Default Port

It's not uncommon to see server environments where ssh is configured to listen on a non-default port (i.e. other than port 22). This has no consequence when installing a single instance Oracle DB, but for a clusterware or RAC installation it makes the OUI and runcluvfy fail the user equivalence check unless some minor configuration changes are made.
When ssh runs on a non-default port, the port must be specified when connecting from one server to another, similar to
ssh -p 2020 remotehost
or an alias could be set up that wraps ssh with the port, such as
alias ssh='ssh -p 2020'
and then simply use ssh remotehost.
It's even possible to create user equivalency manually for the grid or oracle user using /usr/bin/ssh-keygen and get passphrase-less ssh access among the nodes considered for cluster installation by specifying the ssh port. However, the sshUserSetup.sh script will fail when it tries to create user equivalency, because it calls /usr/bin/ssh directly (without a port) and is therefore unable to copy the generated keys to the listed nodes.
The user may be prompted for a password here since the script would be running SSH on host DB-01.
ssh: connect to host DB-01 port 22: Connection refused
Done with creating .ssh directory and setting permissions on remote host DB-01.
Creating .ssh directory and setting permissions on remote host DB-02
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT.
The script would create ~grid/.ssh/config file on remote host DB-02. If a config file exists already at ~grid/.ssh/config, it would be backed up to ~grid/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host DB-02.
ssh: connect to host DB-02 port 22: Connection refused
Done with creating .ssh directory and setting permissions on remote host DB-02.
Copying local host public key to the remote host DB-01
The user may be prompted for a password or passphrase here since the script would be using SCP for host DB-01.
ssh: connect to host DB-01 port 22: Connection refused
lost connection
Done copying local host public key to the remote host DB-01
Copying local host public key to the remote host DB-02
The user may be prompted for a password or passphrase here since the script would be using SCP for host DB-02.
ssh: connect to host DB-02 port 22: Connection refused
lost connection
Done copying local host public key to the remote host DB-02
ssh: connect to host DB-01 port 22: Connection refused
ssh: connect to host DB-02 port 22: Connection refused
SSH setup is complete.
As mentioned earlier, it's possible to manually create the user equivalency or even edit the script, but this will not make runcluvfy.sh or the OUI pass the user equivalence test.
OUI failing even after creating user equivalency manually.

Running runcluvfy.sh will fail with
Checking user equivalence...
PRVF-4007 : User equivalence check failed for user "grid"
Check failed on nodes:
DB-01,DB-02
The ssh alias mentioned earlier is of no use here, as these tools call the full path /usr/bin/ssh, so the alias gets ignored. Another workaround tried was to replace ssh with a script that calls the original ssh executable with the port plus whatever parameters are passed to it.
cd /usr/bin
mv ssh sshi
and then create a file called ssh in /usr/bin with the following content, assuming ssh runs on port 2020
exec /usr/bin/sshi -p 2020 "$@"
This got runcluvfy.sh working for the local node, but it still failed the pre-req checks on the remote node; no detailed investigation was done into why.
Finally raised an SR asking if it's possible to install Oracle RAC when ssh is listening on a non-default port and if so how to get runcluvfy.sh to pass the user equivalence.



Strangely, Oracle's answer was to unblock port 22 (run ssh on the default port) and, once installed, change back to the non-default port. SSH is only used during installation, upgrades, patching and so on, not during "normal" database activity. But according to 220970.1, user equivalency must exist even after the installation, as many assistants and scripts depend on it.
This was tested on an existing environment by changing the ssh port from 22 to 2020 to see if the cluster stack would start without error, and everything "seemed to be fine" (it wasn't an extensive test, just a stop and start). Still, this didn't seem like something good to have on a production system (changing the ssh port every time some script fails due to missing user equivalency).
Oracle asked for a trace of runcluvfy.sh comp nodecon and the files to be uploaded, and after looking at the output said the "issue is similar to 4193093 and it's not a bug but an exception". Nothing could be found on 4193093, so it must be some internal bug document. This still didn't resolve the issue at hand.
Used strace to get the system calls for the command executed by cluvfy to check user equivalency.
strace -o db01_output_ssh_p.log /usr/bin/ssh -p 2020 -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 DB-02 /bin/true
The strace output shows ssh using the default port even though the port change is specified in sshd_config (this is a RHEL 5 server, 2.6.18-308.24.1.el5).
connect(3, {sa_family=AF_INET, sin_port=htons(22), sin_addr=inet_addr("xxx.xxx.xx.xx")}, 16) = -1 ECONNREFUSED (Connection refused)
Oracle's answer was that the above line shows ssh still using the default port when no port is passed, and that this is not a bug; if cluvfy needs to accept a port, that must go through the product enhancement process. Not exactly what was needed for the problem at hand, and not much help coming from the SR.
But the strace output did give an idea: why is ssh still using port 22 even when port 2020 is defined in sshd_config? The answer is that client-side ssh configuration parameters are loaded from the ~/.ssh/config file, which is already created as part of the clusterware pre-reqs. All that's needed to get user equivalence working when ssh listens on a non-default port is to specify the port in each user's (oracle and grid) .ssh/config file.
grid]$cat .ssh/config
Host *
        ForwardX11      no
        Port    2020
After this there is no need to specify the port (ssh -p 2020) on the command line, and runcluvfy.sh passes the user equivalence check without error.
Check: User equivalence for user "grid"
  Node Name                             Status
  ------------------------------------  ------------------------
  DB-01                             passed
  DB-02                             passed
Result: User equivalence check passed for user "grid"
Confirmation that ssh is using the non-default port from the strace
connect(3, {sa_family=AF_INET, sin_port=htons(2020), sin_addr=inet_addr("xxx.xxx.xx.xx")}, 16) = 0
The bottom line is that Oracle RAC can be installed when ssh is listening on a non-default port (the question asked in the SR that Oracle didn't answer). No changes are required on the Oracle side; simply add the ssh port to the .ssh/config file.