Tuesday, March 17, 2015

Upgrading Single Instance on ASM from 11.2.0.3 to 11.2.0.4

This post lists points to look out for when upgrading an 11.2.0.3 single instance DB on ASM (with role separation) to 11.2.0.4. It's not a comprehensive upgrade guide; refer to the Oracle documentation for more information.
1. Before installing the new 11.2.0.4 GI, check that the inventory.xml has CRS="true" against the existing GI home entry. In some cases CRS="true" is missing, usually when the GI was installed as software only and later converted to HAS.
<HOME NAME="Ora11g_gridinfrahome1" LOC="/opt/app/oracle/product/11.2.0/grid_1" TYPE="O" IDX="1"/>
If there's no CRS="true" against the GI home, the new version fails to detect the existing clusterware.
If CRS="true" is missing, update the inventory information by running the following command, specifying the existing GI home and CRS=true
./runInstaller -updateNodeList ORACLE_HOME=/opt/app/oracle/product/11.2.0/grid_1 CRS=true

<HOME NAME="Ora11g_gridinfrahome1" LOC="/opt/app/oracle/product/11.2.0/grid_1" TYPE="O" IDX="1" CRS="true"/>
Afterwards, run the 11.2.0.4 installer and the existing GI will be detected. For more information on this issue refer to MOS notes 1953932.1 and 1117063.1.
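Whether the entry has been updated can be quickly verified by grepping the central inventory (the inventory path below is an assumption; the actual location is recorded in /etc/oraInst.loc):
$ grep grid_1 /opt/app/oraInventory/ContentsXML/inventory.xml
The output should show the HOME entry with CRS="true" as above.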

2. Before the upgrade, HAS shows version 11.2.0.3 for both the release and software versions.
$ crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.3.0]
$  crsctl query has softwareversion
Oracle High Availability Services version on the local node is [11.2.0.3.0]
GI upgrades are out of place, i.e. install into a different location from the existing GI home (grid_4 in this case).
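As a minimal sketch, the new home location could be prepared before launching the installer (the paths and role separated owners are the ones used in this post):
# as root: create the out of place GI home and assign it to the grid software owner
mkdir -p /opt/app/oracle/product/11.2.0/grid_4
chown grid:oinstall /opt/app/oracle/product/11.2.0/grid_4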
3. Run rootupgrade.sh when prompted. This will upgrade the ASM instance.
# /opt/app/oracle/product/11.2.0/grid_4/rootupgrade.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /opt/app/oracle/product/11.2.0/grid_4

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/oracle/product/11.2.0/grid_4/crs/install/crsconfig_params
Creating trace directory

ASM Configuration upgraded successfully.

Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node rhel6m1 successfully pinned.
Replacing Clusterware entries in upstart
Replacing Clusterware entries in upstart

rhel6m1     2015/03/12 12:41:21     /opt/app/oracle/product/11.2.0/grid_4/cdata/rhel6m1/backup_20150312_124121.olr

rhel6m1     2015/03/11 17:57:18     /opt/app/oracle/product/11.2.0/grid_1/cdata/rhel6m1/backup_20150311_175718.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
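The OLR backups listed at the end of the script output can be listed again at any time with ocrconfig from the new GI home:
$ /opt/app/oracle/product/11.2.0/grid_4/bin/ocrconfig -local -showbackup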
4. Afterwards, the software version will reflect the new GI version but the release version will remain the same.
$  crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.3.0]
$  crsctl query has softwareversion
Oracle High Availability Services version on the local node is [11.2.0.4.0]
5. The ASM instance and listener will run out of the new GI home.
$ srvctl config asm
ASM home: /opt/app/oracle/product/11.2.0/grid_4
ASM listener: LISTENER
Spfile: +DATA/asm/asmparameterfile/registry.253.874087179
ASM diskgroup discovery string: /dev/sd*

$ srvctl config listener
Name: LISTENER
Home: /opt/app/oracle/product/11.2.0/grid_4
End points: TCP:1521
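As a quick sanity check, the running listener process should also be seen starting out of the new home:
$ ps -ef | grep tnslsnr | grep -v grep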


6. Before installing the database software, check that $ORACLE_BASE/cfgtoollogs has write permission for the oracle user. As this is a setup with role separation, this location must have write permission for the oinstall group so that the oracle user can create directories/files inside cfgtoollogs.
chmod 770 $ORACLE_BASE/cfgtoollogs
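A simple write test as the oracle user confirms the change took effect (the test file name here is just an example):
$ touch $ORACLE_BASE/cfgtoollogs/write_test && rm $ORACLE_BASE/cfgtoollogs/write_test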
7. Install the database software as an out of place upgrade. After the database software is installed, and before running DBUA, make sure the oracle binary has the correct permissions in the new Oracle home suited to a role separated setup. The group ownership should be asmadmin (or the corresponding ASM admin group used), but after the software install it remains oinstall.
[oracle@rhel6m1 bin]$ ls -l oracle*
-rwsr-s--x. 1 oracle oinstall 226889744 Mar 12 13:29 oracle
-rwxr-x---. 1 oracle oinstall         0 Aug 24  2013 oracleO
Change this with setasmgidwrap to the correct ownership.
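The wrapper is run as the grid user from the GI home and is given the path to the oracle binary in the new database home (the database home path below is hypothetical, for illustration only):
$ /opt/app/oracle/product/11.2.0/grid_4/bin/setasmgidwrap o=/opt/app/oracle/product/11.2.0/dbhome_2/bin/oracle
Afterwards the binary shows the asmadmin group ownership: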
[oracle@rhel6m1 bin]$ ls -l oracle*
-rwsr-s--x. 1 oracle asmadmin 226889744 Mar 12 13:29 oracle
-rwxr-x---. 1 oracle oinstall         0 Aug 24  2013 oracleO
8. Once the oracle binary permissions are set, run DBUA to upgrade the database. Verify the post upgrade status with cluvfy and orachk.
cluvfy stage -post hacfg

orachk -u -o post
9. Finally, if the upgrade is satisfactory, increase the compatible parameter on the database and the ASM disk group attributes to the new 11.2.0.4 version. These changes cannot be rolled back. On the database
alter system set compatible='11.2.0.4.0' scope=spfile sid='*';
On the ASM instance
alter diskgroup data set attribute 'compatible.asm'='11.2.0.4';
alter diskgroup flash set attribute 'compatible.asm'='11.2.0.4';
alter diskgroup data set attribute 'compatible.rdbms'='11.2.0.4';
alter diskgroup flash set attribute 'compatible.rdbms'='11.2.0.4';
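The new attribute values can be verified from the ASM instance:
select name, compatibility, database_compatibility from v$asm_diskgroup;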
The HAS now shows 11.2.0.4 as the release version as well.
$  crsctl query has softwareversion
Oracle High Availability Services version on the local node is [11.2.0.4.0]
$  crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.4.0]
This concludes the upgrade of a single instance on ASM (with role separation) from 11.2.0.3 to 11.2.0.4.

There was an incident where, after the GI was upgraded, the ASM instance didn't start and the following error was shown
srvctl start asm
PRCR-1079 : Failed to start resource ora.asm
CRS-5017: The resource action "ora.asm start" encountered the following error:
ORA-00119: invalid specification for system parameter LOCAL_LISTENER
ORA-00132: syntax error or unresolved network name 'LISTENER_+ASM'
. For details refer to "(:CLSN00107:)" in "/opt/app/oracle/product/11.2.0/grid_4/log/rhel6m1/agent/ohasd/oraagent_grid/oraagent_grid.log".
This standalone system was created from a GI home first installed as software only and also had the spfile in the local file system (GI_HOME/dbs). It is not certain whether either of these contributed to the error being thrown. Looking at the ASM spfile, it could be seen that a local_listener entry had been added during the upgrade.
  large_pool_size          = 12M
  instance_type            = "asm"
  remote_login_passwordfile= "EXCLUSIVE"
  local_listener           = "LISTENER_+ASM"
  asm_diskstring           = "/dev/sd*"
  asm_diskgroups           = "FLASH"
  asm_diskgroups           = "DATA"
  asm_power_limit          = 1
  diagnostic_dest          = "/opt/app/oracle"
The listener.ora doesn't have such an entry and, moreover, on 11.2 the local listener entries are auto updated, so this entry is unnecessary. Once an ASM pfile was created without the local_listener entry, the ASM instance started. A new spfile then had to be created and registered with the ASM configuration information.
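As a sketch, the recovery was along the following lines (the pfile name is just an example, and the new spfile path is whatever registry file ASM generates in the disk group):
create pfile='/tmp/asmpfile.ora' from spfile;
-- edit /tmp/asmpfile.ora to remove the local_listener entry, then
startup pfile='/tmp/asmpfile.ora';
create spfile='+DATA' from pfile='/tmp/asmpfile.ora';
Then register the new spfile location against the ASM resource with srvctl modify asm -p <new spfile path>.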

Useful metalink notes
Oracle Restart: GI Upgrade From 12.1.0.1 to 12.1.0.2 Fails With INS-40406 [ID 1953932.1]
Oracle Restart ASM 11gR2: INS-40406 Upgrading ASM Instance To Release 11.2.0.1.0 [ID 1117063.1]

Related Posts
Upgrading RAC from 11.2.0.3 to 11.2.0.4 - Grid Infrastructure
Upgrading RAC from 11.2.0.3 to 11.2.0.4 - Database