One of the prerequisites is the time zone file version.
SQL> SELECT version FROM v$timezone_file;

   VERSION
----------
         4

If this query reports version 4 (as in this case), no action is required. If it reports a version lower or higher than 4, see 1086400.1 for more information.
Current clusterware version
$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.4.0]
$ crsctl query crs softwareversion
CRS software version on node [rac1] is [10.2.0.4.0]

10gR2 clusterware can be upgraded in a rolling manner even though it is an in-place upgrade. The Oracle home in-place upgrade, however, requires the database to be shut down; if it is not, the following warning will be shown.
1. Before the start of the clusterware upgrade, verify that all cluster applications are online.
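This check can be scripted so it fails fast if anything is down; a minimal sketch, assuming the crs_stat -t column layout shown below (State in the fourth column):

```shell
# Abort-early check: list any registered resource whose State is OFFLINE.
# Assumes crs_stat -t output with a header line, a dashed separator line,
# and State as the 4th field on each data line.
crs_stat -t | awk '
  NR > 2 && $4 == "OFFLINE" { print "OFFLINE: " $1; bad = 1 }
  END { exit bad }' || echo "Fix OFFLINE resources before upgrading"
```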
crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora.rac10g2.db application    ONLINE    ONLINE    rac2
ora....21.inst application    ONLINE    ONLINE    rac1
ora....22.inst application    ONLINE    ONLINE    rac2
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2

2. Start the clusterware upgrade by executing runInstaller.
Even though it is a rolling upgrade, all nodes are selected by default.
When the installation has finished, it will ask to run the root script on each node. This is where the rolling upgrade happens.
3. Stop the clusterware stack on the first node.
Stop on the first node:

crsctl stop crs
Stopping resources.
This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.

The cluster stack is still running on the other node(s) and can be used by applications.
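Before running the root script it is worth confirming the stack really is down on this node; a sketch using crsctl's check command (run as root):

```shell
# Sketch: confirm the clusterware stack is down on this node before
# executing root102.sh. While the stack is stopped, the check should
# report failures contacting the daemons rather than a healthy status.
crsctl check crs
```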
rac2 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    OFFLINE
ora....C1.lsnr application    ONLINE    OFFLINE
ora.rac1.gsd   application    ONLINE    OFFLINE
ora.rac1.ons   application    ONLINE    OFFLINE
ora.rac1.vip   application    ONLINE    ONLINE    rac2
ora.rac10g2.db application    ONLINE    ONLINE    rac2
ora....21.inst application    ONLINE    OFFLINE
ora....22.inst application    ONLINE    ONLINE    rac2
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2

4. Run root102.sh on the first node.
# /opt/crs/oracle/product/10.2.0/crs/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /opt/crs/oracle/product/10.2.0/crs
Relinking some shared libraries.
Relinking of patched files is complete.
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
This may take a while on some systems.
.
10205 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 1 values from OCR.
Successfully deleted 1 keys from OCR.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node 1: rac1 rac1-pvt rac1
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Creating '/opt/crs/oracle/product/10.2.0/crs/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /opt/crs/oracle/product/10.2.0/crs/install/paramfile.crs

At the end of this, the clusterware stack will be up and running on this node (rac1) and will be available for use while the root script is run on the other nodes.
rac1 oracle]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora.rac10g2.db application    ONLINE    ONLINE    rac2
ora....21.inst application    ONLINE    ONLINE    rac1
ora....22.inst application    ONLINE    ONLINE    rac2
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2

5. Carry out the same steps on the other node(s) (rac2).
$ crsctl stop crs
Stopping resources.
This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.

Run root102.sh:
[root@rac2 oracle]# /opt/crs/oracle/product/10.2.0/crs/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /opt/crs/oracle/product/10.2.0/crs
Relinking some shared libraries.
Relinking of patched files is complete.
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
This may take a while on some systems.
.
10205 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 1 values from OCR.
Successfully deleted 1 keys from OCR.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node 2: rac2 rac2-pvt rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Creating '/opt/crs/oracle/product/10.2.0/crs/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /opt/crs/oracle/product/10.2.0/crs/install/paramfile.crs

6. Verify the clusterware is upgraded.
$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.5.0]
$ crsctl query crs softwareversion
CRS software version on node [rac2] is [10.2.0.5.0]

7. After the upgrade, opatch exhibits an error, which can be resolved by installing the latest version of OPatch.
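A typical way to refresh OPatch is to download patch 6880880 from My Oracle Support and unzip it into the Oracle home; a sketch, where the zip file name and its staging location are assumptions:

```shell
# Assumption: the latest OPatch for 10.2 was downloaded to /tmp as
# p6880880_102000_Linux.zip (patch 6880880 on My Oracle Support).
cd $ORACLE_HOME
mv OPatch OPatch.pre10205                 # keep the old OPatch as a backup
unzip -q /tmp/p6880880_102000_Linux.zip   # extracts a fresh OPatch directory
OPatch/opatch version                     # confirm the new version is in use
```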
8. Create a pfile of the database before the Oracle home is upgraded. Shut down all applications running out of the Oracle home before the Oracle home upgrade.
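Creating the backup pfile is a one-liner from SQL*Plus; a sketch, with the target path chosen purely for illustration:

```shell
# Sketch: dump the current spfile to a backup pfile before the upgrade.
# The destination path is an assumption, not from the original post.
sqlplus -S / as sysdba <<'EOF'
CREATE PFILE='/home/oracle/initrac10g2_backup.ora' FROM SPFILE;
EXIT
EOF
```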
[oracle@rac1 ~]$ srvctl stop database -d rac10g2
[oracle@rac1 ~]$ srvctl stop asm -n rac1
[oracle@rac1 ~]$ srvctl stop asm -n rac2
[oracle@rac1 ~]$ srvctl stop nodeapps -n rac2
[oracle@rac1 ~]$ srvctl stop nodeapps -n rac1

9. Execute runInstaller and upgrade the Oracle home across the cluster.
10. From the 10g Upgrade Guide:
The DBUA provides support for Real Application Clusters (RAC) and Automatic Storage Management (ASM).

Support for Real Application Clusters
In a Real Application Clusters (RAC) environment, the DBUA upgrades all the database and configuration files on all nodes in the cluster.

Support for Automatic Storage Management
The DBUA supports upgrades of databases that use Automatic Storage Management (ASM). If an ASM instance is detected, you have the choice of updating both the database and ASM or only the ASM instance.
In this case an ASM upgrade is not necessary.
11. The database upgrade can be done manually or using the DBUA. If the database is upgraded manually, set cluster_database=false before starting the database in upgrade mode. If the DBUA is used, it sets cluster_database=false itself.
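For the manual route, the parameter change and upgrade-mode startup look roughly like this (a sketch; the upgrade scripts to run afterwards are described in the 10.2 Upgrade Guide):

```shell
# Sketch of the manual path: disable cluster_database in the spfile,
# then restart one instance in upgrade mode.
sqlplus -S / as sysdba <<'EOF'
ALTER SYSTEM SET cluster_database=FALSE SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP UPGRADE
EOF
```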
The 10.2.0.4 database and Oracle home had PSU Jan 2012 applied, which introduced a new initialization parameter (_external_scn_rejection_threshold_hours). As the newly upgraded 10.2.0.5 home does not yet have this patch, trying to start the database for the upgrade is a problem.
SQL> startup nomount
ORA-01078: failure in processing system parameters
LRM-00101: unknown parameter name '_external_scn_rejection_threshold_hours'

This can be resolved either by starting the database with the pfile created earlier (step 8) or by applying PSU Jan 2012 on the new 10.2.0.5 homes before running the DBUA.
In this case the latter option was selected. After PSU Jan 2012 was applied on 10.2.0.5, the DBUA ran without an issue.
12. After the upgrade has finished, apply the post-installation script of the PSU. This concludes the 10.2.0.4 to 10.2.0.5 upgrade.
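For 10.2 PSUs the post-install step is typically running catbundle.sql with the psu option from the patched home; a sketch, to be confirmed against the PSU readme:

```shell
# Sketch: run the PSU post-install SQL from the upgraded 10.2.0.5 home.
# For 10.2 PSUs this is typically catbundle.sql with "psu apply";
# always verify the exact steps in the PSU readme.
cd $ORACLE_HOME/rdbms/admin
sqlplus -S / as sysdba <<'EOF'
@catbundle.sql psu apply
EXIT
EOF
```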
Useful metalink notes
Oracle Clusterware (CRS or GI) Rolling Upgrades [ID 338706.1]