[grid@rhel7 ~]$ crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
[grid@rhel7 ~]$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is [12.1.0.2.0]

The Oracle Restart environment used is a role-separated setup where GI is installed as the grid user while the Oracle database software is installed as the oracle user. The GI and Oracle homes had the latest PSU applied (Database Patch Set Update: 12.1.0.2.170117). However, the orachk pre-upgrade check listed the following patches as missing.
./orachk -u -o pre
...
WARNING => Oracle patch 21856522 is not applied on RDBMS_HOME /opt/app/oracle/product/12.1.0/dbhome_2
WARNING => Oracle patch 20958816 is not applied on RDBMS_HOME /opt/app/oracle/product/12.1.0/dbhome_2
...

MOS note 2180188.1 lists the patches to apply before upgrading to 12.2; whether a given patch is already in the inventory can also be confirmed with opatch (a quick sketch is shown below). Cluvfy could be used to further check the system setup, as shown after the sketch.
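A minimal sketch of such a check, assuming the 12.1 DB home path reported by orachk above; it simply greps the OPatch inventory for the two patch numbers flagged as missing.

export ORACLE_HOME=/opt/app/oracle/product/12.1.0/dbhome_2
# list applied interim patches and look for the ones orachk reported as missing
$ORACLE_HOME/OPatch/opatch lsinventory | grep -E '21856522|20958816' \
  || echo "patches 21856522/20958816 not found in the inventory"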
./runcluvfy.sh stage -pre hacfg

This option is not an upgrade-specific check, but it could be used to verify the readiness of the Oracle Restart installation.
Unlike previous versions, 12.2 grid infrastructure requires unzipping the software directly into the grid home; there is no GI home selection in the OUI. Create the new directory path for the 12.2 GI home and unzip the GI bundle.
mkdir -p /opt/app/oracle/product/12.2.0/grid
cp ~/linuxx64_12201_grid_home.zip /opt/app/oracle/product/12.2.0/grid
cd /opt/app/oracle/product/12.2.0/grid
unzip linuxx64_12201_grid_home.zip

ASM will be upgraded as part of the upgrade process. Therefore stop the database before starting the GI upgrade.
srvctl stop database -d se2db

The database being down can be confirmed before proceeding (a quick check is sketched below); then start the GI upgrade by running gridSetup.sh from the new grid home.
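A quick status check, run from the existing 12.1 environment as the oracle user, simply to confirm the instance is down before the GI upgrade starts:

# report the current state of the se2db database resource
srvctl status database -d se2db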
[grid@rhel7 grid]$ ./gridSetup.sh

This starts the GI installer; select the upgrade GI option. As mentioned earlier, the installer prompts to shut down the DB as ASM is upgraded. If wanted, the GI installation could be registered with a Cloud Control instance. The OS groups for authentication into ASM are the same as in the 12.1 setup. The Oracle base location could be selected, but not the GI home location, which is fixed based on where the GI software was extracted. Running of the root scripts could be automated by providing the root password or a user with sudo privileges. On the prerequisite check page the minimum memory requirement of 8GB is flagged; as this is a test setup, this requirement was ignored and the installation proceeded to the next step. After the summary page the installation progresses and, depending on the option selected (automatic or manual running of the root scripts), one of the corresponding prompts is presented. In either case (auto run or manual run) the root upgrade script failed due to an ACFS-related known issue. The output below shows the failure when rootupgrade.sh was run manually.
[root@rhel7 grid]# ./rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /opt/app/oracle/product/12.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/oracle/product/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/app/oracle/crsdata/rhel7/crsconfig/roothas_2017-04-03_05-38-44PM.log
2017/04/03 17:38:45 CLSRSC-363: User ignored prerequisites during installation

ASM has been upgraded and started successfully.

Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node rhel7 successfully pinned.
2017/04/03 17:39:32 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
CRS-4123: Oracle High Availability Services has been started.
2017/04/03 17:58:46 CLSRSC-482: Running command: 'srvctl upgrade model -s 12.1.0.2.0 -d 12.2.0.1.0 -p first'
2017/04/03 17:58:51 CLSRSC-482: Running command: 'srvctl upgrade model -s 12.1.0.2.0 -d 12.2.0.1.0 -p last'

rhel7     2017/04/03 17:58:54     /opt/app/oracle/product/12.2.0/grid/cdata/rhel7/backup_20170403_175854.olr     0
rhel7     2015/09/29 16:40:16     /opt/app/oracle/product/12.1.0/grid/cdata/rhel7/backup_20150929_164016.olr     0
2017/04/03 17:59:27 CLSRSC-400: A system reboot is required to continue installing.
The command '/opt/app/oracle/product/12.2.0/grid/perl/bin/perl -I/opt/app/oracle/product/12.2.0/grid/perl/lib -I/opt/app/oracle/product/12.2.0/grid/crs/install /opt/app/oracle/product/12.2.0/grid/crs/install/roothas.pl -upgrade' execution failed

Looking at the log file, it could be seen that the error is related to ACFS.
2017-04-03 17:59:26: /opt/app/oracle/product/12.2.0/grid/bin/crsctl disable has ... disabling CRS in preparation for reboot.
2017-04-03 17:59:26: Executing cmd: /opt/app/oracle/product/12.2.0/grid/bin/crsctl disable has
2017-04-03 17:59:27: Command output:
>  CRS-4621: Oracle High Availability Services autostart is disabled.
>End Command output
2017-04-03 17:59:27: /opt/app/oracle/product/12.2.0/grid/bin/crsctl disable has ... CRS disabled, ready for reboot.
2017-04-03 17:59:27: USM driver install status is 3
2017-04-03 17:59:27: ACFS drivers unable to be installed.
2017-04-03 17:59:27: Executing cmd: /opt/app/oracle/product/12.2.0/grid/bin/clsecho -p has -f clsrsc -m 400
2017-04-03 17:59:27: Command output:
>  CLSRSC-400: A system reboot is required to continue installing.
>End Command output
2017-04-03 17:59:27: CLSRSC-400: A system reboot is required to continue installing.

According to MOS note 2025056.1 this issue could be ignored if ACFS is not used or not supported; whether ACFS is actually supported on the running kernel can be confirmed with the acfsdriverstate utility (a quick sketch is shown below). Rootupgrade scripts are restartable when upgrading to 11.2.0.2 or above (1364947.1), and during the subsequent execution, shown after the sketch, rootupgrade.sh completes without any issue.
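A quick check, assuming the 12.2 grid home path used in this setup; acfsdriverstate reports whether the ADVM/ACFS drivers are supported on this OS and kernel combination.

# run from the new GI home; prints whether ADVM/ACFS is supported on this kernel
/opt/app/oracle/product/12.2.0/grid/bin/acfsdriverstate supported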
[root@rhel7 grid]# ./rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /opt/app/oracle/product/12.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/oracle/product/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/app/oracle/crsdata/rhel7/crsconfig/roothas_2017-04-03_06-09-16PM.log
2017/04/03 18:09:16 CLSRSC-363: User ignored prerequisites during installation
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel7'
CRS-2673: Attempting to stop 'ora.evmd' on 'rhel7'
CRS-2677: Stop of 'ora.evmd' on 'rhel7' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel7' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/04/03 18:10:35 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

When the root script failure happens during the auto run of the root scripts, click the retry button to run the root script again. It was found that after the initial failure, the subsequent execution completes successfully. The GI software versions are now upgraded to 12.2.
[grid@rhel7 ~]$ crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [12.2.0.1.0]
[grid@rhel7 ~]$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is [12.2.0.1.0]

Cluvfy could be used for post upgrade verification.
cluvfy stage -post hacfg

Verifying Oracle Restart Integrity ...PASSED
Verifying OLR Integrity ...PASSED

Post-check for Oracle Restart configuration was successful.

CVU operation performed:      stage -post hacfg
Date:                         04-Apr-2017 13:05:07
CVU home:                     /opt/app/oracle/product/12.2.0/grid/
User:                         grid
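Beyond cluvfy, the state of the individual Oracle Restart resources (ASM, listener, database) can be eyeballed with crsctl; an optional sanity check run as the grid user:

# tabular status of all resources registered with the upgraded HAS stack
crsctl stat res -t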
The next step is the upgrade of the database software. This is done by performing an out-of-place, software-only installation of the 12.2 DB software. One new thing introduced in 12.2 is an OS group specific to RAC administration. Following from the install guide: "You must designate a group as the OSRACDBA group during database installation. Members of this group are granted the SYSRAC privileges to perform day-to-day administration of Oracle databases on an Oracle RAC cluster". This OS group could be used for Oracle Restart as well (this does lead to an issue which is mentioned towards the end of the post). In order to use this OS group, create it as root and assign it to the oracle user.
[root@rhel7 oracle]# groupadd racdba
[root@rhel7 oracle]# usermod -g oinstall -G dba,oper,asmoper,asmdba,asmadmin,backupdba,dgdba,kmdba,racdba oracle
[root@rhel7 oracle]# id oracle
uid=1001(oracle) gid=1001(oinstall) groups=1001(oinstall),1002(dba),1003(oper),1005(asmoper),1006(asmdba),1007(asmadmin),1008(backupdba),1009(dgdba),1010(kmdba),1011(racdba)

During the software installation, select this group for RAC administration. Once the DB software is installed, the last step is to upgrade the database. 12.2 provides a new pre-upgrade check tool which comes in the form of a jar file (12.2_Oracle_Home/rdbms/admin/preupgrade.jar). This could be used to verify the pre-upgrade status of the database. Before running the tool, set the ORACLE_HOME, ORACLE_BASE, ORACLE_SID and PATH environment variables; a sketch of this setup follows, and below that is the output of running the pre-upgrade tool.
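A minimal sketch of the environment setup, assuming the tool is run against the source database from its existing 12.1 home; the exact variables the author exported are not shown in the post, so the values below are taken from the paths used throughout this setup.

# point the environment at the source (12.1) database before running preupgrade.jar
export ORACLE_BASE=/opt/app/oracle
export ORACLE_HOME=/opt/app/oracle/product/12.1.0/dbhome_2
export ORACLE_SID=se2db
export PATH=$ORACLE_HOME/bin:$PATH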
[oracle@rhel7 ~]$ $ORACLE_HOME/jdk/bin/java -jar preupgrade.jar TERMINAL

Report generated by Oracle Database Pre-Upgrade Information Tool Version 12.2.0.1.0

Upgrade-To version: 12.2.0.1.0

=======================================
Status of the database prior to upgrade
=======================================

      Database Name:  SE2DB
     Container Name:  SE2DB
       Container ID:  0
            Version:  12.1.0.2.0
         Compatible:  12.1.0.2.0
          Blocksize:  8192
           Platform:  Linux x86 64-bit
      Timezone File:  18
  Database log mode:  ARCHIVELOG
           Readonly:  FALSE
            Edition:  SE

  Oracle Component                       Upgrade Action        Current Status
  ----------------                       --------------        --------------
  Oracle Server                          [to be upgraded]      VALID
  Oracle Workspace Manager               [to be upgraded]      VALID
  Oracle Text                            [to be upgraded]      VALID
  Oracle XML Database                    [to be upgraded]      VALID

==============
BEFORE UPGRADE
==============

  Run /opt/app/oracle/cfgtoollogs/se2db/preupgrade/preupgrade_fixups.sql to complete all
  of the BEFORE UPGRADE action items below marked with '(AUTOFIXUP)'.

  REQUIRED ACTIONS
  ================
  + Adjust TABLESPACE SIZES as needed.
                                                Auto      12.2.0.1.0
    Tablespace                  Size         Extend       Min Size    Action
    ----------               ----------     --------     ----------   ------
    SYSAUX                       550 MB     DISABLED         500 MB     None
    SYSTEM                       700 MB      ENABLED         797 MB     None
    TEMP                          20 MB      ENABLED         150 MB     None
    UNDOTBS1                     200 MB      ENABLED         400 MB     None

    Note that 12.2.0.1.0 minimum sizes are estimates.
    If you plan to upgrade multiple pluggable databases concurrently,
    then you must ensure that the UNDO tablespace size is equal to at least
    the number of pluggable databases that you upgrade concurrently,
    multiplied by that minimum. Failing to allocate sufficient space can
    cause the upgrade to fail.

  RECOMMENDED ACTIONS
  ===================
  + Run 12.1.0.2.0 $ORACLE_HOME/rdbms/admin/utlrp.sql to recompile invalid objects.
    You can view the individual invalid objects with

      SET SERVEROUTPUT ON;
      EXECUTE DBMS_PREUP.INVALID_OBJECTS;

    5 objects are INVALID.

    There should be no INVALID objects in SYS/SYSTEM or user schemas before database upgrade.

  + (AUTOFIXUP) Gather stale data dictionary statistics prior to database upgrade in off-peak time using:

      EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

    Dictionary statistics do not exist or are stale (not up-to-date).

    Dictionary statistics help the Oracle optimizer find efficient SQL execution plans
    and are essential for proper upgrade timing. Oracle recommends gathering dictionary
    statistics in the last 24 hours before database upgrade.

    For information on managing optimizer statistics, refer to the 12.1.0.2
    Oracle Database SQL Tuning Guide.

=============
AFTER UPGRADE
=============

  Run /opt/app/oracle/cfgtoollogs/se2db/preupgrade/postupgrade_fixups.sql to complete all
  of the AFTER UPGRADE action items below marked with '(AUTOFIXUP)'.

  REQUIRED ACTIONS
  ================
  None

  RECOMMENDED ACTIONS
  ===================
  + Upgrade the database time zone version using the DBMS_DST package.

    The database is using timezone datafile version 18 and the target
    12.2.0.1.0 database ships with timezone datafile version 26.

    Oracle recommends using the most recent timezone data. For further
    information, refer to My Oracle Support Note 1585343.1.

  + (AUTOFIXUP) Gather dictionary statistics after the upgrade using the command:

      EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

    Oracle recommends gathering dictionary statistics after upgrade.

    Dictionary statistics provide essential information to the Oracle optimizer to help
    it find efficient SQL execution plans. After a database upgrade, statistics need to
    be re-gathered as there can now be tables that have significantly changed during the
    upgrade or new tables that do not have statistics gathered yet.

Preupgrade generated files:
    /opt/app/oracle/cfgtoollogs/se2db/preupgrade/preupgrade_fixups.sql
    /opt/app/oracle/cfgtoollogs/se2db/preupgrade/postupgrade_fixups.sql

Run the preupgrade_fixups.sql script to rectify any pre-upgrade issues raised (a sketch is shown below). The timezone file used by 12.1 is 18, and this will be upgraded to 26 as part of the DB upgrade; the current timezone file version can be queried as shown after the sketch.
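A minimal sketch of addressing the BEFORE UPGRADE items from the source 12.1 environment; the fixup script path is the one reported by the tool above, and the utlrp.sql run and dictionary statistics gathering follow the tool's recommendations.

# recompile invalid objects, gather dictionary statistics and run the generated fixups
sqlplus / as sysdba <<'EOF'
@?/rdbms/admin/utlrp.sql
EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;
@/opt/app/oracle/cfgtoollogs/se2db/preupgrade/preupgrade_fixups.sql
EOF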
SQL> select * from v$timezone_file;

FILENAME             VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_18.dat              18          0

Run the DBUA from the 12.2 home to begin the database upgrade (a launch sketch is shown below). The DBUA presents an upgrade summary before starting and the upgrade results at the end. Once the DBUA completes the upgrade, run the post-upgrade fixup script mentioned by the pre-upgrade tool, shown after the sketch.
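A minimal launch sketch; the 12.2 DB home path shown here is an assumption, as the location of the out-of-place 12.2 software install is not listed in this post. DBUA was run in its default GUI mode.

# launch DBUA from the new 12.2 database home (path assumed for illustration)
export ORACLE_HOME=/opt/app/oracle/product/12.2.0/dbhome_1
$ORACLE_HOME/bin/dbua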
@/opt/app/oracle/cfgtoollogs/se2db/preupgrade/postupgrade_fixups.sql

Executing Oracle POST-Upgrade Fixup Script

Auto-Generated by:       Oracle Preupgrade Script
                         Version: 12.2.0.1.0 Build: 1
Generated on:            2017-03-31 18:27:51

For Source Database:     SE2DB
Source Database Version: 12.1.0.2.0
For Upgrade to Version:  12.2.0.1.0

Fixup Check Name                Status  Further DBA Action
----------------                ------  ------------------
old_time_zones_exist            Passed  None
post_dictionary                 Passed  None

PL/SQL procedure successfully completed.

Verify the timezone file has been updated.
SQL> select * from v$timezone_file;

FILENAME             VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_26.dat              26          0

After the upgrade, using srvctl to administer the database results in an error as shown below.
[oracle@rhel7 ~]$ srvctl stop database -db se2db
PRCD-1334 : failed to stop database se2db
PRCD-1124 : Failed to stop database se2db and its services
PRCD-1131 : Failed to stop database se2db and its services on nodes
PRCR-1133 : Failed to stop database se2db and its running services
PRCR-1132 : Failed to stop resources using a filter
ORA-01017: invalid username/password; logon denied
CRS-2675: Stop of 'ora.se2db.db' on 'rhel7' failed

The reason for this is the RAC admin OS group that was created during the database software installation. The Oracle documentation only mentions making the grid user part of the OSDBA group. Following from the Oracle install guide: "When you manually add a database to the Oracle Restart configuration, you must also add the Oracle grid infrastructure software owner as a member of the OSDBA group of that database. This is because the grid infrastructure components must be able to connect to the database as SYSDBA to start and stop the database. For example, if the host user who installed the grid infrastructure home is named grid and the OSDBA group of the new database is named dba, then user grid must be a member of the dba group". However, if a separate OS group is used for RAC administration, then the grid user must be part of that group as well to prevent errors such as the above.
[root@rhel7 ~]# id grid
uid=1002(grid) gid=1001(oinstall) groups=1001(oinstall),1002(dba),1005(asmoper),1006(asmdba),1007(asmadmin)
[root@rhel7 ~]# usermod -g oinstall -G dba,asmoper,asmdba,asmadmin,racdba grid
[root@rhel7 ~]# id grid
uid=1002(grid) gid=1001(oinstall) groups=1001(oinstall),1002(dba),1005(asmoper),1006(asmdba),1007(asmadmin),1011(racdba)

After this change, srvctl could be used to manage the DB without any issue.
If satisfied with the upgrade and application testing, change the compatible parameter on the DB and the compatibility attributes on the ASM disk groups.
SQL> alter system set compatible='12.2.0.1.0' scope=spfile;
SQL> shutdown immediate;

Log in as the grid user and connect to the ASM instance as SYSASM (a connection sketch is shown below); then raise the disk group compatibility attributes.
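A minimal connection sketch, assuming the grid user's environment points at the 12.2 GI home and the default Oracle Restart ASM SID of +ASM:

# connect to the ASM instance with SYSASM privileges
[grid@rhel7 ~]$ export ORACLE_HOME=/opt/app/oracle/product/12.2.0/grid
[grid@rhel7 ~]$ export ORACLE_SID=+ASM
[grid@rhel7 ~]$ $ORACLE_HOME/bin/sqlplus / as sysasm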
SQL> alter diskgroup FRA set attribute 'compatible.asm'='12.2.0.1.0';
SQL> alter diskgroup DATA set attribute 'compatible.asm'='12.2.0.1.0';
SQL> alter diskgroup fra set attribute 'compatible.rdbms'='12.2.0.1.0';
SQL> alter diskgroup data set attribute 'compatible.rdbms'='12.2.0.1.0';

SQL> select g.name,a.name,a.value from v$asm_diskgroup g, v$asm_attribute a where g.group_number=a.group_number and a.name like '%compat%';

NAME                  NAME                  VALUE
--------------------  --------------------  ------------
DATA                  compatible.asm        12.2.0.1.0
DATA                  compatible.rdbms      12.2.0.1.0
FRA                   compatible.asm        12.2.0.1.0
FRA                   compatible.rdbms      12.2.0.1.0

Start the database (a one-line sketch is shown below) and run the orachk post-upgrade check and the cluvfy post checks shown after it.
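A trivial sketch of starting the database under the upgraded configuration, run as the oracle user:

# start the se2db database through Oracle Restart
srvctl start database -db se2db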
./orachk -u -o post

cluvfy stage -post hacfg

Verifying Oracle Restart Integrity ...PASSED
Verifying OLR Integrity ...PASSED

Post-check for Oracle Restart configuration was successful.

CVU operation performed:      stage -post hacfg
Date:                         04-Apr-2017 13:05:07
CVU home:                     /opt/app/oracle/product/12.2.0/grid/
User:                         grid

Useful metalink notes
ACFS Drivers Install reports CLSRSC-400: A system reboot is required to continue installing [ID 2025056.1]
Patches to apply before upgrading Oracle GI and DB to 12.2.0.1 [ID 2180188.1]
12.2 Grid Infrastructure Installation: What's New [ID 2024946.1]
How to Upgrade to/Downgrade from Grid Infrastructure 12.2 and Known Issues [ID 2240959.1]
Related Posts
Upgrading Single Instance on ASM from 11.2.0.3 to 11.2.0.4
Upgrading Grid Infrastructure Used for Single Instance from 11.2.0.4 to 12.1.0.2