Wednesday, May 6, 2026

Out of Place (OOP) Patching of Oracle Restart 26ai

Previous posts showed OOP patching for 19c and 21c Oracle Restart. Things are even simpler with 26ai.
The current patch level is
crsctl query has releasepatch
Oracle Clusterware release patch level is [2107015493] and the complete list of patches [38743669 38743682 38743688 38743695 38743706 ] have been applied on the local node. The release patch string is [23.26.1.0.0].
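The release patch string can be pulled out of that output programmatically; `parse_release_patch` below is a hypothetical helper, not part of the Oracle tooling.

```shell
# parse_release_patch is a hypothetical helper, not part of the Oracle tooling.
# It extracts the release patch string from `crsctl query has releasepatch` output.
parse_release_patch() {
    sed -n 's/.*release patch string is \[\([^]]*\)\].*/\1/p'
}

# Example with the output shown above:
crsctl_out='Oracle Clusterware release patch level is [2107015493] and the complete list of patches [38743669 38743682 38743688 38743695 38743706 ] have been applied on the local node. The release patch string is [23.26.1.0.0].'
echo "$crsctl_out" | parse_release_patch
# -> 23.26.1.0.0
```
On a live system the same function can be fed directly: `crsctl query has releasepatch | parse_release_patch`.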

1. The first step is to set up the new GI home. The best way to do this is to use the GI gold image that includes the latest RU. Oracle documentation mentions unzipping the installer and then patching before switching homes. During testing it was found that opatchauto brings down the HAS stack even though the existing home is not affected by the patching. By using the gold image, downtime can be kept to a minimum. In this case the new GI home is
/opt/company/app/oracle/product/23.26.2/grid
The gold image with the latest RU at the time of this post is p39099896_230000_Linux-x86-64.zip. Unzip the file into the above location.
unzip p39099896_230000_Linux-x86-64.zip -d /opt/company/app/oracle/product/23.26.2/grid
2. Register the new GI home location in the oracle inventory.
cd /opt/company/app/oracle/product/23.26.2/grid
./gridSetup.sh -silent -setupHomeAs /opt/company/app/oracle/product/23.26.1/grid
The -setupHomeAs option installs the GI software with the same Oracle base and privileged operating system groups as the existing GI home, which is located in /opt/company/app/oracle/product/23.26.1/grid. This step also adds an entry to inventory.xml.
<HOME NAME="OraGI23Home2" LOC="/opt/company/app/oracle/product/23.26.2/grid" TYPE="O" IDX="8"/>
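Registration can be confirmed with a quick grep; `home_registered` below is a hypothetical check, and the central inventory path is an assumption that should be adjusted for your system.

```shell
# home_registered is a hypothetical check, assuming the default central
# inventory layout. It succeeds if the home's LOC appears in inventory.xml.
home_registered() {
    inv="${1:?inventory.xml path}"
    home="${2:?GI home path}"
    grep -q "LOC=\"$home\"" "$inv"
}

# Example (inventory path is an assumption, adjust for your system):
# home_registered /opt/company/app/oraInventory/ContentsXML/inventory.xml \
#     /opt/company/app/oracle/product/23.26.2/grid && echo registered
```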
After registering, the patches in the new GI home can be checked.
./OPatch/opatch lspatches
39099119;MICRONAUT RELEASE UPDATE 23.26.2.0.0 (39099119) Gold Image
39099244;RHP RELEASE UPDATE 23.26.2.0.0 (39099244) Gold Image
39099110;ACFS RELEASE UPDATE 23.26.2.0.0 (39099110) Gold Image
39093738;OCW RELEASE UPDATE 23.26.2.0.0 (39093738) Gold Image
39093711;Database Release Update : 23.26.2.0.0 (39093711) Gold Image
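To compare the patches in two homes, just the patch numbers are needed; `patch_ids` below is a hypothetical helper that strips `opatch lspatches` output down to sorted IDs suitable for diffing.

```shell
# patch_ids is a hypothetical helper: it extracts just the patch numbers
# from `opatch lspatches` output (fields are semicolon separated), sorted
# so that two homes' lists can be compared with diff.
patch_ids() {
    cut -d';' -f1 | sort
}

# Example: ./OPatch/opatch lspatches | patch_ids
```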
3. Switch GI homes with the following command
./gridSetup.sh -silent -switchGridHome
Launching Oracle Grid Infrastructure Setup Wizard...

As a root user, run the following script(s):
        1. /opt/company/app/oracle/product/23.26.2/grid/root.sh

Run /opt/company/app/oracle/product/23.26.2/grid/root.sh on the following nodes:
[ip-172-31-2-172]



4. When root.sh is run, it brings down the existing HAS stack (downtime begins), carries out the pre- and post-patch configuration work, and brings the HAS stack up on the new GI home.
/opt/company/app/oracle/product/23.26.2/grid/root.sh
Check /opt/company/app/oracle/product/23.26.2/grid/install/root_ip-172-31-2-172.eu-west-1.compute.internal_2026-05-06_11-50-33-113071205.log for the output of root script
5. Once the HAS stack is up, check the patch level
crsctl query has releasepatch
Oracle Clusterware release patch level is [1283328329] and the complete list of patches [39093711 39093738 39099110 39099119 39099244 ] have been applied on the local node. The release patch string is [23.26.2.0.0].
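In scripted patching runs this final check can be asserted rather than eyeballed; `verify_patch_level` below is a hypothetical helper that compares an expected release patch string against the crsctl output piped into it.

```shell
# verify_patch_level is a hypothetical post-switch check: it compares an
# expected release patch string against `crsctl query has releasepatch`
# output read from stdin, succeeding only on an exact match.
verify_patch_level() {
    expected="$1"
    actual=$(sed -n 's/.*release patch string is \[\([^]]*\)\].*/\1/p')
    [ "$actual" = "$expected" ]
}

# Example: crsctl query has releasepatch | verify_patch_level 23.26.2.0.0
```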

This concludes the out-of-place patching of Oracle Restart 26ai.

If gold image installation is not preferred, the RU can be applied at the time the GI home is registered. This requires the use of -applyRU, which is deprecated in 26ai. This method also avoids the downtime caused by patching a base installation.
./gridSetup.sh -silent -setupHomeAs /opt/company/app/oracle/product/23.26.1/grid -applyRU /opt/installs/26ai/39088031
[INFO] [INS-32830] Options -applyRU, -applyPSU and -applyRUR are deprecated in this release. For more details, refer to help documentation.

To switch back to the old home, run the following as root on the old GI home
# cd /opt/company/app/oracle/product/23.26.1/grid
# ./crs/install/roothas.sh -unlock -dstcrshome /opt/company/app/oracle/product/23.26.1/grid
Then run the switch home command from the old GI home.
$ cd /opt/company/app/oracle/product/23.26.1/grid
./gridSetup.sh -silent -switchGridHome
Related Posts
Out of Place (OOP) Patching of Oracle Restart
Out of Place (OOP) Patching of Oracle Restart 21c

Saturday, February 28, 2026

Upgrading Oracle Restart from 19c to 26ai on RHEL 8

This post shows the steps for upgrading the grid infrastructure portion of 19c Oracle Restart to 26ai.
The current patch level on GI is RU 19.29.
$ORACLE_HOME/OPatch/opatch lspatches
38380425;TOMCAT RELEASE UPDATE 19.0.0.0.0 (38380425)
38322923;OCW RELEASE UPDATE 19.29.0.0.0 (38322923)
38311528;ACFS RELEASE UPDATE 19.29.0.0.0 (38311528)
38291812;Database Release Update : 19.29.0.0.251021 (38291812)
36758186;DBWLM RELEASE UPDATE 19.0.0.0.0 (36758186)
The OS is RHEL 8.
cat /etc/redhat-release
Red Hat Enterprise Linux release 8.8 (Ootpa)

$ uname -r
4.18.0-477.13.1.el8_8.x86_64
The software and release version of the HAS are as below.
$ crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [19.0.0.0.0]

$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is [19.0.0.0.0]
For 26ai Oracle Restart, two additional pre-reqs need to be completed. One is to set the kernel parameter kernel.panic=1, and the other is to install the RPM compat-openssl10(x86_64)-1.0.2. With these in place no other pre-req failures are expected (on this test system swap is flagged, which could be ignored).
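The kernel.panic pre-req can be checked from the shell before running cluvfy. This is only a sketch: `check_kernel_panic` is a hypothetical function, and the file path is parameterised purely so the check can be exercised against a test file.

```shell
# Hypothetical pre-req check for kernel.panic=1. The file argument is
# parameterised only for testability; on a real host the default of
# /proc/sys/kernel/panic is what matters.
check_kernel_panic() {
    panic_file="${1:-/proc/sys/kernel/panic}"
    [ "$(cat "$panic_file")" = "1" ]
}

# On a real host (as root), to set rather than just check:
#   sysctl -w kernel.panic=1          # and persist it in /etc/sysctl.conf
#   rpm -q compat-openssl10.x86_64    # confirm the RPM pre-req as well
```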
./runcluvfy.sh stage -pre hacfg

Performing following verification checks ...

  Physical Memory ...PASSED
  Available Physical Memory ...PASSED
  Swap Size ...FAILED (PRVF-7573)
  Free Space: ip-172-31-2-172:/usr,ip-172-31-2-172:/var,ip-172-31-2-172:/etc,ip-172-31-2-172:/sbin,ip-172-31-2-172:/tmp ...PASSED
  User Existence: oracle ...
    Users With Same UID: 1001 ...PASSED
  User Existence: oracle ...PASSED
  Group Existence: dba ...PASSED
  Group Membership: dba ...PASSED
  Run Level ...PASSED
  Architecture ...PASSED
  OS Kernel Version ...PASSED
  OS Kernel Parameter: semmsl ...PASSED
  OS Kernel Parameter: semmns ...PASSED
  OS Kernel Parameter: semopm ...PASSED
  OS Kernel Parameter: semmni ...PASSED
  OS Kernel Parameter: shmmax ...PASSED
  OS Kernel Parameter: shmmni ...PASSED
  OS Kernel Parameter: shmall ...PASSED
  OS Kernel Parameter: file-max ...PASSED
  OS Kernel Parameter: ip_local_port_range ...PASSED
  OS Kernel Parameter: rmem_default ...PASSED
  OS Kernel Parameter: rmem_max ...PASSED
  OS Kernel Parameter: wmem_default ...PASSED
  OS Kernel Parameter: wmem_max ...PASSED
  OS Kernel Parameter: aio-max-nr ...PASSED
  OS Kernel Parameter: panic_on_oops ...PASSED
  OS Kernel Parameter: kernel.panic ...PASSED
  Package: binutils-2.30-113.0.2 ...PASSED
  Package: compat-openssl10-1.0.2 (x86_64) ...PASSED
  Package: fontconfig-2.13.1 (x86_64) ...PASSED
  Package: libgcc-8.5.0 (x86_64) ...PASSED
  Package: libstdc++-8.5.0 (x86_64) ...PASSED
  Package: sysstat-11.7.3 ...PASSED
  Package: make-4.2.1 ...PASSED
  Package: glibc-2.28 (x86_64) ...PASSED
  Package: glibc-devel-2.28 (x86_64) ...PASSED
  Package: libaio-0.3.112 (x86_64) ...PASSED
  Package: nfs-utils-2.3.3-51 ...PASSED
  Package: smartmontools-7.1-1 ...PASSED
  Package: net-tools-2.0-0.52 ...PASSED
  Package: policycoreutils-2.9-1 ...PASSED
  Package: policycoreutils-python-utils-2.9-1 ...PASSED
  Users With Same UID: 0 ...PASSED
  Current Group ID ...PASSED
  Root user consistency ...PASSED

Pre-check for Oracle Restart configuration was unsuccessful.


Failures were encountered during execution of CVU verification request "stage -pre hacfg".

Swap Size ...FAILED
ip-172-31-2-172: PRVF-7573 : Sufficient swap size is not available on node
                 "ip-172-31-2-172" [Required = 15.3892GB (1.613672E7KB) ; Found
                 = 0.0 bytes]


CVU operation performed:      stage -pre hacfg
Date:                         Feb 27, 2026, 3:47:28 PM
CVU version:                  23.26.1.0.0 (010926x8664)
CVU home:                     /opt/_____/app/oracle/product/23.26.1/grid
Grid home:                    /opt/______/app/oracle/product/19.28.0/grid
User:                         oracle
Operating system:             Linux4.18.0-477.13.1.el8_8.x86_64
Run gridSetup.sh and select the upgrade option.

Shut down the databases that use the ASM instance managed by the GI being upgraded.

Register with OMS

Select the Oracle base location

Provide the root password or sudo-related details. If omitted, the root script can be run manually at the end.

Check pre-reqs

Verify the summary. At this stage the responses can be saved to a response file, which can later be used for a silent upgrade.

Begin the upgrade and run the rootupgrade.sh when prompted.


./rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/company/app/oracle/product/23.26.1/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Executing command '/opt/company/app/oracle/product/23.26.1/grid/perl/bin/perl -I/opt/company/app/oracle/product/23.26/crs/install/roothas.pl  -upgrade'
Using configuration parameter file: /opt/company/app/oracle/product/23.26.1/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/company/app/oracle/crsdata/ip-172-31-2-172/crsconfig/roothas_2026-02-27_04-01-24PM.log
2026/02/27 16:01:25 CLSRSC-595: Executing upgrade step 1 of 11: 'UpgPrechecks'.
acfsutil info fs: ACFS-03036: no mounted ACFS file systems
2026/02/27 16:01:29 CLSRSC-595: Executing upgrade step 2 of 11: 'GetOldConfig'.
2026/02/27 16:01:32 CLSRSC-595: Executing upgrade step 3 of 11: 'GenSiteGUIDs'.
2026/02/27 16:01:32 CLSRSC-595: Executing upgrade step 4 of 11: 'SetupOSD'.
2026/02/27 16:01:32 CLSRSC-595: Executing upgrade step 5 of 11: 'PreUpgrade'.
2026/02/27 16:07:34 CLSRSC-595: Executing upgrade step 6 of 11: 'UpgradeOLR'.
clscfg: EXISTING configuration version 0 detected.
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
2026/02/27 16:07:42 CLSRSC-595: Executing upgrade step 7 of 11: 'UpgradeOCR'.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node ip-172-31-2-172 successfully pinned.
2026/02/27 16:07:48 CLSRSC-595: Executing upgrade step 8 of 11: 'CreateOHASD'.
2026/02/27 16:07:49 CLSRSC-595: Executing upgrade step 9 of 11: 'ConfigOHASD'.
2026/02/27 16:08:30 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2026/02/27 16:08:39 CLSRSC-595: Executing upgrade step 10 of 11: 'UpgradeSIHA'.


ip-172-31-2-172     2026/02/27 16:09:39     /opt/company/app/oracle/crsdata/ip-172-31-2-172/olr/backup_20260227_160939.olr     2107015493

ip-172-31-2-172     2022/02/08 17:07:46     /opt/company/app/oracle/crsdata/ip-172-31-2-172/olr/backup_20220208_170746.olr     3342960164
2026/02/27 16:09:39 CLSRSC-595: Executing upgrade step 11 of 11: 'InstallACFS'.

2026/02/27 16:11:26 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
This concludes the successful upgrade of Oracle Restart GI to 26ai.

cluvfy stage -post hacfg

Performing following verification checks ...

  Oracle Restart Integrity ...PASSED
  OLR Integrity ...PASSED

Post-check for Oracle Restart configuration was successful.

CVU operation performed:      stage -post hacfg
Date:                         Feb 24, 2026, 3:35:01 PM
CVU version:                  23.26.1.0.0 (010926x8664)
CVU home:                     /opt/company/app/oracle/product/23.26.1/grid
Grid home:                    /opt/company/app/oracle/product/23.26.1/grid
User:                         oracle
Operating system:             Linux4.18.0-553.84.1.el8_10.x86_64

$ crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [23.0.0.0.0]

$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is [23.0.0.0.0]
The ASM diskgroup compatibility attributes can also be upgraded at this stage. The current attribute values are
select * from v$asm_attribute where name like '%compat%'

NAME                           VALUE                          GROUP_NUMBER ATTRIBUTE_INDEX ATTRIBUTE_INCARNATION READ_ON SYSTEM_     CON_ID
------------------------------ ------------------------------ ------------ --------------- --------------------- ------- ------- ----------
compatible.asm                 19.0.0.0.0                                1             110                     1 N       Y                0
compatible.rdbms               19.0.0.0.0                                1             112                     1 N       Y                0
compatible.advm                19.0.0.0.0                                1             113                     1 N       Y                0
compatible.asm                 19.0.0.0.0                                2             110                     1 N       Y                0
compatible.rdbms               19.0.0.0.0                                2             112                     1 N       Y                0
compatible.advm                19.0.0.0.0                                2             113                     1 N       Y                0
DO NOT upgrade compatible.rdbms unless the database compatible parameter has been raised first.
Both compatible.asm and compatible.advm are upgraded for each diskgroup using asmca.
$ asmca -silent -editDiskGroupAttributes -diskGroupName data -attribute compatible.asm=23.0.0.0.0

[DBT-30025] UPDATE_DG_ATTRIBUTES_SUCCESS

$ asmca -silent -editDiskGroupAttributes -diskGroupName fra -attribute compatible.asm=23.0.0.0.0

[DBT-30025] UPDATE_DG_ATTRIBUTES_SUCCESS

$ asmca -silent -editDiskGroupAttributes -diskGroupName data -attribute compatible.advm=23.0.0.0.0

[DBT-30025] UPDATE_DG_ATTRIBUTES_SUCCESS

$ asmca -silent -editDiskGroupAttributes -diskGroupName fra -attribute compatible.advm=23.0.0.0.0

[DBT-30025] UPDATE_DG_ATTRIBUTES_SUCCESS
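The four asmca invocations above can be driven by a small loop. This is a sketch: `advance_compat` is a hypothetical function, the ASMCA variable is parameterised only so the loop can be dry-run with `ASMCA=echo`, and compatible.rdbms is deliberately left untouched per the warning above.

```shell
# Sketch of looping the asmca calls over diskgroups. ASMCA is
# parameterised only so the loop can be dry-run (ASMCA=echo);
# compatible.rdbms is deliberately NOT advanced here.
ASMCA="${ASMCA:-asmca}"

advance_compat() {
    for dg in "$@"; do
        for attr in compatible.asm compatible.advm; do
            "$ASMCA" -silent -editDiskGroupAttributes -diskGroupName "$dg" \
                -attribute "$attr=23.0.0.0.0"
        done
    done
}

# Real run:  advance_compat data fra
# Dry run:   ASMCA=echo advance_compat data fra
```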
The ASM alert log can be viewed for the update messages.
2026-02-27T16:18:19.683968+00:00
SQL> alter diskgroup data set attribute 'compatible.asm' = '23.0.0.0.0' /* ASMCA */
2026-02-27T16:18:19.709087+00:00
NOTE: Advancing ASM compatibility from 19.0.0.0.0 to 23.0.0.0.0 for grp 1
2026-02-27T16:18:19.739061+00:00
NOTE: Advancing compatible.asm (replicated) on grp 1 disk DATA_0000
NOTE: Advancing compatible.asm (replicated) on grp 1 disk DATA_0001
NOTE: Advancing compatible.asm on grp 1 disk DATA_0000
NOTE: Advancing compatible.asm on grp 1 disk DATA_0001
2026-02-27T16:18:19.741622+00:00
NOTE: set version 3 for asmCompat 23.0.0.0.0 for group 1
2026-02-27T16:18:19.745668+00:00
SUCCESS: Advanced compatible.asm to 23.0.0.0.0 for grp 1
2026-02-27T16:18:19.746091+00:00
NOTE: Instance updated compatible.asm to 23.0.0.0.0 for grp 1 (DATA).
2026-02-27T16:18:19.829708+00:00
NOTE: Advancing AVD compatibility to 23.0.0.0.0 for grp 1
2026-02-27T16:18:19.829918+00:00
SUCCESS: Advanced compatible.advm to 23.0.0.0.0 for grp 1
2026-02-27T16:18:19.842912+00:00
SUCCESS: alter diskgroup data set attribute 'compatible.asm' = '23.0.0.0.0' /* ASMCA */
After this upgrade, additional compatible.patch attributes get enabled on the diskgroup. At the time of this post the ASM Administrator's Guide did not mention these attributes.
select * from v$asm_attribute where name like '%compat%'

NAME                                     VALUE           GROUP_NUMBER ATTRIBUTE_INDEX ATTRIBUTE_INCARNATION READ_ON SYSTEM_     CON_ID
---------------------------------------- --------------- ------------ --------------- --------------------- ------- ------- ----------
compatible.advm                          23.0.0.0.0                 1             103                     1 N       Y                0
compatible.asm                           23.0.0.0.0                 1             100                     1 N       Y                0
compatible.patch.asm.blkpatchredo        ENABLED                    1             248                     1 Y       Y                0
compatible.patch.asm.containerparity     ENABLED                    1             248                     1 Y       Y                0
compatible.patch.asm.crc32cksum          ENABLED                    1             248                     1 Y       Y                0
compatible.patch.asm.doubleparity        ENABLED                    1             248                     1 Y       Y                0
compatible.rdbms                         19.0.0.0.0                 1             102                     1 N       Y                0
compatible.advm                          23.0.0.0.0                 2             103                     1 N       Y                0
compatible.asm                           23.0.0.0.0                 2             100                     1 N       Y                0
compatible.patch.asm.blkpatchredo        ENABLED                    2             248                     1 Y       Y                0
compatible.patch.asm.containerparity     ENABLED                    2             248                     1 Y       Y                0
compatible.patch.asm.crc32cksum          ENABLED                    2             248                     1 Y       Y                0
compatible.patch.asm.doubleparity        ENABLED                    2             248                     1 Y       Y                0
compatible.rdbms                         19.0.0.0.0                 2             102                     1 N       Y                0
This concludes the upgrade of Oracle Restart from 19c to 26ai.

Related Posts
Upgrading Oracle Restart from 19c to 21c on RHEL 7
Upgrading Oracle Single Instance with ASM (Oracle Restart) from 11.2.0.4 to 19c (19.6) on RHEL 7
Upgrading Oracle Restart from 18c (18.6) to 19c (19.3)
Upgrading Oracle Restart from 12.2.0.1 to 18c
Upgrading Oracle Single Instance with ASM (Oracle Restart) from 11.2.0.4 to 12.2.0.1
Upgrading Oracle Single Instance with ASM (Oracle Restart) from 12.1.0.2 to 12.2.0.1
Upgrading Single Instance on ASM from 11.2.0.3 to 11.2.0.4
Upgrading Grid Infrastructure Used for Single Instance from 11.2.0.4 to 12.1.0.2

The same upgrade could be done in silent mode using a response file. Invoke gridSetup.sh with the upgrade and response file options. When prompted, run the root upgrade script. The output is written to a log file.
./gridSetup.sh -upgrade -silent -responseFile /opt/installs/26ai/grids.rsp
Launching Oracle Grid Infrastructure Setup Wizard...

*********************************************
Swap Size: This is a prerequisite condition to test whether sufficient total swap space is available on the system.
Severity: IGNORABLE
Overall status: VERIFICATION_FAILED
Error message: PRVF-7573 : Sufficient swap size is not available on node "ip-172-31-8-174" [Required = 16GB (1.677721
Cause:  The swap size found does not meet the minimum requirement.
Action:  Increase swap size to at least meet the minimum swap space requirement.
-----------------------------------------------
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /opt/company/app/oraInventory/logs/Gr
   ACTION: Identify the list of failed prerequisite checks from the log: /opt/company/app/oraInventory/logs/GridSetupon manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
 /opt/company/app/oracle/product/23.26.1/grid/install/response/grid_2026-02-24_03-26-13PM.rsp

You can find the log of this install session at:
 /opt/company/app/oraInventory/logs/GridSetupActions2026-02-24_03-26-13PM/gridSetupActions2026-02-24_03-26-13PM.log

As a root user, run the following script(s):
        1. /opt/company/app/oracle/product/23.26.1/grid/rootupgrade.sh

Run /opt/company/app/oracle/product/23.26.1/grid/rootupgrade.sh on the following nodes:
[ip-172-31-8-174]

Sunday, February 15, 2026

Upgrading Enterprise Manager Cloud Control from 13.5 to 24.1 - 3

This is the third post on upgrading EMCC from 13.5 to 24.1. Previous posts showed how to complete the pre-req work and upgrade the EM itself. This post shows how to upgrade the agents to 24.1 using the EM console.
To upgrade the central agent go to Setup -> Manage Enterprise Manager -> Upgrade Agents
This opens the agent upgrade console with the Agent Upgrade Tasks tab selected. Under Agents for Upgrade, click the Add button. This opens the upgradeable agents box; select the 13.5 agents to be upgraded. Once selected, the agent details are populated as shown below: the current version, the version it will be upgraded to, and the agent home.

Under Choose Credentials, select "Preferred Privileged Credentials" (selected by default) even if no preferred credentials are available. This is not an issue, as this privilege is only needed to run the root.sh for the agent upgrade, which could be done manually. This is also mentioned when the submit button is clicked. Click OK to continue with the upgrade.

Wait for upgrade job to complete successfully.

Agent upgrade results can also be viewed by clicking the job name available on the "Agent Upgrade Results" page.



The agent will have a new home under the agent base directory. It will have to be patched with the latest RU that is relevant for the OMS + agent combination.

As the last step, the old 13.5 agent home can be removed. This is done as a post-agent-upgrade task. Click on Cleanup Agents and add the old 13.5 agent. It was noticed in multiple upgrades that the "installed version" for the old agent is listed as 24.1 even though the correct 13.5 agent home is listed as the Oracle home. Ignore this possible EM bug and continue with the 13.5 agent removal.

As before the agent removal progress could be monitored under job activity. Once complete view the results in cleanup-agent results page.

This concludes the upgrade of Enterprise Manager Cloud Control from 13.5 to 24.1.

Related Posts
Upgrading Enterprise Manager Cloud Control from 13.5 to 24.1 - 1
Upgrading Enterprise Manager Cloud Control from 13.5 to 24.1 - 2
Upgrading Enterprise Manager Cloud Control from 13.4 to 13.5 - 1
Upgrading Enterprise Manager Cloud Control from 13.4 to 13.5 - 2
Upgrading Enterprise Manager Cloud Control from 13.4 to 13.5 - 3
Installing Enterprise Manager Cloud Control 13c (13.4)
Converting EM Repository DB from Non-CDB to PDB
Adding Targets on EM Cloud Control 13c
Installing Enterprise Manager Cloud Control 13c (13.3)
Installing Grid Control 11gR1 and Deploying Agents
Upgrading Grid Control 11g to 12c - 1