Thursday, September 12, 2024

Restore Standby Database from Standby Backups

This post shows the steps for a full restore (controlfile + data files) of a standby database using backups taken on the standby itself. The same could be achieved using restore from service (MOS note 2283978.1). However, this method is useful when the database is large and high network latencies are involved.

1. As the first step, disable redo apply and transport.
DGMGRL> edit database fsfodr set state='apply-off';
Succeeded.
DGMGRL> edit database fsfopr set state='transport-off';
Succeeded.
DGMGRL>
2. Start the standby database in nomount mode and restore the standby controlfile.
$ rman target /
RMAN> startup nomount

RMAN> restore standby controlfile from '/opt/backup/fsfodr/full_c-1245564449-20230913-02.ctl';

Starting restore at 13-SEP-23
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=2836 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
output file name=+DATA/FSFODR/CONTROLFILE/current.341.1147435431
output file name=+FRA/FSFODR/CONTROLFILE/current.573.1147435431
Finished restore at 13-SEP-23
3. Mount the database.
RMAN> alter database mount;
4. Since the standby controlfile was restored from a backup taken on the standby database, there is no need to catalog the backup file locations; the controlfile is already aware of them. Run the restore and recover commands.
RMAN> run {
2> restore database;
3> recover database;
4> }
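As an aside, cataloging would only be needed if the backup pieces had been taken on the primary or moved to a different location. In that case, something like the following could be run before the restore (a sketch, reusing the backup directory from the controlfile restore above):
RMAN> catalog start with '/opt/backup/fsfodr' noprompt;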
5. Clear the online logfiles on the standby database.
SQL> begin
for log_cur in ( select group# group_no from v$log )
loop
execute immediate 'alter database clear logfile group '||log_cur.group_no;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME     CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- --------- ------------ --------- ----------
         1          1          0  104857600        512          2 YES UNUSED                 2635375 13-SEP-23      2635378 13-SEP-23          0
         2          1          0  104857600        512          2 YES UNUSED                 2635378 13-SEP-23      2636371 13-SEP-23          0
         5          1          0  104857600        512          2 YES UNUSED                 2636825 13-SEP-23   9.2954E+18                    0
         4          1          0  104857600        512          2 YES UNUSED                 2636470 13-SEP-23      2636825 13-SEP-23          0
         3          1          0  104857600        512          2 YES UNUSED                 2636371 13-SEP-23      2636470 13-SEP-23          0
6. Clear the standby logfiles.
begin
for log_cur in ( select group# group_no from v$standby_log )
loop
execute immediate 'alter database clear logfile group '||log_cur.group_no;
end loop;
end;
/


PL/SQL procedure successfully completed.

SQL> select * from v$standby_log;

    GROUP# DBID                                        THREAD#  SEQUENCE#      BYTES  BLOCKSIZE       USED ARC STATUS     FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME LAST_CHANGE# LAST_TIME     CON_ID
---------- ---------------------------------------- ---------- ---------- ---------- ---------- ---------- --- ---------- ------------- --------- ------------ --------- ------------ --------- ----------
         6 UNASSIGNED                                        1          0  104857600        512          0 NO  UNASSIGNED                                                                                0
         7 UNASSIGNED                                        1          0  104857600        512          0 YES UNASSIGNED                                                                                0
         8 UNASSIGNED                                        1          0  104857600        512          0 YES UNASSIGNED                                                                                0
         9 UNASSIGNED                                        1          0  104857600        512          0 YES UNASSIGNED                                                                                0
        10 UNASSIGNED                                        1          0  104857600        512          0 YES UNASSIGNED                                                                                0
        11 UNASSIGNED                                        1          0  104857600        512          0 YES UNASSIGNED                                                                                0

6 rows selected.



7. Enable log transport and redo apply.
DGMGRL>  edit database fsfopr set state='transport-on';
Succeeded.
DGMGRL>  edit database fsfodr set state='apply-on';
Succeeded.
8. Check the Data Guard configuration status and validate the standby database.
DGMGRL> show configuration

Configuration - fsfo_dg

  Protection Mode: MaxAvailability
  Members:
  fsfopr - Primary database
    fsfodr - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 25 seconds ago)

DGMGRL> validate database fsfodr;

  Database Role:     Physical standby database
  Primary Database:  fsfopr

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Managed by Clusterware:
    fsfopr:  YES
    fsfodr:  YES

Useful MOS notes
Creating a Physical Standby database using RMAN restore database from service [ID 2283978.1]

Friday, July 21, 2023

Out of Place (OOP) Patching of Oracle Restart 21c

A previous post showed OOP patching for 19c Oracle Restart. Things are much simpler in 21c, and the same can be expected for 23c once it is released. The -switchGridHome option is supported for Oracle Restart in 21c (Oracle doc here). As such, OOP patching is simply a matter of installing a new GI home in a different location with the -switchGridHome option.
The current GI home is /opt/codegen/app/oracle/product/21.x.0/grid and the release patch level is shown below.
crsctl query has releasepatch
Oracle Clusterware release patch level is [3414221900] and the complete list of patches [35132583 35134934 35134943 35149778 35222143 35226235 ] have been applied on the local node. The release patch string is [21.10.0.0.0].


Run gridSetup.sh with the -switchGridHome option, applying any RU and one-off patches. This step doesn't result in downtime.
./gridSetup.sh -silent -switchGridHome  -applyRU /opt/installs/patches/35427907
Preparing the home to patch...
Applying the patch /opt/installs/patches/35427907...
Successfully applied the patch.
The log can be found at: /opt/codegen/app/oraInventory/logs/GridSetupActions2023-07-21_01-55-36PM/installerPatchActions_2023-07-21_01-55-36PM.log
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the log of this install session at:
 /opt/codegen/app/oraInventory/logs/GridSetupActions2023-07-21_01-55-36PM/gridSetupActions2023-07-21_01-55-36PM.log

As a root user, execute the following script(s):
        1. /opt/codegen/app/oracle/product/21.11.0/grid/root.sh

Execute /opt/codegen/app/oracle/product/21.11.0/grid/root.sh on the following nodes:
[ip-172-31-10-193]
When prompted, run root.sh. This is where the grid home switching happens, and it results in downtime. In the course of running root.sh, the HAS stack is brought down in the old GI home and started from the new GI home.
/opt/codegen/app/oracle/product/21.11.0/grid/root.sh
Check /opt/codegen/app/oracle/product/21.11.0/grid/install/root_ip-172-31-10-193.eu-west-1.compute.internal_2023-07-21_14-06-15-742998108.log for the output of root script
The output in the log file shows the pre-patch and post-patch steps run against the new GI home.
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/codegen/app/oracle/product/21.11.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/codegen/app/oracle/product/21.11.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/codegen/app/oracle/crsdata/ip-172-31-10-193/crsconfig/hapatch_2023-07-21_02-06-16PM.log
2023/07/21 14:06:18 CLSRSC-347: Successfully unlock /opt/codegen/app/oracle/product/21.11.0/grid
2023/07/21 14:06:18 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.
Using configuration parameter file: /opt/codegen/app/oracle/product/21.11.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/codegen/app/oracle/crsdata/ip-172-31-10-193/crsconfig/hapatch_2023-07-21_02-06-18PM.log
2023/07/21 14:07:16 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2023/07/21 14:08:01 CLSRSC-672: Post-patch steps for patching GI home successfully completed.


Looking at the currently running processes shows the GI-related processes running out of the new GI home.
  32070 ?        Ssl    0:02 /opt/codegen/app/oracle/product/21.11.0/grid/bin/ohasd.bin reboot _ORA_BLOCKING_STACK_LOCALE=AMERICAN_AMERICA.AL32UTF8
  32281 ?        Ssl    0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/oraagent.bin
  32308 ?        Ssl    0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/evmd.bin
  32312 ?        Ss     0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
  32368 ?        Ssl    0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/evmlogger.bin -o /opt/codegen/app/oracle/product/21.11.0/grid/log/[HOSTNAME]/evmd/evmlogger.info -l /opt/codegen/app/oracle/product/21.11.0/grid/log/[HOSTNAME]
  32384 ?        Ssl    0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/cssdagent
  32423 ?        Ssl    0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/onmd.bin
  32425 ?        Ssl    0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/ocssd.bin
The release patch is now 21.11.
 crsctl query has releasepatch
Oracle Clusterware release patch level is [1435465441] and the complete list of patches [35428978 35442014 35442022 35442029 35550598 35589155 ] have been applied on the local node. The release patch string is [21.11.0.0.0].

Unlike in 19c, no manual work is needed to update the Oracle inventory. The new GI home is automatically added with CRS="true", and CRS="true" is removed from the old home during the GI home switch process.
<HOME NAME="OraGI21Home1" LOC="/opt/codegen/app/oracle/product/21.x.0/grid" TYPE="O" IDX="1"/>
<HOME NAME="OraGI21Home2" LOC="/opt/codegen/app/oracle/product/21.11.0/grid" TYPE="O" IDX="4" CRS="true"/>
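Once the switch has been verified, the old GI home could be removed at a convenient time using its own deinstall tool (a sketch; confirm beforehand that nothing still runs out of the old home):
/opt/codegen/app/oracle/product/21.x.0/grid/deinstall/deinstall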

Related Posts
Out of Place (OOP) Patching of Oracle Restart

Wednesday, June 14, 2023

Upgrading 11.2.0.4 to 19c Using AutoUpgrade

There is an earlier post which shows how to upgrade an 11.2 Oracle Restart setup to 19c. However, in that post the database was upgraded using DBUA and left as a non-CDB. AutoUpgrade (AU) can also be used to convert a non-CDB into a PDB. This post shows how to do the upgrade and conversion in a single step using AU.
Below is the configuration file used in this case.
global.autoupg_log_dir=/home/oracle/upgr_log

# ----- NonCDB to PDB conversion -----
# To upgrade and convert an existing NonCDB database into a PDB of a target CDB,
# use the target_cdb parameter to specify the destination CDB.
# The target_pdb_name and target_pdb_copy_option parameters can be used
# to determine how each PDB is created on the target CDB.
#
# When neither of these options are used, a full upgrade of the source DB/CDB is performed.
#

upg1.sid=testupg
upg1.source_home=/opt/app/oracle/product/11.2.0/dbhome_1
upg1.target_home=/opt/app/oracle/product/19.x.0/dbhome_1
upg1.run_utlrp=yes
upg1.timezone_upg=yes

### NonCDB to PDB parameters ###
upg1.target_cdb=testcdb
upg1.target_pdb_name=devpdb
upg1.target_pdb_copy_option=file_name_convert=NONE
The 11.2 DB is called testupg and its Oracle home is the source_home. The configuration already has a 19c CDB, whose Oracle home is the target_home. This CDB is the target_cdb and is called testcdb.
The 11.2 DB will be upgraded and plugged into the target CDB as a PDB named devpdb. The target_pdb_copy_option parameter is set to file_name_convert=NONE so that the upgraded PDB's data files are copied under the target CDB's data file structure.
The 11.2 DB has the final patch set update applied on it, and the 19c CDB has the 19.19 RU applied.
As the first step, run AU in analyze mode to identify any issues.
java -jar autoupgrade.jar -config noncdb11g2_cdb.cfg -mode analyze

Some precheck findings may require manual intervention to fix. For others, AU provides a fixups mode.
java -jar autoupgrade.jar -config noncdb11g2_cdb.cfg -mode fixups




When all the fixups are done and prechecks have passed, run AU in deploy mode to begin the upgrade and the plug-in as a PDB.
java -jar autoupgrade.jar -config noncdb11g2_cdb.cfg -mode deploy

The 11.2 DB is now a PDB inside the target CDB. If there was a database service associated with it, then it has to be manually created using srvctl.
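As a rough sketch, such a service could be recreated against the PDB as below (the service name myapp_svc is a placeholder, not part of the original setup):
srvctl add service -db testcdb -service myapp_svc -pdb devpdb
srvctl start service -db testcdb -service myapp_svc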
One thing noticed during this upgrade test concerns timezones. The 19c CDB had timezone file version 32 as its current timezone file. The 19.19 RU ships newer timezone files, but they are simply copied into the Oracle home and not applied; this is mentioned in the RU's readme.html.
Applying this Release Update (RU) does not change the DST version in any of your databases.
The DST patches are only copied into $ORACLE_HOME/oracore/zoneinfo and no DST upgrade is performed. 
You can decide to upgrade the DST version in your database to a newer DST version independently of 
applying the RU at any time.
During the upgrade, as timezone_upg is set to yes, the 11.2 DB's timezone file is upgraded to version 41. When it is plugged into the CDB, the PDB ends up with a higher timezone file version than the CDB. To avoid this, either upgrade the CDB to the latest timezone version before the upgrade, or set timezone_upg to no in the config file and afterwards run the timezone upgrade in the PDB, targeting the timezone version used by the CDB.
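The latter option, bringing the PDB's timezone in line with the CDB, could be sketched with the standard DBMS_DST flow roughly as follows (version 32 is the CDB's version from this test; l_failures is just a local variable name — verify the exact steps against the DBMS_DST documentation before use):
SQL> alter session set container=devpdb;
SQL> alter pluggable database close immediate;
SQL> alter pluggable database open upgrade;
SQL> exec DBMS_DST.BEGIN_UPGRADE(32);
SQL> alter pluggable database close immediate;
SQL> alter pluggable database open;
SQL> declare
  l_failures pls_integer;
begin
  DBMS_DST.UPGRADE_DATABASE(l_failures);
  DBMS_DST.END_UPGRADE(l_failures);
end;
/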

The copies of the 11.2 data files remain in their original location after the upgrade. This database could be opened as a 19c non-CDB from a 19c home. On the other hand, it could be mounted using the 11.2 home and flashed back to 11.2 using the restore point created by AU. In the latter case (flashback to 11.2), the following error may occur when archivelog cleanup is initiated using RMAN:
ORA-19633: control file record 25 is out of sync with recovery catalog
This is due to a mismatch in letter case (upper case vs. lower case as used by 11.2 and 19c) in the archivelog file paths. More on this is in MOS note 1105924.1. To fix it, recreate the controlfile from trace:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

Related Posts
Unplug-Plug-Upgrade from 19c to 21c Using Autoupgrade
Plugging non-CDB as a PDB - Manual vs Autoupgrade
Upgrading Oracle Single Instance with ASM (Oracle Restart) from 11.2.0.4 to 19c (19.6) on RHEL 7