Monday, November 12, 2018

Data Guard With OCI PaaS DB (DBaaS) Primary and Standby on IaaS

Oracle Cloud Infrastructure (OCI) platform as a service (PaaS) DB offering (DBaaS) currently does not provide data guard capability for DBs created with VM shape types. This post shows how to create a data guard configuration with the PaaS DB as the primary and a standby created on IaaS resources (compute instance + block storage).
The post only looks at the technical aspects of such a configuration. No consideration is given to the licensing and support implications such a configuration would have. For example, on PaaS the user tablespaces are by default encrypted (even for SE2) and backups use compression. On the standby site these additional options would need to be licensed. On the support side, in a PaaS setup Oracle, as the provider of the service, is expected to provide a fully managed service. Since data guard is not offered as part of the PaaS service, any support for the data guard setup needs to be acquired through the traditional Oracle DB support channel, not through cloud support channels. These are just a few of the concerns; there could be many more.
Having said that, the diagram below shows how the setup would look. Only the major components are shown or mentioned in the diagram.
Two OCI regions were chosen to act as the primary region and the standby region. In this case Frankfurt was chosen as the primary and London as the standby. The main reason for choosing these two regions is that currently remote peering is allowed between them (the other two regions where RPC is allowed are Ashburn and Phoenix).
The London region contains the IaaS components used to create the standby DB instance's host. These include a compute instance and several block storage volumes attached to it. One of the block volumes is used as the mount point for installing the Oracle binaries, hence it is formatted and mounted on the compute instance. The other two block storage volumes are used as block devices for ASM. The ASM disk groups will have the same names as the ASM disk groups on the PaaS DB, to keep file name conversions simple. The VCN in the London region has the CIDR 10.0.0.0/16; this must be different from the CIDR block of the PaaS DB's VCN, otherwise remote peering will not be possible. In this case the VCN was created as a public network so that there's easy SSH access to the DB servers, but for production it is best practice to keep DB servers in a private network. The setup of the standby node on IaaS is as below.
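A minimal sketch of preparing the attached block volumes is shown below. The device names, mount point, ownership and the use of a udev rule are assumptions; they will differ based on how the volumes were attached and which OS user owns the grid installation.
# format and mount the volume used for the Oracle binaries (device name assumed)
mkfs -t xfs /dev/sdb
mkdir -p /opt/app
mount /dev/sdb /opt/app
echo "/dev/sdb  /opt/app  xfs  defaults,_netdev,nofail  0 2" >> /etc/fstab

# hand the two ASM candidate disks to the Oracle software owner (devices and ownership assumed)
cat > /etc/udev/rules.d/99-oracle-asm.rules <<EOF
KERNEL=="sdc", OWNER="oracle", GROUP="asmadmin", MODE="0660"
KERNEL=="sdd", OWNER="oracle", GROUP="asmadmin", MODE="0660"
EOF
udevadm control --reload-rules
udevadm trigger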
Once the infrastructure for the standby node is in place, install the Oracle binaries. For this setup 18.3 was chosen as the GI and DB version. Install 18.3 GI as Oracle Restart. As mentioned earlier, name the ASM disk groups DATA and RECO to keep file name conversions simple. Install the database software only.
Next create the PaaS DB in the Frankfurt region. For this a VCN was created with CIDR 172.21.0.0/16. The DB created was a virtual machine DB with a CDB and a PDB. Auto backup was also enabled for the DB; with auto backup enabled there is no need to manually set up any backup script, Oracle will carry out the full and incremental backups.
When first created, the 18c version of the DB is 18.1. To keep the DB versions the same on both sites, patch the PaaS DB with the Jul 2018 18c database patch.
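The patch level of the two homes could be compared with opatch once both installations are in place, for example:
$ORACLE_HOME/OPatch/opatch lspatches
Run this against both the PaaS DB home and the standby home and confirm the same Jul 2018 (18.3) patches are listed.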
Primary site DB configuration is shown below.
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 FRAPDB                         READ ONLY  NO
The next step is to establish connectivity between the two VCNs in the two regions. This is accomplished by creating a remote peering connection (RPC) between the two VCNs. To create an RPC, first create a dynamic routing gateway (DRG) in each region and attach it to the VCN used by the DB hosts. Afterwards create an RPC, establish a connection between the two VCNs and wait until the status changes to "PEERED".
With the RPC in place, the next step is to add routing rules so that traffic flows from one region to the other. In the Frankfurt (FRA) region add a routing rule such that any traffic destined for a 10.0.0.0/16 address is routed to the DRG. In the same way, in the London (LON) region add a routing rule so that any traffic for 172.21.0.0/16 is routed to its DRG. The image below shows these two rules in place. In addition to the two DRG-related rules there's an internet gateway rule to allow direct access to the DB servers. If the DB servers were in a private network this rule would not be necessary.
Add security list rules allowing traffic coming from the remote region to DB port 1521. On the LON VCN's security list add an entry for 172.21.0.0/16 and port 1521, and on the FRA VCN's security list add an entry for 10.0.0.0/16 and port 1521.
Apart from these security list rules, the following iptables rule must be added on the standby compute node in the LON region (IaaS) for traffic to flow.
iptables -I INPUT -p tcp -s 172.21.0.0/16 --dport 1521 -j ACCEPT
firewall-cmd --runtime-to-permanent

iptables -L INPUT 2 -n --line-numbers
2    ACCEPT     tcp  --  172.21.0.0/16        0.0.0.0/0            tcp dpt:1521
At this point it is also a good idea to add the hostname of each node to the other node's /etc/hosts file.
Create a TNS entry in the standby Oracle home's tnsnames.ora file and establish a remote connection to the primary DB, for example as shown below. If there are any network connectivity issues, resolve them before moving to the next steps.
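For example, assuming the entry is named FRADBTNS (as used later in the post), connectivity could be checked from the standby node with:
tnsping FRADBTNS
sqlplus sys@FRADBTNS as sysdba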



Creating a data guard configuration is similar to the earlier post on creating a data guard with a CDB. However, there are a few cloud-specific points which must be taken into consideration.
The PaaS DB uses encryption by default for all user tablespaces. Therefore, in order for the standby to use these encrypted tablespaces, copy the keystore files (*wallet*) to the standby. The location of the wallet files on the standby is added to the standby's sqlnet.ora file.
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/app/oracle/product/18.0.0/dbhome_1/network/admin/tde)))
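A sketch of copying the keystore files from the primary host to the standby is shown below. The wallet directory on the primary is left as a placeholder and should be taken from the primary's sqlnet.ora; the target directory is the one configured above.
# on the standby, create the target directory
mkdir -p /opt/app/oracle/product/18.0.0/dbhome_1/network/admin/tde

# from the primary, copy the wallet files (primary wallet directory is a placeholder)
scp <primary_wallet_dir>/ewallet.p12 <primary_wallet_dir>/cwallet.sso oracle@lonvm:/opt/app/oracle/product/18.0.0/dbhome_1/network/admin/tde/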
The PaaS DB's sqlnet.ora file also has several other encryption related entries that affect client/server communication. Copy these entries to the standby's sqlnet.ora file.
SQLNET.ENCRYPTION_SERVER=REQUIRED
SQLNET.CRYPTO_CHECKSUM_SERVER=REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER=(AES256,AES192,AES128)
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER=(SHA1)
SQLNET.ENCRYPTION_CLIENT=REQUIRED
SQLNET.CRYPTO_CHECKSUM_CLIENT=REQUIRED
SQLNET.ENCRYPTION_TYPES_CLIENT=(AES256,AES192,AES128)
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT=(SHA1)
With the encryption related work done, the standby DB creation could begin. Only the steps that differ or require additional consideration are mentioned below.
The PaaS DB has force logging enabled by default. Verify this, and if for some reason it is not enabled, enable force logging.
The PaaS DB does not have the flashback option enabled. If needed (not a must for DG but good to have), enable flashback on the primary.
Add the required number of standby redo log files on the primary DB. A sketch of these three steps is shown below.
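These steps, run on the primary, could look as follows. The standby redo log size is an assumption; it should match the online redo log size, with one more standby log group than there are online log groups per thread.
SQL> select force_logging, flashback_on from v$database;
SQL> alter database force logging;
SQL> alter database flashback on;
SQL> alter database add standby logfile thread 1 size 1073741824;
SQL> alter database add standby logfile thread 1 size 1073741824;
SQL> alter database add standby logfile thread 1 size 1073741824;
SQL> alter database add standby logfile thread 1 size 1073741824;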
The PaaS DB has the following entries related to DB name and domain. The key point here is that db_name is different from db_unique_name.
*.db_domain='frasubnet.fravcn.oraclevcn.com'
*.db_name='fraDB'
*.db_unique_name='fraDB_fra1kk'
Create static listener entries for the primary DB as below.
SID_LIST_LISTENER =
(SID_LIST =
        (SID_DESC =
                (GLOBAL_DBNAME = fraDB_fra1kk.frasubnet.fravcn.oraclevcn.com)
                (SID_NAME = fraDB)
                (ORACLE_HOME = /u01/app/oracle/product/18.2/dbhome_1)
        )
)
Create static listener entries for the standby as below. The standby DB is called lonDB.
SID_LIST_LISTENER =
(SID_LIST =
        (SID_DESC =
                (GLOBAL_DBNAME = lonDB.lonsubnet.lonvcn.oraclevcn.com)
                (SID_NAME = lonDB)
                (ORACLE_HOME = /opt/app/oracle/product/18.0.0/dbhome_1)
        )
)
There's no need to create DGMGRL-related static listener entries any more if Oracle clusterware is used to manage the DB.
Create TNS entries for both standby and primary in each other's tnsnames.ora files. The TNS entry for connecting to the primary is
FRADBTNS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = fravm)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = fraDB_fra1kk.frasubnet.fravcn.oraclevcn.com)
    )
  )
and the TNS entry for connecting to the standby is
LONDBTNS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = lonvm)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = lonDB.lonsubnet.lonvcn.oraclevcn.com)
    )
  )
At the same time add a local listener entry on the standby.
LISTENER_LONDB =
  (ADDRESS = (PROTOCOL = TCP)(HOST = lonvm)(PORT = 1521))
Add and modify the data guard related initialization parameters on the primary DB. Restart the primary DB for the changes to take effect. Whenever there's a reference to the primary DB it's the db_unique_name of the primary that must be used, as it is different from db_name in the PaaS DB.
alter system set log_archive_config='dg_config=(fraDB_fra1kk,lonDB)' scope=both ;
alter system set log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=fraDB_fra1kk' scope=both;
alter system set log_archive_dest_2='service=LONDBTNS LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=lonDB' scope=both;
alter system set log_archive_dest_state_2='enable' scope=both;
alter system set log_archive_dest_state_1='enable' scope=both;
alter system set fal_server='LONDBTNS' scope=both;
alter system set log_archive_max_processes=10 scope=both;
alter system set db_file_name_convert='/londb/','/fraDB_fra1kk/' scope=spfile;
alter system set log_file_name_convert='/londb/','/fraDB_fra1kk/' scope=spfile;
alter system set standby_file_management='AUTO' scope=both;
Once the primary DB is restarted, create a pfile from the spfile. Copy the pfile to the standby home and modify the following entries to reflect the standby DB configuration (a sketch of creating and copying the pfile is shown after the table). Only the values related to the DG configuration are shown in the table. There were several hidden parameters set on the primary DB which were left in the standby pfile as well. The full list of pfile entries for primary and standby is given at the end of the post. One of the cloud-related parameters is encrypt_new_tablespaces. Its default value is CLOUD_ONLY and, as said earlier, all user tablespaces created on the PaaS DB are encrypted by default. The same behaviour must be reflected on the standby side as well, even when the standby becomes a primary. For this reason the parameter is set to ALWAYS on the standby.
Item: audit_file_dest
  On Primary: /u01/app/oracle/admin/fraDB_fra1kk/adump
  On Standby: /opt/app/oracle/admin/lonDB/adump
Item: control_files
  On Primary: '+RECO/FRADB_FRA1KK/CONTROLFILE/current.256.990973301'
  On Standby: '+DATA','+RECO'
Item: db_domain
  On Primary: frasubnet.fravcn.oraclevcn.com
  On Standby: lonsubnet.lonvcn.oraclevcn.com
Item: db_unique_name
  On Primary: fraDB_fra1kk
  On Standby: lonDB
Item: db_file_name_convert
  On Primary: '/londb/','/fraDB_fra1kk/'
  On Standby: '/fraDB_fra1kk/','/lonDB/'
Item: diagnostic_dest
  On Primary: /u01/app/oracle
  On Standby: /opt/app/oracle
Item: dispatchers
  On Primary: (PROTOCOL=TCP) (SERVICE=fraDBXDB)
  On Standby: (PROTOCOL=TCP) (SERVICE=lonDBXDB)
Item: encrypt_new_tablespaces
  On Primary: CLOUD_ONLY
  On Standby: ALWAYS
Item: fal_server
  On Primary: LONDBTNS
  On Standby: FRADBTNS
Item: local_listener
  On Primary: LISTENER_FRADB
  On Standby: LISTENER_LONDB
Item: log_archive_config
  On Primary: dg_config=(fraDB_fra1kk,lonDB)
  On Standby: dg_config=(fraDB_fra1kk,lonDB)
Item: log_archive_dest_1
  On Primary: location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=fraDB_fra1kk
  On Standby: location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=lonDB
Item: log_archive_dest_2
  On Primary: service=LONDBTNS LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=lonDB
  On Standby: service=FRADBTNS LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=fraDB_fra1kk
Item: log_file_name_convert
  On Primary: '/lonDB/','/fraDB_fra1kk/'
  On Standby: '/fraDB_fra1kk/','/lonDB/'
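Creating and copying the pfile could be done along the following lines (file names and paths are illustrative).
-- on the primary
SQL> create pfile='/tmp/initfraDB.ora' from spfile;

# copy it to the standby node and rename it ready for editing
scp /tmp/initfraDB.ora oracle@lonvm:/tmp/initlonDB.ora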




Once the pfile modifications are complete, copy the file to $ORACLE_HOME/dbs/initlonDB.ora on the standby. Start the DB in nomount mode and create an spfile from the pfile. Finally start the standby instance using the spfile in nomount mode. The steps are shown below.
startup nomount; -- uses pfile to start the instance
create spfile from pfile; -- spfile is created as +DATA/spfilelonDB.ora
startup force nomount; -- start the instance using spfile
With the standby instance started in nomount mode, the duplication for the standby could be done in multiple ways. Two techniques are shown here; use only one of them to create the standby DB.

Duplicating using active database option
To duplicate using the active database option, connect to both the target and the auxiliary (standby) instances and allocate disk type RMAN channels. The default channel type for PaaS is tape (sbt_tape) and the duplication will fail if no disk type channels are specified.
rman target / auxiliary sys/pw@londbtns

run {
allocate channel ch1 device type disk;
allocate auxiliary channel ch2 device type disk;
duplicate target database for standby from active database;
}
Duplicating on standby using primary backups
With auto backup set on the PaaS primary DB, the backups are stored in OCI object storage. To simplify the duplication process, create disk backups and copy them to the standby node. By default the PaaS DB has backup encryption enabled, so use the set encryption option (refer 1551420.1) to take the backup.
SET ENCRYPTION ON IDENTIFIED BY "password" ONLY;
run{
allocate channel ch1 device type disk;
backup current controlfile for standby format '/u01/app/oracle/backup/stdbycontro.ctl';
backup database format '/u01/app/oracle/backup/dbbackup%U' plus archivelog format '/u01/app/oracle/backup/archbackup%U' delete all input;
alter system switch logfile;
alter system switch logfile;
alter system switch logfile;
backup archivelog all format '/u01/app/oracle/backup/archbkp%U' delete all input;
}
Once the files are copied to the standby, run the duplication command connecting to the auxiliary instance. Set the decryption option on the standby site to restore from the encrypted backups (316886.1).
rman auxiliary sys/pw

SET DECRYPTION IDENTIFIED BY "password";
run {
allocate auxiliary channel ch1 device type disk;
duplicate database for standby BACKUP LOCATION '/home/oracle/backup' dorecover nofilenamecheck;
}
Once the duplication has finished, start the standby recovery and verify archive logs are being applied.
SQL> select sequence#,thread#,applied from v$archived_log;

 SEQUENCE#    THREAD# APPLIED
---------- ---------- ---------
        23          1 YES
        24          1 YES
        25          1 NO
        26          1 NO

SQL> alter database recover managed standby database disconnect;

Database altered.

SQL> select sequence#,thread#,applied from v$archived_log;

 SEQUENCE#    THREAD# APPLIED
---------- ---------- ---------
        23          1 YES
        24          1 YES
        25          1 YES
        26          1 YES
Add the standby to the Oracle restart configuration.
srvctl add database -db lonDB -oraclehome /opt/app/oracle/product/18.0.0/dbhome_1  -spfile "+DATA/spfilelonDB.ora" -role physical_standby -startoption mount -diskgroup "DATA,RECO"

srvctl config database -db lonDB
Database unique name: lonDB
Database name:
Oracle home: /opt/app/oracle/product/18.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/spfilelonDB.ora
Password file:
Domain:
Start options: mount
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Disk Groups: DATA,RECO
Services:
OSDBA group:
OSOPER group:
Database instance: lonDB
Verify that user tablespaces created on the primary are created as encrypted tablespaces on the standby. On the primary create a tablespace with default options; this tablespace will be encrypted by default.
SQL> create tablespace test datafile size 10m;

SQL>  select con_id,tablespace_name,ENCRYPTED from cdb_tablespaces;

    CON_ID TABLESPACE_NAME                ENC
---------- ------------------------------ ---
         3 SYSTEM                         NO
         3 SYSAUX                         NO
         3 UNDOTBS1                       NO
         3 TEMP                           NO
         3 USERS                          YES
         3 TEST                           YES
Once the redo changes are applied on the standby, verify the tablespace was created as encrypted.
SQL> select con_id,tablespace_name,ENCRYPTED from cdb_tablespaces;

    CON_ID TABLESPACE_NAME                ENC
---------- ------------------------------ ---
         3 SYSTEM                         NO
         3 SYSAUX                         NO
         3 UNDOTBS1                       NO
         3 TEMP                           NO
         3 USERS                          YES
         3 TEST                           YES
Finally create the DG broker configuration. As both DBs are managed by clusterware, DGMGRL-related static listener entries are not needed. Simply create the DG broker configuration as shown below.
DGMGRL> create configuration PaaS_IaaS_DG as primary database is fraDB_fra1kk connect identifier is fraDBTNS;
DGMGRL> add database lonDB as connect identifier is lonDBTNS;
DGMGRL> enable configuration;
Enabled.

DGMGRL> show configuration
Configuration - paas_iaas_dg

  Protection Mode: MaxPerformance
  Members:
  fradb_fra1kk - Primary database
    londb        - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 0 seconds ago)

DGMGRL> validate database lonDB

  Database Role:     Physical standby database
  Primary Database:  fradb_fra1kk

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Flashback Database Status:
    fradb_fra1kk:  Off
    londb       :  Off

  Managed by Clusterware:
    fradb_fra1kk:  YES
    londb       :  YES
This concludes the creation of the standby on IaaS for a PaaS primary DB.



However, there are a few points to consider if there's a role reversal. In this case the PaaS DB would become the standby.
DGMGRL> switchover to lonDB
Performing switchover NOW, please wait...
New primary database "londb" is opening...
Oracle Clusterware is restarting database "fradb_fra1kk" ...
Connected to "fraDB_fra1kk"
Connected to "fraDB_fra1kk"
Switchover succeeded, new primary is "londb"

DGMGRL> show configuration

Configuration - paas_iaas_dg

  Protection Mode: MaxPerformance
  Members:
  londb        - Primary database
    fradb_fra1kk - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 54 seconds ago)
When the PaaS DB becomes a standby there are a few issues related to the automatic backups, which are managed by Oracle.
If the DB is in mount mode the backups will fail and the console will show a message saying the database is in an invalid state.
Further information on this could be viewed using dbcli.
# dbcli describe-job -i ee14045b-d51b-4dbb-826d-2de4ac34b000 -j
{
  "jobId" : "ee14045b-d51b-4dbb-826d-2de4ac34b000",
  "status" : "Failure",
  "message" : "DCS-10089:Database : fraDB_fra1kk is in invalid state: DB is mounted and is not consistently down.Backup cannot be taken...",
  "reports" : [ {
If the DB is open in read only mode then the backups succeed with a warning. However, on the console the backup is listed as failed.
The dbcli output shows that the underlying RMAN run completed with warnings.
# dbcli describe-job -i c8617912-945b-44ca-82db-c001fda55846 -j
{
  "jobId" : "c8617912-945b-44ca-82db-c001fda55846",
  "status" : "Failure",
  "message" : "DCS-10001:Internal error encountered: Unable to get the rman ...output:\nSTATUS\n--\n[COMPLETED WITH WARNINGS]\n\n error:.",
  "reports" : [ {
    "taskId" : "TaskZLockWrapper_1130",
    "taskName" : "task:TaskZLockWrapper_1130",
    "taskResult" : "DCS-10001:Internal error encountered: Unable to ... output:\nSTATUS\n--\n[COMPLETED WITH WARNINGS]\n\n error:.",
The backup job details view also reflects this information.
TIME_TAKEN_DISPLAY   RMAN_END_TIME              I_SIZE_GIG O_SIZE_GIG COMPRESSION_RATIO STATUS                  INPUT_TYPE    OUTPUT_DEVICE_TYP
-------------------- -------------------------- ---------- ---------- ----------------- ----------------------- ------------- -----------------
00:00:53             09-nov-2018 06:31          3.27423096 .743408203         4.4043514 COMPLETED WITH WARNINGS DB INCR       SBT_TAPE
00:00:56             08-nov-2018 06:21          2.62056637 .043457031         60.302471 COMPLETED WITH WARNINGS DB INCR       SBT_TAPE
00:00:47             07-nov-2018 06:21          .839805126 .115478516        7.27239281 COMPLETED               DB INCR       SBT_TAPE
00:00:47             06-nov-2018 06:22          .796207905 .108886719        7.31225914 COMPLETED               DB INCR       SBT_TAPE
The reason for this is the RMAN-06820 warning encountered when the backup is taken. This warning is raised when the standby is unable to connect to the primary and archive the current redo log before the backup. The full backup log on the PaaS DB is shown below.
Starting backup at 2018/11/09 06:31:41
RMAN-06820: warning: failed to archive current log at primary database
cannot connect to remote database
using channel ORA_SBT_TAPE_1
using channel ORA_SBT_TAPE_2
using channel ORA_SBT_TAPE_3
using channel ORA_SBT_TAPE_4
using channel ORA_SBT_TAPE_5
specification does not match any archived log in the repository
backup cancelled because there are no files to backup
Finished backup at 2018/11/09 06:31:42

Starting Control File and SPFILE Autobackup at 2018/11/09 06:31:42
piece handle=c-1042410484-20181109-00 comment=API Version 2.0,MMS Version 12.2.0.2
Finished Control File and SPFILE Autobackup at 2018/11/09 06:31:43

Starting backup at 2018/11/09 06:31:45
using channel ORA_SBT_TAPE_1
using channel ORA_SBT_TAPE_2
using channel ORA_SBT_TAPE_3
using channel ORA_SBT_TAPE_4
using channel ORA_SBT_TAPE_5
channel ORA_SBT_TAPE_1: starting compressed full datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
channel ORA_SBT_TAPE_2: starting compressed full datafile backup set
channel ORA_SBT_TAPE_2: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_SBT_TAPE_2: starting piece 1 at 2018/11/09 06:31:46
including current control file in backup set
channel ORA_SBT_TAPE_1: starting piece 1 at 2018/11/09 06:31:47
channel ORA_SBT_TAPE_2: finished piece 1 at 2018/11/09 06:31:47
piece handle=DBTRegular-L01541744341311Ir3_spf_FRADB_1042410484_66thov6i_1_1_20181109_991722706 tag=DBTREGULAR-L01541744341311IR3 comment=API Version 2.0,MMS Version 12.2.0.2
channel ORA_SBT_TAPE_2: backup set complete, elapsed time: 00:00:01
channel ORA_SBT_TAPE_1: finished piece 1 at 2018/11/09 06:31:48
piece handle=DBTRegular-L01541744341311Ir3_cf_FRADB_1042410484_65thov6i_1_1_20181109_991722706 tag=DBTREGULAR-L01541744341311IR3 comment=API Version 2.0,MMS Version 12.2.0.2
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:01
Finished backup at 2018/11/09 06:31:48

Starting Control File and SPFILE Autobackup at 2018/11/09 06:31:48
piece handle=c-1042410484-20181109-01 comment=API Version 2.0,MMS Version 12.2.0.2
Finished Control File and SPFILE Autobackup at 2018/11/09 06:31:51
As seen from the above output, besides the initial warning there are no errors related to the actual backup. The reason for archiving the primary's current redo before a standby backup is to make the backup consistent. More on this warning could be found in 2314136.1, 2025142.1 and 2098273.1. This issue could be fixed by connecting to the standby with a username and password when using RMAN (a sketch is shown below). On a PaaS DB this would require altering the backup agent login.
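A minimal sketch of such a connection is shown below; the connect string and password are placeholders, and on a PaaS DB the managed backup agent's login would need to be altered rather than invoking RMAN manually like this.
rman target sys/password@FRADBTNS
Per the notes above, connecting as target with a username and password (instead of rman target /) lets RMAN on the standby reach the primary and archive the current log, avoiding RMAN-06820.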

Related Posts
Removing a Standby Database From a Data Guard Configuration
Adding a New Physical Standby to Exiting Data Guard Setup
Creating Data Guard Broker for an Existing 12.2 Data Guard Setup with CDB
Oracle Data Guard on 12.2 CDB with Oracle Restart
Creating Data Guard Broker on 12c
Adding Far Sync Instances to Existing Data Guard Configuration
11gR2 RAC to RAC Data Guard
Data Guard Broker for 11gR2 RAC
11gR2 Standalone Data Guard (with ASM and Role Separation)
Setting Up Data Guard Broker for an Existing Data Guard Configuration with Far Sync
Enable Active Dataguard on 11gR2 RAC Standby



Primary (PaaS DB) pfile
*._datafile_write_errors_crash_instance=false
*._db_writer_coalesce_area_size=16777216
*._disable_interface_checking=TRUE
*._enable_numa_support=FALSE
*._file_size_increase_increment=2143289344
*._fix_control='18960760:on'
*._gc_policy_time=20
*._gc_undo_affinity=TRUE
*.audit_file_dest='/u01/app/oracle/admin/fraDB_fra1kk/adump'
*.audit_sys_operations=TRUE
*.audit_trail='db'
*.compatible='18.0.0.0'
*.control_file_record_keep_time=38
*.control_files='+RECO/FRADB_FRA1KK/CONTROLFILE/current.256.990973301'
*.control_management_pack_access='DIAGNOSTIC+TUNING'
*.cpu_count=0
*.cursor_sharing='EXACT'
*.db_block_checking='FULL'
*.db_block_checksum='FULL'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+RECO'
*.db_domain='frasubnet.fravcn.oraclevcn.com'
*.db_file_name_convert='/londb/','/fraDB_fra1kk/'
*.db_files=1024
*.db_lost_write_protect='TYPICAL'
*.db_name='fraDB'
*.db_recovery_file_dest='+RECO'
*.db_recovery_file_dest_size=34816m
*.db_unique_name='fraDB_fra1kk'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=fraDBXDB)'
*.enable_ddl_logging=FALSE
*.enable_pluggable_database=true
*.encrypt_new_tablespaces='CLOUD_ONLY'
*.fal_server='LONDBTNS'
*.fast_start_mttr_target=300
*.filesystemio_options='setall'
*.global_names=TRUE
*.local_listener='LISTENER_FRADB'
*.log_archive_config='dg_config=(fraDB_fra1kk,lonDB)'
*.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=fraDB_fra1kk'
*.log_archive_dest_2='service=LONDBTNS LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=lonDB'
*.log_archive_dest_state_1='enable'
*.log_archive_dest_state_2='enable'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_max_processes=10
*.log_buffer=33554432
*.log_file_name_convert='/londb/','/fraDB_fra1kk/'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=1000
*.os_authent_prefix='ops$'
*.parallel_execution_message_size=16384
*.parallel_threads_per_cpu=2
*.pga_aggregate_limit=30720m
*.pga_aggregate_target=15360m
*.processes=800
*.remote_login_passwordfile='EXCLUSIVE'
*.session_cached_cursors=100
*.sga_target=30720m
*.sql92_security=TRUE
*.standby_file_management='AUTO'
*.undo_retention=900
*.undo_tablespace='UNDOTBS1'
*.use_large_pages='false'
Standby (DB on IaaS) pfile
*._datafile_write_errors_crash_instance=false
*._db_writer_coalesce_area_size=16777216
*._disable_interface_checking=TRUE
*._enable_numa_support=FALSE
*._file_size_increase_increment=2143289344
*._fix_control='18960760:on'
*._gc_policy_time=20
*._gc_undo_affinity=TRUE
*.audit_file_dest='/opt/app/oracle/admin/lonDB/adump'
*.audit_sys_operations=TRUE
*.audit_trail='db'
*.compatible='18.0.0.0'
*.control_file_record_keep_time=38
*.control_files='+DATA','+RECO'
*.control_management_pack_access='DIAGNOSTIC+TUNING'
*.cpu_count=0
*.cursor_sharing='EXACT'
*.db_block_checking='FULL'
*.db_block_checksum='FULL'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+RECO'
*.db_domain='lonsubnet.lonvcn.oraclevcn.com'
*.db_file_name_convert='/fraDB_fra1kk/','/lonDB/'
*.db_files=1024
*.db_lost_write_protect='TYPICAL'
*.db_name='fraDB'
*.db_recovery_file_dest='+RECO'
*.db_recovery_file_dest_size=34816m
*.db_unique_name='lonDB'
*.diagnostic_dest='/opt/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=lonDBXDB)'
*.enable_ddl_logging=FALSE
*.enable_pluggable_database=true
*.encrypt_new_tablespaces='ALWAYS'
*.fal_server='FRADBTNS'
*.fast_start_mttr_target=300
*.filesystemio_options='setall'
*.global_names=TRUE
*.local_listener='LISTENER_LONDB'
*.log_archive_config='dg_config=(fraDB_fra1kk,lonDB)'
*.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=lonDB'
*.log_archive_dest_2='service=FRADBTNS LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=fraDB_fra1kk'
*.log_archive_dest_state_1='enable'
*.log_archive_dest_state_2='enable'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_max_processes=10
*.log_buffer=33554432
*.log_file_name_convert='/fraDB_fra1kk/','/lonDB/'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=1000
*.os_authent_prefix='ops$'
*.parallel_execution_message_size=16384
*.parallel_threads_per_cpu=2
*.pga_aggregate_limit=30720m
*.pga_aggregate_target=15360m
*.processes=800
*.remote_login_passwordfile='EXCLUSIVE'
*.session_cached_cursors=100
*.sga_target=30720m
*.sql92_security=TRUE
*.standby_file_management='AUTO'
*.undo_retention=900
*.undo_tablespace='UNDOTBS1'
*.use_large_pages='false'

Thursday, November 8, 2018

DBCA Templates and PDBs - 12.1, 12.2 and 18c

This post looks at the option of creating a database using a template, where the template was created from a CDB containing a PDB. During the testing it was observed that 12.1, 12.2 and 18c behave differently.
The following steps were done on databases of all three versions. Two additional tablespaces were created, one in the root container (called ROOTBS) and one in the PDB (called TEST).
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDBONE                         READ WRITE NO

SQL> select con_id,name from v$tablespace order by 1;

    CON_ID NAME
---------- -------
         1 USERS
         1 ROOTBS
         1 SYSAUX
         1 SYSTEM
         1 UNDOTBS1
         1 TEMP
         2 SYSAUX
         2 SYSTEM
         2 TEMP
         3 SYSAUX
         3 TEMP
         3 TEST
         3 SYSTEM
The basic template creation steps are shown below (for 12.1 only).
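A silent-mode equivalent of the template creation is sketched below. The database name and template name are examples, and the exact set of flags may vary slightly between versions.
dbca -silent -createCloneTemplate \
  -sourceDB cdb121 \
  -sysDBAUserName sys -sysDBAPassword password \
  -templateName cdb_with_pdb.dbc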



Creating CDB Using Template in 12.1
Select the template created earlier when creating the database with DBCA. The show details button gives a detailed list of the tablespaces included in the template. In this case it shows the two tablespaces created earlier.
The summary page also lists the tablespaces.
During the DB creation process an error is shown, but DBCA is able to run to completion.
At the end of the CDB creation, the newly created CDB will have a PDB with the same name as the PDB in the source database the template was created from.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDBONE                         READ WRITE NO
The new CDB will also have both the root container level tablespace and the PDB level tablespace.

SQL> select con_id,name from v$tablespace order by 1;

    CON_ID NAME
---------- ------------------------------
         1 ROOTBS
         1 USERS
         1 SYSAUX
         1 SYSTEM
         1 UNDOTBS1
         1 TEMP
         2 SYSAUX
         2 SYSTEM
         2 TEMP
         3 SYSTEM
         3 SYSAUX
         3 TEMP
         3 TEST

Creating CDB Using Template in 12.2
Similar to 12.1, a template was created on a 12.2 DB, with additional tablespaces created in the root container and in the PDB.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 DEVPDB                         READ WRITE NO

SQL>  select con_id,name from v$tablespace order by 1;

    CON_ID NAME
---------- ------------------------------
         1 USERS
         1 SYSAUX
         1 TEMP
         1 SYSTEM
         1 UNDOTBS1
         1 ROOTBS
         2 TEMP
         2 UNDOTBS1
         2 SYSAUX
         2 SYSTEM
         3 SYSAUX
         3 UNDOTBS1
         3 TEMP
         3 USERS
         3 ASANGA
         3 SYSTEM
The DB creation was done using the template.
The summary page also lists the additional tablespaces.
However, due to an existing issue (tracked under bug 26921308) the DB creation fails.



Creating CDB Using Template in 18c
The 18c database had the following additional tablespaces in the root container and the PDB.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 DOCKLAND                       READ WRITE NO

SQL> select con_id,name from v$tablespace order by 1;

    CON_ID NAME
---------- ------------------------------
         1 USERS
         1 SYSAUX
         1 TEMP
         1 SYSTEM
         1 UNDOTBS1
         1 ROOTBS
         2 TEMP
         2 UNDOTBS1
         2 SYSAUX
         2 SYSTEM
         3 SYSAUX
         3 UNDOTBS1
         3 TEMP
         3 USERS
         3 ASANGA
         3 SYSTEM
The database was created using the template, which showed the tablespaces for the PDB as well.
However, the newly created CDB didn't have any new PDBs. It did have the root container tablespace that was listed in the template.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO

SQL> select con_id,name from v$tablespace order by 1;

    CON_ID NAME
---------- ------------------------------
         1 SYSTEM
         1 UNDOTBS1
         1 ROOTBS
         1 TEMP
         1 USERS
         1 SYSAUX
         2 TEMP
         2 SYSAUX
         2 UNDOTBS1
         2 SYSTEM
This shows that DBCA templates work for the CDB itself, but for PDBs other methods such as cloning or transporting must be used, for example as sketched below.
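For instance, a PDB could be brought into the new CDB by cloning over a database link. The link name, credentials (a common user with create pluggable database privilege) and TNS alias below are illustrative, and depending on the version the source PDB may need to be opened read only first.
SQL> create database link src_cdb connect to c##clone_user identified by password using 'SRCCDBTNS';
SQL> create pluggable database pdbone from pdbone@src_cdb;
SQL> alter pluggable database pdbone open;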

Related Metalink Notes
Can PDB Templates be Created in DBCA? [ID 2128673.1]
12.2 Dbca: Custom Template Doesn't Work As Expected With Cdb Option [ID 2283829.1]
ORA-65101 Wrong option for CDB parameter in a Template which created by DBCA with silent mode [ID 2270420.1]
12.2 DBCA TEMPLATE NOT SAVE SETTING FOR "INCLUDE IN PDBS" OF "DATABASE OPTIONS" [ID 2274760.1]

Thursday, November 1, 2018

Verifying Redo Transport Compression in 11.2, 12.1 and 12.2 Data Guard Configurations

Redo shipped to a standby could be compressed for transport in two ways. One is external to the database and uses SSH tunnelling. The other method is enabling compression on the remote log archive destination; this option requires the Advanced Compression license. Once redo transport compression (RTC) is enabled, it could be verified by enabling log archive tracing. However, the level of tracing needed and the information produced differ between DB versions 11.2, 12.1 and 12.2. This post explores these variations.
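The SSH tunnelling method is not covered in detail here, but in outline it relies on SSH's built-in compression over a local port forward, with the remote log archive destination pointing at the forwarded port. Hosts and ports below are illustrative.
ssh -C -N -f -L 1522:standbyhost:1521 oracle@standbyhost
# the standby TNS entry used by log_archive_dest_n would then point to localhost:1522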

Verifying RTC in 11.2.0.4
The DG configuration for 11.2 is as follows
DGMGRL> show configuration

Configuration - db_dg

  Protection Mode: MaxAvailability
  Databases:
    prod  - Primary database
    stdby - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
Enable RTC on the standby database. This updates the log archive destination setting on the primary by adding the compression=enable option.
DGMGRL> edit database stdby set property RedoCompression='enable';
Property "redocompression" updated
In the alert log of the primary the following could be seen.
Tue Aug 14 14:29:41 2018
ALTER SYSTEM SET log_archive_dest_3='service="stdbytns"','LGWR SYNC AFFIRM delay=0 optional compression=enable max_failure=10 max_connections=5 reopen=180 db_unique_name="stdby" net_timeout=30','valid_for=(all_logfiles,primary_role)' SCOPE=BOTH;
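The compression attribute of the destination could also be checked on the primary; the COMPRESSION column of v$archive_dest should show ENABLE for the destination.
SQL> select dest_id, compression from v$archive_dest where dest_id = 3;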
Enable log archive tracing level 1 for the primary database.
DGMGRL>  edit database prod set property LogArchiveTrace='1';
Property "logarchivetrace" updated
Enable log archive tracing level 16 for the standby.
DGMGRL> edit database stdby set property LogArchiveTrace='16';
Property "logarchivetrace" updated
With these settings in place, the *nss*.trc files on the primary will contain entries similar to the following, showing that RTC is enabled.
RTC: Compression ratio for this I/O is 81%
RTC: Compression ratio for this I/O is 81%
RTC: Compression ratio for this I/O is 81%
RTC: Compression ratio for this I/O is 80%
On the standby the decompression information is shown in the *rfs*.trc files as follows
RTC: decompressed 94 bytes to 512 bytes
RTC: decompressed 93 bytes to 512 bytes
RTC: decompressed 99 bytes to 512 bytes
RTC: decompressed 88 bytes to 512 bytes
RTC: decompressed 5221 bytes to 24576 bytes
If the tracing on the primary is increased to 16 (which the Oracle documentation describes as "Track detailed ARCHIVELOG destination activity") only limited RTC information is given out.
edit database prod set property LogArchiveTrace='16';
Both *arc*.trc and *lgwr*.trc files will have entries similar to following.
cat prod_arc7_3372.trc | grep RTC
RTC enabled.
cat prod_lgwr_3297.trc | grep RTC
RTC enabled.
RTC enabled.
Verifying RTC in 12.1.0.2
The 12.1.0.2 DG configuration is as follows
DGMGRL> show configuration;

Configuration - ent12c1_dgb

  Protection Mode: MaxAvailability
  Members:
  ent12c1  - Primary database
    fs12c1   - Far sync instance
      ent12c1s - Physical standby database

  Members Not Receiving Redo:
  fs12c1s  - Far sync instance

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
On 12.1 setting log archive trace to 1 has no effect on tracing RTC. Setting the value to 16 on the primary gives descriptive RTC information which indicates whether RTC is enabled or not.
DGMGRL>  edit database ent12c1 set property logarchivetrace='16';
Property "logarchivetrace" updated
The RTC related information appears in the *arc*.trc and *tt*.trc files
cat ent12c1_arc3_15498.trc | grep RTC
RTC enabled.
RTC: Allocating 11534336 buffer bytes for compression.
RTC: header-batch write block 1, count 1 - RTC disabled
RTC: Enable compression for subsequent batches
RTC: actual compressed bytes# 1295

cat ent12c1_tt01_15532.trc | grep RTC
RTC enabled.
Re-enabling RTC following backoff interval 1.
RTC: actual compressed bytes# 10270
RTC: actual compressed bytes# 536
RTC: actual compressed bytes# 1697
RTC: actual compressed bytes# 508
Redo compression could be enabled on far sync instances as well. Setting the trace level to 16 on the far sync instance will also generate RTC information in the far sync trace files.
DGMGRL> edit far_sync fs12c1 set property RedoCompression='enable';
Property "redocompression" updated

DGMGRL> edit far_sync fs12c1 set property LogArchiveTrace='16';
Property "logarchivetrace" updated

cat fs12c1_tt00_2515.trc | grep RTC
RTC: actual compressed bytes# 97
RTC: actual compressed bytes# 79
To get decompression related information from the standby, set the log archive trace value to 1024 (1024: tracks the RFS physical client).
DGMGRL> edit database ent12c1s set property logarchivetrace='1024';
Property "logarchivetrace" updated
Decompression related information is shown in *rfs*.trc files.
cat ent12c1s_rfs_8859.trc | grep RTC
RTC: decompressed 79 bytes to 512 bytes
RTC: decompressed 81 bytes to 512 bytes
RTC: decompressed 97 bytes to 512 bytes
RTC: decompressed 79 bytes to 512 bytes


Verifying RTC in 12.2.0.1 (and 18.3.0.0)
The 12.2 DG configuration is as follows
DGMGRL> show configuration

Configuration - dg12c2

  Protection Mode: MaxAvailability
  Members:
  prodcdb - Primary database
    stbycdb - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 35 seconds ago)
On 12.2 setting log archive trace to 16 on the primary generates the RTC information.
DGMGRL> edit database prodcdb set property LogArchiveTrace='16';
Property "logarchivetrace" updated
The RTC information is available in the *lgwr*.trc and *nss*.trc files.
cat prodcdb_lgwr_2616.trc | grep RTC
RTC enabled.
RTC enabled.
RTC enabled.

cat prodcdb_nss2_2708.trc | grep RTC
RTC: Enable compression following header-only I/O
RTC: Enable compression following header-only I/O
RTC: Enable compression following header-only I/O
No other values, such as 512 (512: tracks LGWR redo shipping network activity), or combinations of values seem to give more descriptive information such as compression ratio or compressed bytes, as was the case with previous versions. However, decompression information on the standby could be traced with 1024.
DGMGRL> edit database stbycdb set property LogArchiveTrace='1024';
Property "logarchivetrace" updated
Decompression information is available in the *rfs*.trc file.
cat stbycdb_rfs_2243.trc
krsr_rfs_wda: RTC: decompressed 82 bytes to 512 bytes
krsr_rfs_wda: RTC: decompressed 81 bytes to 512 bytes
krsr_rfs_wda: RTC: decompressed 675 bytes to 1536 bytes
krsr_rfs_wda: RTC: decompressed 1167 bytes to 3072 bytes
krsr_rfs_wda: RTC: decompressed 76 bytes to 512 bytes
Useful Metalink Notes
How to confirm if Redo Transport Compression is used In Oracle Data Guard? [ID 1927057.1]
Redo Transport Compression in a Data Guard Environment [ID 729551.1]
LOG_ARCHIVE_TRACE=16 DOES NOT SHOW REDO COMPRESSION DETAIL IN LOG TRANSPORT in 11gr2 [ID 2193605.1]
How to find out the compression rate of Redo Transport Compression ? [ID 1490751.1]