Tuesday, December 8, 2015

Direct NFS Setup

This post shows the steps for setting up Direct NFS for 11.2 and 12.1 (the steps are identical). For 11.1 please refer here.
Direct NFS requires a working NFS setup to already exist. Create an NFS setup and mount the NFS on the local server with the Oracle recommended NFS mount options (refer 359515.1). For this test case (tested with 11.2 and 12.1, both SE, SE2 and EE) the NFS mount is as follows
# mount
...
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
192.168.0.104:/opt/backup on /usr/local/ochm/nfsdir type nfs (rw,hard,rsize=32768,wsize=32768,nfsvers=3,nointr,timeo=600,tcp,nolock,actimeo=0,addr=192.168.0.104)
Add an entry to fstab as well
# cat /etc/fstab
...
192.168.0.104:/opt/backup /usr/local/ochm/nfsdir        nfs     rw,hard,rsize=32768,wsize=32768,nfsvers=3,nointr,timeo=600,tcp,nolock,actimeo=0,addr=192.168.0.104 0 0
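Whether a given mount actually carries the expected options can be sanity-checked with a small helper. A minimal sketch; the option list below is just the set used in this test case, so verify it against 359515.1 for your platform and version:

```shell
#!/bin/bash
# Check that an NFS mount option string contains a set of expected options.
# The list mirrors the options used in this test case (assumption: confirm
# the authoritative list in MOS note 359515.1 for your platform).
has_required_opts() {
  local opts="$1" o
  for o in rw hard rsize=32768 wsize=32768 nfsvers=3 tcp actimeo=0; do
    case ",$opts," in
      *",$o,"*) ;;
      *) echo "missing option: $o"; return 1 ;;
    esac
  done
  echo "all expected options present"
}

# Example: read the option string for the mount point from /proc/mounts
# has_required_opts "$(awk '$2=="/usr/local/ochm/nfsdir" {print $4}' /proc/mounts)"
```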
Test whether the oracle user has write permission on the NFS mount point by creating dummy files. When the NFS is working without any issues, configure Oracle to use Direct NFS.
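The write-permission check can be made repeatable with a small probe script. A sketch; the mount point path is the one from this setup, so adjust as needed:

```shell
#!/bin/bash
# Probe write access by creating and removing a dummy file on the mount point.
check_write_access() {
  local dir="$1"
  local probe="$dir/dnfs_write_probe.$$"
  if touch "$probe" 2>/dev/null; then
    rm -f "$probe"
    echo "write OK on $dir"
  else
    echo "no write access on $dir" >&2
    return 1
  fi
}

# Run as the oracle user against the NFS mount point:
# check_write_access /usr/local/ochm/nfsdir
```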
Enable Direct NFS by running the dnfs_on target of the RDBMS makefile.
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on
rm -f /opt/app/oracle/product/12.1.0/std2/rdbms/lib/odm/libnfsodm12.so; \
    cp /opt/app/oracle/product/12.1.0/std2/lib/libnfsodm12.so /opt/app/oracle/product/12.1.0/std2/rdbms/lib/odm/libnfsodm12.so
To disable Direct NFS use
make -f ins_rdbms.mk dnfs_off
rm -f  /opt/app/oracle/product/12.1.0/std2/rdbms/lib/odm/libnfsodm12.so
Verify that oradism is owned by root and has the setuid bit set.
# ls -l $ORACLE_HOME/bin/oradism
-rwsr-x--- 1 root oinstall 71758 Sep 17  2011 /opt/app/oracle/product/12.1.0/std2/bin/oradism
If oradism has the wrong permissions or ownership, correct it as follows
# chown root:oinstall $ORACLE_HOME/bin/oradism
# chmod 4755 $ORACLE_HOME/bin/oradism
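The ownership and setuid check can be scripted as well. A minimal sketch using GNU stat (Linux):

```shell
#!/bin/bash
# Report whether a file is owned by root and has the setuid bit set
# (as required for $ORACLE_HOME/bin/oradism).
check_setuid_root() {
  local f="$1" owner mode
  owner=$(stat -c '%U' "$f") || return 1
  mode=$(stat -c '%A' "$f")
  # In '-rwsr-x---' the 4th character is 's' when the setuid bit is set.
  if [ "$owner" = "root" ] && [ "${mode:3:1}" = "s" ]; then
    echo "OK: $f"
  else
    echo "needs fixing: $f (owner=$owner mode=$mode)"
    return 1
  fi
}

# usage: check_setuid_root "$ORACLE_HOME/bin/oradism"
```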
Direct NFS searches for mount entries in the following files, in the given order
1. $ORACLE_HOME/dbs/oranfstab
2. /etc/oranfstab
3. /etc/mtab
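The search order above can be expressed as a small lookup. A sketch, assuming ORACLE_HOME is set in the environment:

```shell
#!/bin/bash
# Return the first file Direct NFS would consult for mount entries,
# following the documented search order.
find_dnfs_mount_source() {
  local f
  for f in "$ORACLE_HOME/dbs/oranfstab" /etc/oranfstab /etc/mtab; do
    if [ -f "$f" ]; then
      echo "$f"
      return 0
    fi
  done
  echo "no mount source found" >&2
  return 1
}

# usage: find_dnfs_mount_source
```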
In this case an oranfstab file is created as follows
cat oranfstab
server: hpc1nfsmount
local: 192.168.0.66
path:   192.168.0.104
export: /opt/backup mount: /usr/local/ochm/nfsdir
local: 192.168.0.66 is the IP of the local server where the database is running. path: 192.168.0.104 is the IP of the server that provides the NFS storage (i.e. the NFS server). export: /opt/backup is the export location on the NFS server. mount: /usr/local/ochm/nfsdir is the mount point on the local server.

Once dNFS is enabled, start the database. If dNFS is in use the alert log will contain the following line
Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0




However, at this stage there won't be any dNFS related information available in the views
SQL> select * from v$dnfs_servers;

no rows selected

SQL> select * from v$dnfs_files;

no rows selected
Create a tablespace using the nfs location.
SQL> create tablespace nfstbs datafile '/usr/local/ochm/nfsdir/nfs.dbf'  size 10m;
While the tablespace is being created the alert log will show dNFS being used
Fri Oct 09 11:16:01 2015
create tablespace nfstbs datafile '/usr/local/ochm/nfsdir/nfs.dbf'  size 10m
Fri Oct 09 11:16:02 2015
Direct NFS: channel id [0] path [192.168.0.104] to filer [192.168.0.104] via local [] is UP
Completed: create tablespace nfstbs datafile '/usr/local/ochm/nfsdir/nfs.dbf'  size 10m
Once the tablespace is created the dNFS views will show the relevant information
SQL> select * from v$dnfs_servers;

        ID SVRNAME              DIRNAME                           MNTPORT    NFSPORT      WTMAX      RTMAX
---------- -------------------- ------------------------------ ---------- ---------- ---------- ----------
         1 192.168.0.104        /opt/backup                         56904       2049    1048576    1048576
   
SQL> select * from v$dnfs_files;

FILENAME                         FILESIZE       PNUM     SVR_ID
------------------------------ ---------- ---------- ----------
/usr/local/ochm/nfsdir/nfs.dbf   10493952         10          1
The same servers and setup were used to test Direct NFS on 11.1 SE as well. The dNFS ODM library version is lower than in 11.2 and 12.1
Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 2.0
The NFS version was v3
# nfsstat | grep nfs
Server nfs v3:
Client nfs v3:
However, creating tablespaces did not succeed in 11.1, even after mounting the NFS with a lower NFS version
192.168.0.104:/opt/backup on /usr/local/ochm/nfsdir type nfs (rw,hard,rsize=32768,wsize=32768,nfsvers=2,nointr,timeo=600,tcp,nolock,actimeo=0,addr=192.168.0.104)
Related Post
RMAN Backups on NFS

Tuesday, December 1, 2015

Parameterizing Backup Tags

At times it may be necessary to customize the backup tag value. This can be achieved in two ways.
The first method is used when backups are run using a separate RMAN backup script file, which is used by the session invoking RMAN. In this case the customized backup tags can be passed with the "USING" clause. Below is a simple RMAN backup script which takes a full database backup plus archive logs
cat backup.rmn
run {
backup database tag='&1' plus archivelog tag='&2';
delete noprompt obsolete;
}
Tags have been parameterized with the use of '&n' substitution variables. Values for these are passed via the shell script that invokes RMAN
cat backup.sh
export ORACLE_SID=racse11g2
...
today_date=$(date +%F)
arctag=arch_bkp_$(date +%Y_%m_%d)
bkptag=full_bkp_$(date +%Y_%m_%d)
logfile=$ORACLE_SID"_"$today_date".log"
rman target / @/home/oracle/cronjobs/backup.rmn log /home/oracle/cronjobs/logs/$logfile using $bkptag $arctag
The result of this is customized tags for each day's backup.
List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
79      B  A  A DISK        06-OCT-15       1       1       NO         ARCH_BKP_2015_10_06
80      B  F  A DISK        06-OCT-15       1       1       NO         FULL_BKP_2015_10_06
81      B  A  A DISK        06-OCT-15       1       1       NO         ARCH_BKP_2015_10_06
...
179     B  A  A DISK        07-OCT-15       1       1       NO         ARCH_BKP_2015_10_07
180     B  F  A DISK        07-OCT-15       1       1       NO         FULL_BKP_2015_10_07
181     B  A  A DISK        07-OCT-15       1       1       NO         ARCH_BKP_2015_10_07
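The date-based tag construction in backup.sh can be factored into a small helper. A sketch; the %Y_%m_%d format is used because RMAN tags should stay short and alphanumeric/underscore:

```shell
#!/bin/bash
# Build a date-stamped RMAN backup tag, e.g. full_bkp_2015_10_06.
build_tag() {
  echo "${1}_$(date +%Y_%m_%d)"
}

bkptag=$(build_tag full_bkp)
arctag=$(build_tag arch_bkp)
echo "$bkptag $arctag"
```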



The second method is used when RMAN is invoked within a shell script. In this case the customized tag is embedded in a string
cat backupdisk.sh
export ORACLE_SID=racse11g1
...
logfile="disk_"$ORACLE_SID"_"$today_date".log"
tag=$(date +%Y_%m_%d)

rman target / log /home/oracle/cronjobs/logs/$logfile <<EOF
run
{
backup  database tag = 'full_disk_$tag' format '/backup/full_bkp_%d_%T_%U'  plus archivelog tag = 'full_arc_disk_$tag' format '/backup/full_arc_%d_%T_%U';
delete noprompt obsolete;
}
exit;
EOF
This will create customized tags as follows
List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
83      B  A  A DISK        06-OCT-15       1       1       NO         FULL_ARC_DISK_2015_10_06
84      B  F  A DISK        06-OCT-15       1       1       NO         FULL_DISK_2015_10_06
85      B  A  A DISK        06-OCT-15       1       1       NO         FULL_ARC_DISK_2015_10_06
...
183     B  A  A DISK        07-OCT-15       1       1       NO         FULL_ARC_DISK_2015_10_07
184     B  F  A DISK        07-OCT-15       1       1       NO         FULL_DISK_2015_10_07
185     B  A  A DISK        07-OCT-15       1       1       NO         FULL_ARC_DISK_2015_10_07

Sunday, November 22, 2015

Relink Fails due to Requirement Checks Failure

Oracle recommends relinking Oracle binaries after an OS upgrade. There are a few earlier posts on this topic. In this case the server in question was an Amazon EC2 instance and didn't have any swap configured.
$ cat /proc/meminfo | grep "Swap"
SwapCached:          0 kB
SwapTotal:           0 kB
SwapFree:            0 kB
After the OS upgrade, relinking of Oracle binaries failed due to insufficient swap space.
$ ./relink all
writing relink log to: /opt/app/oracle/product/testbox/11.2.0/install/relink.log

$ more ../install/relink.log
Starting Oracle Universal Installer...

Checking swap space: 0 MB available, 500 MB required.    Failed <<<<

Some requirement checks failed. You must fulfill these requirements before

continuing with the installation,

Exiting Oracle Universal Installer, log for this session can be found at /opt/app/oracle/oraInventory/logs/installActions2015-09-25_12-11-54PM.log
The only parameters that can be passed to relink are all and as_installed. However, looking inside the relink script it can be seen that it uses runInstaller.
#-----------------------------------
# location of runInstaller executable
RUNINSTALLER=$ORACLE_HOME/oui/bin/runInstaller

#--------------------------
The runInstaller command gets a set of arguments.
#-----------------------------------
# full argument list for runInstaller
#
ARGS="-relink -waitForCompletion -maketargetsxml $MAKEORDER $LOGDIR_ARG ORACLE_HOME=$ORACLE_HOME"

#--------------------------
Adding -ignoreSysPrereqs as the first argument resolved the issue.
ARGS="-ignoreSysPrereqs -relink -waitForCompletion -maketargetsxml $MAKEORDER $LOGDIR_ARG ORACLE_HOME=$ORACLE_HOME"
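The edit can be scripted so it is easy to apply and revert. A sketch using GNU sed that keeps a .orig backup of the script:

```shell
#!/bin/bash
# Prepend -ignoreSysPrereqs to the runInstaller argument list in the
# relink script, keeping a backup copy for reverting afterwards.
patch_relink() {
  local script="$1"
  cp "$script" "$script.orig"
  sed -i 's/^ARGS="-relink/ARGS="-ignoreSysPrereqs -relink/' "$script"
}

# usage:  patch_relink "$ORACLE_HOME/bin/relink"
# revert: mv "$ORACLE_HOME/bin/relink.orig" "$ORACLE_HOME/bin/relink"
```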
Relinking completes successfully afterwards, and relink.log will end with the following for Oracle home relinks
test ! -f /opt/app/oracle/product/testbox/11.2.0/bin/oracle ||\
           mv -f /opt/app/oracle/product/testbox/11.2.0/bin/oracle /opt/app/oracle/product/testbox/11.2.0/bin/oracleO
mv /opt/app/oracle/product/testbox/11.2.0/rdbms/lib/oracle /opt/app/oracle/product/testbox/11.2.0/bin/oracle
chmod 6751 /opt/app/oracle/product/testbox/11.2.0/bin/oracle
and with the following for GI home relinking
test ! -f /grid/oracle/product/11.2.0/bin/oracle ||\
           mv -f /grid/oracle/product/11.2.0/bin/oracle /grid/oracle/product/11.2.0/bin/oracleO
mv /grid/oracle/product/11.2.0/rdbms/lib/oracle /grid/oracle/product/11.2.0/bin/oracle
chmod 6751 /grid/oracle/product/11.2.0/bin/oracle
Once the relinking is complete, remove the -ignoreSysPrereqs argument from the relink script.
If the server already has swap configured and you want to recreate the above scenario for testing, use
swapoff -a
swapon -a
to disable and enable swapping.
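On an instance with no swap at all (as here), a swap file can be added so the prerequisite check passes. A hedged sketch: it must be run as root, and setting DRY_RUN=1 only prints the commands instead of running them:

```shell
#!/bin/bash
# Create and enable a swap file big enough for the relink check
# (500 MB required by the check; 1024 MB used here). Run as root,
# or set DRY_RUN=1 to preview the commands.
make_swap_file() {
  local file="${1:-/swapfile}" size_mb="${2:-1024}" c
  local cmds=(
    "dd if=/dev/zero of=$file bs=1M count=$size_mb"
    "chmod 600 $file"
    "mkswap $file"
    "swapon $file"
  )
  for c in "${cmds[@]}"; do
    if [ "${DRY_RUN:-0}" = "1" ]; then echo "$c"; else eval "$c"; fi
  done
}

DRY_RUN=1 make_swap_file /swapfile 1024
```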
Also, if the environment uses role separation then after the relink it may be necessary to set the correct permissions on the oracle binary using setasmgidwrap.

Related Posts
Upgrading RHEL 6 OS in a 11gR2 RAC Environment
Upgrading OS in 11gR2 RAC Environment (RHEL 5)
Upgrading ASMLib and OS in 11gR1 RAC Environment


Sunday, November 15, 2015

12c Encryption

This post gives a highlight of using TDE in 12c. For detailed information refer to the Advanced Security Guide. In 12c the orapki/ALTER SYSTEM commands related to key management have been replaced with ADMINISTER KEY MANAGEMENT commands.
The first set of steps shows setting up a non-CDB for the use of TDE.
1. Create a location for the wallet files (key store location).
mkdir -p /opt/app/oracle/product/12.1.0/dbhome_1/network/admin/tde
chmod 700 /opt/app/oracle/product/12.1.0/dbhome_1/network/admin/tde
2. Add ENCRYPTION_WALLET_LOCATION to sqlnet.ora file
ENCRYPTION_WALLET_LOCATION =
  (SOURCE =
   (METHOD = FILE)
    (METHOD_DATA =
     (DIRECTORY = /opt/app/oracle/product/12.1.0/dbhome_1/network/admin/tde)
    )
  )
3. Create the software key store (wallet) by specifying the key store location and a password for the key store (asanga123 in this case)
SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/opt/app/oracle/product/12.1.0/dbhome_1/network/admin/tde' identified by asanga123;
Result of this is a key store file (wallet file).
SQL> ! ls
ewallet.p12
At this stage the status of the key store would be closed
SQL> select WRL_TYPE,STATUS,WALLET_TYPE,WALLET_ORDER,FULLY_BACKED_UP,CON_ID,WRL_PARAMETER from v$ENCRYPTION_WALLET;

WRL_TYPE   STATUS   WALLET_TYPE   WALLET_OR FULLY_BAC     CON_ID WRL_PARAMETER
---------- -------  ------------- --------- --------- ---------- -----------------------------------------------------------
FILE       CLOSED   UNKNOWN       SINGLE    UNDEFINED          0 /opt/app/oracle/product/12.1.0/dbhome_1/network/admin/tde/ 
4. Open the key store by providing the key store password
SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY asanga123;
keystore altered.

SQL> select WRL_TYPE,STATUS,WALLET_TYPE,WALLET_ORDER,FULLY_BACKED_UP,CON_ID,WRL_PARAMETER from v$ENCRYPTION_WALLET;

WRL_TYPE   STATUS             WALLET_TYPE   WALLET_OR FULLY_BAC     CON_ID WRL_PARAMETER
---------- ------------------ ------------- --------- --------- ---------- -----------------------------------------------------------
FILE       OPEN_NO_MASTER_KEY PASSWORD      SINGLE    NO                 0 /opt/app/oracle/product/12.1.0/dbhome_1/network/admin/tde/ 
5. Usually at this stage the auto login key store is also created. But this leads to an issue (refer 1944507.1).
SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY asanga123 WITH BACKUP;
ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY asanga123 WITH BACKUP
*
ERROR at line 1:
ORA-28417: password-based keystore is not open

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE open identified by asanga123;
ADMINISTER KEY MANAGEMENT SET KEYSTORE open identified by asanga123
*
ERROR at line 1:
ORA-28354: Encryption wallet, auto login wallet, or HSM is already open
Therefore the master encryption key is created before creating the auto login wallet.
SQL>  ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY asanga123 with backup ;
keystore altered.
This will change the wallet status from open_no_master_key to open.
SQL> select WRL_TYPE,STATUS,WALLET_TYPE,WALLET_ORDER,FULLY_BACKED_UP,CON_ID,WRL_PARAMETER from v$ENCRYPTION_WALLET;

WRL_TYPE   STATUS  WALLET_TYPE   WALLET_OR FULLY_BAC     CON_ID WRL_PARAMETER
---------- ------- ------------- --------- --------- ---------- -----------------------------------------------------------
FILE       OPEN    PASSWORD      SINGLE    NO                 0 /opt/app/oracle/product/12.1.0/dbhome_1/network/admin/tde/
In the database alert log the following can be observed
Creating new database key for new master key and wallet
Creating new database key with the new master key
Retiring: ena 2 flag 6 mkloc 0
   encrypted key 8dd09c987ef966198af992379477f13900000000000000000000000000000000
   mkid b81a02de82664fbcbf2c9bcdcec4a3ae
Creating: ena 2 flag e mkloc 1
   encrypted key 92c72aeada0197dda6da3e4d64ac875c00000000000000000000000000000000
   mkid ac6f7e2c97ff4fdfbf9d900a653e2c21
Switching out all online logs for the new master key
6. Finally, create the auto login key store so that the key store is automatically opened when the database starts
SQL> ADMINISTER KEY MANAGEMENT CREATE local AUTO_LOGIN KEYSTORE FROM KEYSTORE '/opt/app/oracle/product/12.1.0/dbhome_1/network/admin/tde' identified by asanga123;
keystore altered.
This will create the cwallet.sso file and the wallet type will change to local_autologin
SQL> ! ls
cwallet.sso  ewallet_2015101517371593.p12  ewallet.p12  ewallet.p12.bak

select WRL_TYPE,STATUS,WALLET_TYPE,WALLET_ORDER,FULLY_BACKED_UP,CON_ID,WRL_PARAMETER from v$ENCRYPTION_WALLET;

WRL_TYPE   STATUS WALLET_TYPE       WALLET_OR FULLY_BAC     CON_ID WRL_PARAMETER
---------- ------ ----------------- --------- --------- ---------- -----------------------------------------------------------
FILE       OPEN   LOCAL_AUTOLOGIN   SINGLE    NO                 0 /opt/app/oracle/product/12.1.0/dbhome_1/network/admin/tde/ 
Encryption can be enabled on a CDB with PDBs the same way as above. Below is an example of enabling TDE for a CDB with two PDBs.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDBONE                         READ WRITE NO
         4 PDBTWO                         READ WRITE NO
Create key store, open and set master encryption key.
SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/opt/app/oracle/product/12.1.0/dbhome_1/network/admin/tde' identified by asanga123;
keystore altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY asanga123 container=all;
keystore altered.

SQL>  ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY asanga123 with backup container=all;
keystore altered.

SQL> select WRL_TYPE,STATUS,WALLET_TYPE,WALLET_ORDER,FULLY_BACKED_UP,CON_ID,WRL_PARAMETER from v$ENCRYPTION_WALLET;

WRL_TYPE   STATUS  WALLET_TYPE   WALLET_OR FULLY_BAC     CON_ID WRL_PARAMETER
---------- ------- ------------- --------- --------- ---------- -----------------------------------------------------------
FILE       OPEN    PASSWORD      SINGLE    NO                 0 /opt/app/oracle/product/12.1.0/dbhome_1/network/admin/tde/
Enable auto login
SQL> ADMINISTER KEY MANAGEMENT CREATE local AUTO_LOGIN KEYSTORE FROM KEYSTORE '/opt/app/oracle/product/12.1.0/dbhome_1/network/admin/tde' identified by asanga123;
keystore altered.
Log in to a PDB and check the wallet status
SQL> show con_name

CON_NAME
-----------
PDBTWO

select WRL_TYPE,STATUS,WALLET_TYPE,WALLET_ORDER,FULLY_BACKED_UP,CON_ID,WRL_PARAMETER from v$ENCRYPTION_WALLET;

WRL_TYPE   STATUS WALLET_TYPE       WALLET_OR FULLY_BAC     CON_ID WRL_PARAMETER
---------- ------ ----------------- --------- --------- ---------- -----------------------------------------------------------
FILE       OPEN   LOCAL_AUTOLOGIN   SINGLE    NO                 0 /opt/app/oracle/product/12.1.0/dbhome_1/network/admin/tde/ 
The v$encryption_keys view can be used to identify each container's key IDs even though the container ID remains 0 for all.
SQL> select CON_ID,KEY_ID,KEYSTORE_TYPE,CREATOR_DBNAME,CREATOR_PDBNAME from v$encryption_keys;

    CON_ID KEY_ID                                                 KEYSTORE_TYPE     CREATOR_DBNAME  CREATOR_PDBNAME
---------- ------------------------------------------------------ ----------------- --------------- ------------------------------
         0 AQugdpFHIk9yv1tQiZj0EhUAAAAAAAAAAAAAAAAAAAAAAAAAAAAA   SOFTWARE KEYSTORE PDBENC          CDB$ROOT
         0 Af8ZFXnsWk+/v9Z4RFalEHwAAAAAAAAAAAAAAAAAAAAAAAAAAAAA   SOFTWARE KEYSTORE PDBENC          PDBONE
         0 AU3e99wEOk8lv4QEBp4Ow3AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA   SOFTWARE KEYSTORE PDBENC          PDBTWO
The following test can be used to verify TDE is working. In this case a tablespace is created with encryption enabled. A few rows are inserted into tables created in those tablespaces. At times buffer cache flushing or manual checkpoints may be required to force the database writer to write to the data files.
SQL>  create tablespace enctest datafile size 10m ENCRYPTION DEFAULT STORAGE(ENCRYPT);
Tablespace created.
SQL>  create tablespace nonenctest datafile size 10m ;
Tablespace created.

SQL>select TABLESPACE_NAME,ENCRYPTED from dba_tablespaces;

TABLESPACE_NAME                ENC
------------------------------ ---
ENCTEST                        YES
NONENCTEST                     NO

create table enctable (a varchar2(100)) tablespace ENCTEST;
create table nonecntbale (a varchar2(100)) tablespace NONENCTEST;

begin
  for i in 1 .. 100
  loop
  insert into enctable values ('top secret text');
  insert into nonecntbale values ('top secret text');
  end loop;
  commit;
  end;
/
alter system checkpoint;
The strings command is used to look into the data file content. First, the data file belonging to the non-encrypted tablespace
strings /opt/app/oracle/oradata/ENT12C1/datafile/o1_mf_nonencte_c1zsml08_.dbf
top secret text,
top secret text,
Secondly, the data file belonging to the encrypted tablespace
strings /opt/app/oracle/oradata/ENT12C1/datafile/o1_mf_enctest_c1zsl0d9_.dbf
Vj#>{
O+l;2
c1ax|
        drl
fzc1
Lbqby%}
u;Fa
=B]Wv
~/th
9hHW
=Jc;
@s|J84
|3M*
2ATG
As seen above, the encrypted tablespace doesn't show its content in clear text.
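This check can be automated with grep's binary-safe text search instead of eyeballing the strings output. A sketch; the marker string and datafile paths are the ones from this test:

```shell
#!/bin/bash
# Return success if the given marker string appears in clear text
# inside the file (grep -a treats binary files as text).
leaks_plaintext() {
  grep -aq "$2" "$1"
}

# Expected results for this test:
# leaks_plaintext /opt/app/oracle/oradata/ENT12C1/datafile/o1_mf_nonencte_c1zsml08_.dbf 'top secret text'  -> found
# leaks_plaintext /opt/app/oracle/oradata/ENT12C1/datafile/o1_mf_enctest_c1zsl0d9_.dbf  'top secret text'  -> not found
```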



Encryption requires additional CPU. Next is a small test that was done to compare the CPU usage of inserting into an encryption enabled tablespace vs a non-encrypted tablespace.
create table NOencryptiontbl(a number, b varchar2(100)) tablespace NONENCTEST;
create table encryptiontbl(a number, b varchar2(100)) tablespace ENCTEST;
  
  begin
    for i in 1 .. 1000000
    loop
    --insert into encryptiontbl values (i,'asdfghjkllqwertyuiopzxcvbnm134567890'||i);
    insert into NOencryptiontbl values (i,'asdfghjkllqwertyuiopzxcvbnm134567890'||i);
    commit;
    end loop;
  end;
  /
The inserts were first done on the table created in the non-encrypted tablespace and then on the table created in the encryption enabled tablespace. The value of the 'CPU used by this session' statistic was used to compare the CPU usage of each session once the inserts had completed. The graph below shows the CPU used; with TDE enabled the inserts took on average 51 CPU seconds more to complete.
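Since the 'CPU used by this session' statistic is reported in centiseconds, a before/after pair of readings can be turned into a seconds delta. A sketch; the numbers below are illustrative, not the actual test readings:

```shell
#!/bin/bash
# Convert two 'CPU used by this session' readings (centiseconds)
# into the extra CPU seconds consumed between them.
cpu_delta_seconds() {
  awk -v a="$1" -v b="$2" 'BEGIN{printf "%.2f\n", (b - a) / 100}'
}

cpu_delta_seconds 1200 6300   # illustrative values: prints 51.00
```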

If TDE is used in a Data Guard environment then the standby must have the key store (wallet) copied over from the primary. Without the wallet open, recovery at the standby will fail. In the output below, datafile 17 was part of an encrypted tablespace created at the primary.
Datafile #17: '/opt/app/oracle/oradata/ENT12C1S/datafile/o1_mf_t2_bv215f2m_.dbf'
kcrf_decrypt_redokey: wallet is not opened..(err 28365)
Errors with log /opt/app/oracle/fast_recovery_area/ENT12C1S/archivelog/2015_07_23/o1_mf_1_2726_bv214gg4_.arc
MRP0: Background Media Recovery terminated with error 28365
Thu Jul 23 16:13:49 2015
Errors in file /opt/app/oracle/diag/rdbms/ent12c1s/ent12c1s/trace/ent12c1s_pr00_2541.trc:
ORA-28365: wallet is not open
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Recovered data files to a consistent state at change 41627576
Thu Jul 23 16:13:50 2015
Errors in file /opt/app/oracle/diag/rdbms/ent12c1s/ent12c1s/trace/ent12c1s_pr00_2541.trc:
ORA-28365: wallet is not open
Thu Jul 23 16:13:51 2015
MRP0: Background Media Recovery process shutdown (ent12c1s)
Another thing to watch out for is exporting using exp as opposed to expdp. Tables residing in encrypted tablespaces cannot be exported using exp.
exp asanga/asa file=asa.dmp tables=p1,p2
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
About to export specified tables via Conventional Path ...
EXP-00111: Table P1 resides in an Encrypted Tablespace T2 and will not be exported
EXP-00111: Table P2 resides in an Encrypted Tablespace T2 and will not be exported
Export terminated successfully with warnings.
Useful metalink notes
TDE Wallet Problem in 12c: Cannot do a Set Key operation when an auto-login wallet is present [ID 1944507.1]
Master Note For Transparent Data Encryption ( TDE ) [ID 1228046.1]
How to migrate a non pluggable database that uses TDE to pluggable database ? [ID 1678525.1]
Known TDE Wallet Issues [ID 1301365.1]
Auto Login Wallet Cannot Be Closed [ID 1204604.1]
Removing TDE, but V$ENCRYPTION_WALLET / GV$ENCRYPTION_WALLET still has rows. [ID 2003528.1]
Step by Step Troubleshooting Guide for TDE Error ORA-28374 [ID 1541818.1]
How To Import Encrypted Datapump Data from 11G To 12c DB [ID 1642059.1]

Related Post
19c Encryption

Sunday, November 8, 2015

Moving Grid Infrastructure Management Repository (GIMR) Database (MGMTDB) Out of Default Disk Group

With 12.1.0.2, creation of the Grid Infrastructure Management Repository (GIMR) is mandatory. This results in a CDB called -MGMTDB being created at the end of the GI installation. This CDB (-MGMTDB) contains a single PDB which has the same name as the cluster. If ASM is used for storing OCR/vote then, when creating the cluster, this database is by default created on the same disk group where OCR/vote resides. As of 12.1.0.2 there's no way to specify a different disk group for the GIMR alone during installation. This post shows the steps for moving the GIMR out of the default disk group to a different one.
1. Create a new ASM disk group to store the GIMR. In this case the new disk group is called GIMR and has external redundancy. The GIMR currently resides in a disk group called CLUSFS with normal redundancy. It's important that the newly created disk group has compatible.asm and compatible.rdbms set to 12.1.
SQL> select name,type total_mb,free_mb,REQUIRED_MIRROR_FREE_MB,USABLE_FILE_MB from v$asm_diskgroup;

NAME                           TOTAL_    FREE_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB
------------------------------ ------ ---------- ----------------------- --------------
CLUSFS                         NORMAL      17400                   10236           3582
GIMR                           EXTERN      10108                       0          10108

SQL> select name,os_mb,total_mb,free_mb from v$asm_disk where name is not null;

NAME                                OS_MB   TOTAL_MB    FREE_MB
------------------------------ ---------- ---------- ----------
GIMR_0000                           10236      10236      10108
CLUSFS_0002                         10236      10236       5804
CLUSFS_0000                         10236      10236       5812
CLUSFS_0001                         10236      10236       5800
2. Stop and disable ora.crf resource
# crsctl stop res ora.crf -init
CRS-2673: Attempting to stop 'ora.crf' on 'rhel12c1'
CRS-2677: Stop of 'ora.crf' on 'rhel12c1' succeeded

# crsctl modify res ora.crf -attr ENABLED=0 -init
3. Find the node the GIMR database is running on and run the delete database command from that node. This will drop the current GIMR database.
$ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node rhel12c1

[grid@rhel12c1 grid2]$ dbca -silent -deleteDatabase -sourceDB -MGMTDB
Connecting to database
4% complete
9% complete
14% complete
19% complete
23% complete
28% complete
47% complete
Updating network configuration files
48% complete
52% complete
Deleting instance and datafiles
76% complete
100% complete
Look at the log file "/opt/app/oracle/cfgtoollogs/dbca/_mgmtdb.log" for further details.
4. Create the GIMR with the SID "-MGMTDB", specifying the new ASM disk group
$ dbca -silent -createDatabase -sid -MGMTDB -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc 
-gdbName _mgmtdb -storageType ASM -diskGroupName +GIMR -datafileJarLocation $ORACLE_HOME/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck
Registering database with Oracle Grid Infrastructure
5% complete
Copying database files
7% complete
9% complete
16% complete
23% complete
30% complete
37% complete
41% complete
Creating and starting Oracle instance
43% complete
48% complete
49% complete
50% complete
55% complete
60% complete
61% complete
64% complete
Completing Database Creation
68% complete
79% complete
89% complete
100% complete
Look at the log file "/opt/app/oracle/cfgtoollogs/dbca/_mgmtdb/_mgmtdb0.log" for further details.


5. Before creating the PDB associated with the GIMR, find out the cluster name. There are many ways to find the cluster name; the simplest is to use cemutlo.
$ cemutlo -n
rhel12c-cluster
6. Trying to create the PDB with the above cluster name will fail.
$ dbca -silent -createPluggableDatabase -sourceDB -MGMTDB -pdbName rhel12c-cluster -createPDBFrom RMANBACKUP -PDBBackUpfile $GI_HOME/assistants/dbca/templates/mgmtseed_pdb.dfb -PDBMetadataFile $GI_HOME/assistants/dbca/templates/mgmtseed_pdb.xml -createAsClone true -internalSkipGIHomeCheck
Look at the log file "/opt/app/oracle/cfgtoollogs/dbca/_mgmtdb0.log" for further details.

$ more /opt/app/oracle/cfgtoollogs/dbca/_mgmtdb0.log
The Container database selected is in the open state
rhel12c-cluster: PDB Name must be at least 1 character and at most 30 characters, 
should start with an alphabetical character and must include only alphabetical characters, numbers or the '_' character.
The reason is that PDB names cannot contain a dash (-) while cluster names cannot contain an underscore (_). "The cluster name is case-insensitive, must be unique across your enterprise, must be at least one character long and no more than 15 characters in length, must be alphanumeric, cannot begin with a numeral, and may contain hyphens (-). Underscore characters (_) are not allowed."
So how did the GIMR get created in the first place? It seems that if the cluster name contains a dash (-), the installer implicitly replaces it with an underscore (_). This can be verified by looking in the $ORACLE_BASE/cfgtoollogs/dbca/_mgmtdb folder, which contains a folder called rhel12c_cluster holding all the log files related to the original PDB creation. Therefore, if the cluster name contains a dash, replace it with an underscore and run the PDB create command again.
$ dbca -silent -createPluggableDatabase -sourceDB -MGMTDB -pdbName rhel12c_cluster -createPDBFrom RMANBACKUP -PDBBackUpfile $GI_HOME/assistants/dbca/templates/mgmtseed_pdb.dfb -PDBMetadataFile $GI_HOME/assistants/dbca/templates/mgmtseed_pdb.xml -createAsClone true -internalSkipGIHomeCheck
Creating Pluggable Database
4% complete
12% complete
21% complete
38% complete
55% complete
85% complete
Completing Pluggable Database Creation
100% complete
Look at the log file "/opt/app/oracle/cfgtoollogs/dbca/_mgmtdb/rhel12c_cluster/_mgmtdb0.log" for further details.
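The dash-to-underscore translation the installer appears to apply can be reproduced with a one-line helper (a sketch):

```shell
#!/bin/bash
# Derive the GIMR PDB name from the cluster name: dashes are not valid
# in PDB names, so they become underscores (mirroring the installer).
pdb_name_for_cluster() {
  echo "${1//-/_}"
}

pdb_name_for_cluster rhel12c-cluster   # -> rhel12c_cluster
```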
7. Find out on which node the -MGMTDB database is running and run mgmtca from that node to secure the GIMR. Running mgmtca doesn't produce any output.
$ srvctl status MGMTDB
Database is enabled
Instance -MGMTDB is running on node rhel12c1
[grid@rhel12c1 _mgmtdb]$ mgmtca
8. Finally enable and start the ora.crf resource.
# crsctl modify res ora.crf -attr ENABLED=1 -init
# crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'rhel12c1'
CRS-2676: Start of 'ora.crf' on 'rhel12c1' succeeded
Useful metalink notes
Managing the Cluster Health Monitor Repository [ID 1921105.1]
FAQ: 12c Grid Infrastructure Management Repository (GIMR) [ID 1568402.1]
How to Move GI Management Repository to Different Shared Storage (Diskgroup, CFS or NFS etc) [ID 1589394.1]
12.2: How to Create GI Management Repository [ID 2246123.1]
MDBUtil: GI Management Repository configuration tool [ID 2065175.1]
How to relocate CHM repository and increase retention time [ID 2062234.1]

Sunday, November 1, 2015

Convert Single Instance DB to RAC DB - Manual Method

There are many ways to convert a single instance database to RAC: using a database template, rconfig, or the manual method. This post shows the steps of the manual method for converting a single instance database to RAC (tested with both 11.2.0.4 and 12.1.0.2). The single instance database runs out of a non-RAC Oracle home. A backup of it is restored using a RAC-enabled Oracle home and then converted to a RAC database.
The database is called "asanga" and will retain the same name when converted to RAC.
1. Create a pfile from the single instance database. The pfile entries are shown below
 more pfile.ora
*.audit_file_dest='/opt/app/oracle/admin/asanga/adump'
*.audit_trail='NONE'
*.compatible='11.2.0.4.0'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='ASANGA'
*.db_recovery_file_dest_size=5218762752
*.db_recovery_file_dest='+FLASH'
*.diagnostic_dest='/opt/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=asangaXDB)'
*.open_cursors=300
*.pga_aggregate_target=524288000
*.processes=3000
*.remote_login_passwordfile='EXCLUSIVE'
*.sessions=4524
*.sga_target=2147483648
*.undo_tablespace='UNDOTBS1'
2. Create the adump directories on all nodes of the cluster (in this case it is a two node cluster)
mkdir -p /opt/app/oracle/admin/asanga/adump
3. Copy the pfile created earlier to one of the RAC nodes and restore the database.
SQL> startup nomount pfile='/home/oracle/backup/pfile.ora';

SQL> create spfile from pfile='/home/oracle/backup/pfile.ora';
SQL> startup force nomount;

RMAN> restore controlfile from '/home/oracle/backup/ctlbkp04qfl69p_1_1.ctl';

Starting restore at 28-AUG-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=2275 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
output file name=+DATA/asanga/controlfile/current.270.888927491
output file name=+FLASH/asanga/controlfile/current.447.888927493
Finished restore at 28-AUG-15

RMAN> alter database mount;

RMAN> catalog start with '/home/oracle/backup/';

RMAN> run {
2> restore database;
3> recover database;
4> }

Starting restore at 28-AUG-15
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to +DATA/asanga/datafile/system.309.888838053
channel ORA_DISK_1: restoring datafile 00002 to +DATA/asanga/datafile/sysaux.308.888838053
channel ORA_DISK_1: restoring datafile 00003 to +DATA/asanga/datafile/undotbs1.307.888838053
channel ORA_DISK_1: restoring datafile 00004 to +DATA/asanga/datafile/users.310.888838053
channel ORA_DISK_1: reading from backup piece /home/oracle/backup/full_bkp_ASANGA_20150827_02qfl697_1_1
channel ORA_DISK_1: piece handle=/home/oracle/backup/full_bkp_ASANGA_20150827_02qfl697_1_1 tag=FULL_BKP
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
Finished restore at 28-AUG-15

Starting recover at 28-AUG-15
using channel ORA_DISK_1

starting media recovery

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=2
channel ORA_DISK_1: reading from backup piece /home/oracle/backup/full_arc_ASANGA_20150827_03qfl69n_1_1
channel ORA_DISK_1: piece handle=/home/oracle/backup/full_arc_ASANGA_20150827_03qfl69n_1_1 tag=FULL_ARC_BKP
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
archived log file name=+FLASH/asanga/archivelog/2015_08_28/thread_1_seq_2.444.888927577 thread=1 sequence=2
channel default: deleting archived log(s)
archived log file name=+FLASH/asanga/archivelog/2015_08_28/thread_1_seq_2.444.888927577 RECID=3 STAMP=888927577
unable to find archived log
archived log thread=1 sequence=3
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 08/28/2015 12:19:38
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 3 and starting SCN of 1483282

RMAN> alter database open resetlogs;

database opened
4. Shut down and restart the database to see if it opens without any issues.
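A minimal restart check could look like the following (a sketch, not taken from the original post; run from the restored instance):
SQL> shutdown immediate;
SQL> startup;
SQL> select name, open_mode from v$database;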

5. The single instance will have only one redo thread. Add another redo thread (or more if the RAC has more nodes) and enable it.
SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- --------- ------------ ---------
         1          1          1   52428800        512          2 NO  CURRENT                1483283 28-AUG-15   2.8147E+14
         2          1          0   52428800        512          2 YES UNUSED                       0                      0
         3          1          0   52428800        512          2 YES UNUSED                       0                      0


alter database add logfile thread 2 group 4 ('+DATA','+FLASH') size 50m ;
alter database add logfile thread 2 group 5 ('+DATA','+FLASH') size 50m ;
alter database add logfile thread 2 group 6 ('+DATA','+FLASH') size 50m ;

SQL> alter database enable public thread 2;

Database altered.

SQL>  select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- --------- ------------ ---------
         1          1          1   52428800        512          2 NO  CURRENT                1483283 28-AUG-15   2.8147E+14
         2          1          0   52428800        512          2 YES UNUSED                       0                      0
         3          1          0   52428800        512          2 YES UNUSED                       0                      0
         4          2          1   52428800        512          2 YES INACTIVE               1484502 28-AUG-15      1484512 28-AUG-15
         5          2          0   52428800        512          2 YES UNUSED                       0                      0
         6          2          0   52428800        512          2 YES UNUSED                       0                      0

6 rows selected.
6. The single instance would have had only one undo tablespace. Create an undo tablespace for each additional instance
create undo tablespace UNDOTBS2 datafile '+DATA(datafile)' SIZE 600M AUTOEXTEND ON NEXT 64M MAXSIZE UNLIMITED;



7. Create a pfile from the spfile and edit it, removing the *.undo_tablespace='UNDOTBS1' entry and adding instance specific entries
*.cluster_database_instances=2
*.cluster_database=true
asanga1.instance_number=1
asanga2.instance_number=2
asanga1.thread=1
asanga2.thread=2
asanga1.undo_tablespace='UNDOTBS1'
asanga2.undo_tablespace='UNDOTBS2'
Also make sure the log archive format includes the thread number (%t)
log_archive_format                   string      %t_%s_%r.dbf
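If the format lacks the thread number it can be changed as below (log_archive_format is a static parameter, so scope must be spfile and a restart is needed; the format string shown is simply the one above):
SQL> alter system set log_archive_format='%t_%s_%r.dbf' scope=spfile;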
8. Shut down the database, copy the new pfile to all nodes and start each instance using the pfile.
[oracle@rhel6m1 ~]$ export ORACLE_SID=asanga1

[oracle@rhel6m1 backup]$ sqlplus  / as sysdba

SQL> startup pfile='/home/oracle/backup/initasanga.ora';
ORACLE instance started.

Total System Global Area 2137886720 bytes
Fixed Size                  2254952 bytes
Variable Size            1811941272 bytes
Database Buffers          318767104 bytes
Redo Buffers                4923392 bytes
Database mounted.
Database opened.

SQL> select instance_name,status from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
asanga1          OPEN

[oracle@rhel6m2 ~]$ export ORACLE_SID=asanga2
[oracle@rhel6m2 ~]$ sqlplus  / as sysdba

SQL> startup pfile='/home/oracle/initasanga.ora';
ORACLE instance started.

Total System Global Area 2137886720 bytes
Fixed Size                  2254952 bytes
Variable Size            1811941272 bytes
Database Buffers          318767104 bytes
Redo Buffers                4923392 bytes
Database mounted.
Database opened.

SQL> select instance_name,status from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
asanga2          OPEN

SQL> select HOST_NAME,INSTANCE_NUMBER,INSTANCE_NAME,STATUS,THREAD# from gv$instance;

HOST_NAME                 INSTANCE_NUMBER INSTANCE_NAME    STATUS          THREAD#
------------------------- --------------- ---------------- ------------ ----------
rhel6m2.domain.net                     2 asanga2          OPEN                  2
rhel6m1.domain.net                     1 asanga1          OPEN                  1
9. Once confirmed that all instances open without any issue, create an spfile in a shared location.
SQL> create spfile='+data' from pfile='/home/oracle/backup/initasanga.ora';
Make an alias in ASM for the spfile
ASMCMD> mkalias spfile.283.888929113 spfileasanga.ora
ASMCMD> ls -l
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   AUG 28 12:00:00  Y    spfile.283.888929113
                                                 N    spfileasanga.ora => +DATA/ASANGA/PARAMETERFILE/spfile.283.888929113
Create an instance specific pfile on each node with an entry pointing to the spfile alias. Also remove the spfile created earlier from the local node
cat initasanga1.ora
spfile='+DATA/ASANGA/PARAMETERFILE/spfileasanga.ora'

scp initasanga1.ora rhel6m2:`pwd`/initasanga2.ora
10. Create oracle password files for each node
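The post doesn't show the commands for this step; one way to do it is with orapwd run as the oracle user on each node (the SYS password below is a placeholder):
[oracle@rhel6m1 ~]$ orapwd file=$ORACLE_HOME/dbs/orapwasanga1 password=<sys_password> entries=10
[oracle@rhel6m2 ~]$ orapwd file=$ORACLE_HOME/dbs/orapwasanga2 password=<sys_password> entries=10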

11. Run the following script to create cluster related views
@?/rdbms/admin/catclust.sql
After running this script the database registry will show the RAC component entry
COMP_ID    COMP_NAME                           STATUS     VERSION
---------- ----------------------------------- ---------- ----------
RAC        Oracle Real Application Clusters    VALID      11.2.0.4.0
12. Add the database to the cluster
srvctl add database -d asanga -o $ORACLE_HOME -p "+DATA/ASANGA/PARAMETERFILE/spfileasanga.ora" 
srvctl add instance -d asanga -i asanga1 -n rhel6m1
srvctl add instance -d asanga -i asanga2 -n rhel6m2

[oracle@rhel6m1 dbs]$ srvctl config database -d asanga -a
Database unique name: asanga
Database name: rhel6m1
Oracle home: /opt/app/oracle/product/11.2.0/dbhome_4
Oracle user: oracle
Spfile: +DATA/ASANGA/PARAMETERFILE/spfileasanga.ora
Domain: local
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: asanga
Database instances: asanga1,asanga2
Disk Groups:
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed
13. Shut down both instances and start the database using srvctl.
srvctl start database -d asanga
srvctl status database -d asanga
Instance asanga1 is running on node rhel6m1
Instance asanga2 is running on node rhel6m2

srvctl config database -d asanga -a
Database unique name: asanga
Database name: rhel6m1
Oracle home: /opt/app/oracle/product/11.2.0/dbhome_4
Oracle user: oracle
Spfile: +DATA/ASANGA/PARAMETERFILE/spfileasanga.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: asanga
Database instances: asanga1,asanga2
Disk Groups: DATA,FLASH
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed
14. Verify the database is also listed as a cluster resource
crsctl stat res ora.asanga.db
NAME=ora.asanga.db
TYPE=ora.database.type
TARGET=ONLINE           , ONLINE
STATE=ONLINE on rhel6m1, ONLINE on rhel6m2

Tuesday, October 20, 2015

12c Compression Comparison

OLTP table compression, introduced in 11g, is called Advanced Row Compression in 12c. However the old syntax still works in 12c as well. Advanced row compression maintains compression during all types of data manipulation operations, including conventional DML such as INSERT and UPDATE. Oracle's claim is that this minimizes write overhead, making it suitable for both OLTP systems and data warehouses. Advanced Row Compression requires purchasing the Oracle Advanced Compression option, while basic table compression is a feature of Oracle Database 12c Enterprise Edition (EE).
This post compares the reduction in storage use, the CPU overhead and the redo generated for advanced vs basic vs no compression. It is not an extensive test; how useful compression is depends on the data being compressed.
As said earlier, COMPRESS FOR OLTP creates the table with advanced row compression.
create table x (a number, b varchar2(100)) compress for oltp;
select table_name,COMPRESSION,COMPRESS_FOR from  user_tables where table_name='X';

TABLE_NAME COMPRESS COMPRESS_FOR
---------- -------- ------------------------------
X          ENABLED  ADVANCED

create table x (a number, b varchar2(100)) ROW STORE COMPRESS ADVANCED;
select table_name,COMPRESSION,COMPRESS_FOR from  user_tables where table_name='X';

TABLE_NAME COMPRESS COMPRESS_FOR
---------- -------- ------------------------------
X          ENABLED  ADVANCED
Specifying COMPRESS without any option creates basic compression; it could also be specified explicitly.
create table x (a number, b varchar2(100)) compress;
SQL> select table_name,COMPRESSION,COMPRESS_FOR from  user_tables where table_name='X';

TABLE_NAME COMPRESS COMPRESS_FOR
---------- -------- ------------------------------
X          ENABLED  BASIC

create table x (a number, b varchar2(100)) ROW STORE COMPRESS basic;
select table_name,COMPRESSION,COMPRESS_FOR from  user_tables where table_name='X';

TABLE_NAME COMPRESS COMPRESS_FOR
---------- -------- ------------------------------
X          ENABLED  BASIC
For the test case three tables were created, with no compression, advanced compression and basic compression. The database is 12.1.0.2.
create table NOCOMPRESSTABLE(a number, b varchar2(100));
create table ADVCOMPRESSTABLE(a number, b varchar2(100)) ROW STORE COMPRESS ADVANCED;
create table BASICVCOMPRESSTABLE(a number, b varchar2(100)) ROW STORE COMPRESS basic;

SQL> select table_name,COMPRESSION,COMPRESS_FOR from  user_tables where table_name like '%COMPRESSTABLE';

TABLE_NAME           COMPRESS COMPRESS_FOR
-------------------- -------- -------------
NOCOMPRESSTABLE      DISABLED
BASICVCOMPRESSTABLE  ENABLED  BASIC
ADVCOMPRESSTABLE     ENABLED  ADVANCED
Each table consists of two columns and was populated using the following anonymous PL/SQL block. It inserts a unique value into the first column while the second column gets the same value with each row.
begin
    for i in 1 .. 1000000
    loop
    insert into NOCOMPRESSTABLE values (i,'asdfghjkllqwertyuiopzxcvbnm134567890');
    insert into ADVCOMPRESSTABLE values(i,'asdfghjkllqwertyuiopzxcvbnm134567890');
    insert into BASICVCOMPRESSTABLE values(i,'asdfghjkllqwertyuiopzxcvbnm134567890');
    commit;
    end loop;
end;
/
At the end of the insert the segment size of each table was measured
select segment_name,bytes/1024/1024 as MB from user_segments where segment_name like '%COMPRESS%';

SEGMENT_NAME          MB
--------------------- ---
ADVCOMPRESSTABLE      14
BASICVCOMPRESSTABLE   47
NOCOMPRESSTABLE       52
The results are no surprise: the segment of the table created with advanced compression is the smallest and the table with no compression is the largest. However this test is the most optimistic of cases, where one column consists of only a single value.
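The expected benefit can also be estimated without loading multiple tables by using the DBMS_COMPRESSION package. The following is a rough sketch assuming a scratch tablespace named USERS; the exact parameter list should be checked against the 12.1 documentation before use:
set serveroutput on
declare
    l_blkcnt_cmp   pls_integer;
    l_blkcnt_uncmp pls_integer;
    l_row_cmp      pls_integer;
    l_row_uncmp    pls_integer;
    l_cmp_ratio    number;
    l_comptype_str varchar2(100);
begin
    dbms_compression.get_compression_ratio(
        scratchtbsname => 'USERS',         -- assumption: scratch tablespace name
        ownname        => user,
        objname        => 'NOCOMPRESSTABLE',
        subobjname     => null,
        comptype       => dbms_compression.comp_advanced,
        blkcnt_cmp     => l_blkcnt_cmp,
        blkcnt_uncmp   => l_blkcnt_uncmp,
        row_cmp        => l_row_cmp,
        row_uncmp      => l_row_uncmp,
        cmp_ratio      => l_cmp_ratio,
        comptype_str   => l_comptype_str);
    dbms_output.put_line('Estimated compression ratio: '||l_cmp_ratio||' ('||l_comptype_str||')');
end;
/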
Therefore the test was rerun after recreating the tables, with the inserts modified as below so that some of the values inserted into the second column are duplicates, but not all of them
begin
    for i in 1 .. 1000000
    loop
--    insert into NOCOMPRESSTABLE values (i,'asdfghjkllqwertyuiopzxcvbnm134567890'||mod(i,500));
    insert into ADVCOMPRESSTABLE values(i,'asdfghjkllqwertyuiopzxcvbnm134567890'||mod(i,500));
--    insert into BASICVCOMPRESSTABLE values(i,'asdfghjkllqwertyuiopzxcvbnm134567890'||mod(i,500));
    commit;
    end loop;
end;
/
Only one table was inserted into at a time, and the CPU used by the session and the redo size were measured for each test, shown on the graphs below.
Strangely, the inserts done with compression options used less CPU than the inserts done without any compression (the tests were repeated 3 times and the same pattern was observed). However the difference wasn't huge. On the other hand, use of the advanced compression option resulted in more redo being generated than basic compression or no compression.
Comparing the segment size revealed the following.
select segment_name,bytes/1024/1024 as MB from user_segments where segment_name like '%COMPRESS%';

SEGMENT_NAME          MB
--------------------- ---
ADVCOMPRESSTABLE      55
BASICVCOMPRESSTABLE   50
NOCOMPRESSTABLE       55
It seems basic compression is slightly better than no compression, but advanced compression didn't yield any benefit at all. The reason could be that advanced compression didn't encounter enough duplicate values inside a block to make any significant reduction. It's important to remember that compression happens at block level.
The test was rerun, this time with the tables created in a tablespace with a block size of 32k. The segment sizes at the end of the test are as below
select segment_name,bytes/1024/1024 as MB from user_segments where segment_name like '%COMPRESS%';

SEGMENT_NAME          MB
--------------------- ---
ADVCOMPRESSTABLE      49
BASICVCOMPRESSTABLE   49
NOCOMPRESSTABLE       54
This time there's a difference between the segment sizes of the tables created with a compression option and the table created without any compression. However the reduction in segment size is not the same as when all the values were duplicates.
From this simple test case it seems that, to benefit from a compression strategy, the nature of the data is more important than the compression option being used. The reduction in space is dictated by how many duplicate values are found inside a block.

Wednesday, October 14, 2015

Change in Table Locking (TM) Behavior When Foreign Key Columns are Unindexed in 12.1.0.2

One of the most common causes of table lock related waits (enq: TM - contention) is unindexed foreign key columns. There are two previous posts related to this: TM enq: Locking and Invisible Indexes and TM enq: Locks. One of the scenarios examined in those posts was one session inserting into the parent table while another session updates the primary key column or deletes a row in the parent table. In this scenario the second session would hang, observed in both 11.2.0.3 and 11.2.0.4. However this is no longer the case in 12c (tested on 12.1.0.2). This post looks at the change in this behavior.
First the test on 11.2.0.4. The test case is the same as the one used in the previous posts, where table X is the parent table and Y is the child table which references table X, and the foreign key column is not indexed. The tables were populated the same way as before. Two sessions were created using sqlplus with autocommit set to off.
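The DDL isn't repeated in this post; a minimal sketch of the assumed schema (column names are illustrative) would be:
create table x (a number primary key, b number);
create table y (c number primary key, d number references x(a));
-- note: deliberately no index on y.d, the foreign key column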
Session one does an insert to the primary table
SQL> select sys_context('userenv','sid') sid from dual;

SID
----
32

SQL> insert into x values (11,21);

1 row created.
Looking at the locks taken by session one (SID 32), it can be seen that TM locks are taken on both the parent and child tables.
SQL> select * from v$lock where sid=32;

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
0000000090D90E10 0000000090D90E68         32 AE        100          0          4          0        134          2
00007FFC99FE6E10 00007FFC99FE6E70         32 TM      72423          0          3          0         70          2
00007FFC99FE6E10 00007FFC99FE6E70         32 TM      72425          0          3          0         70          2
000000008DBA98F0 000000008DBA9968         32 TX     458766       5244          6          0         70          2

SQL> select object_name,object_id from user_objects where object_id in (72423,72425);

OBJECT_NAME  OBJECT_ID
----------- ----------
X               72423
Y               72425
The ID1 column gives the object IDs, and querying the user objects view shows that TM locks are taken for both the parent (X) and child (Y) tables. The locking mode is 3, which is row-X (SX). The 11.2 Oracle documentation also says "Inserts into the parent table do not acquire table locks on the child table", which is clearly not the case; however this text has been changed in the 12c documentation to state "Inserts into the parent table do not acquire blocking table locks that prevent DML on the child table".
The second session does an update on the parent table, changing a value in the primary key column. This results in the second session hanging, which is also stated in the 11.2 documentation: "When both of the following conditions are true, the database acquires a full table lock on the child table:
1. No index exists on the foreign key column of the child table.
2. A session modifies a primary key in the parent table (for example, deletes a row or modifies primary key attributes) or merges rows into the parent table. Inserts into the parent table do not acquire table locks on the child table."
Condition 1 is true in this case, but so far there has been only an insert into the parent table and this has resulted in a TM lock even though the documentation says otherwise. The second condition is true only for the second session, which updates a primary key.
SQL> select sys_context('userenv','sid') sid from dual;

SID
-----
149

SQL> update x set a = 12 where a = 7;
The update hangs, and querying the locks for both sessions 32 and 149 gives the following
SQL>  select * from v$lock where sid in (149,32);

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
0000000090D8EF40 0000000090D8EF98        149 AE        100          0          4          0       1027          2
0000000090D90E10 0000000090D90E68         32 AE        100          0          4          0       1029          2
00007FFC99FE6E10 00007FFC99FE6E70         32 TM      72423          0          3          0        965          2
00007FFC99FE6E10 00007FFC99FE6E70         32 TM      72425          0          3          0        965          1
00007FFC99FE6E10 00007FFC99FE6E70        149 TM      72423          0          3          0         58          2
00007FFC99FE6E10 00007FFC99FE6E70        149 TM      72425          0          0          4         58          0
000000008DBA98F0 000000008DBA9968         32 TX     458766       5244          6          0        965          2
From the lock view it can be seen that the second session (SID 149) has also locked the parent table (object ID 72423, table X) and is trying to lock the child table, same as session 32. It requests the child table lock in mode 4 (share), but the table is already locked by session 32 in mode 3, so the request is not granted. The BLOCK column shows that session 32's lock on object 72425 (child table Y) is the blocker.
The DBA_WAITERS view gives the holding and waiting sessions
SQL> select * from dba_waiters;

WAITING_SESSION HOLDING_SESSION LOCK_TYPE                  MODE_HELD  MODE_REQUE   LOCK_ID1   LOCK_ID2
--------------- --------------- -------------------------- ---------- ---------- ---------- ----------
            149              32 DML                        Row-X (SX) Share           72425          0
and the session wait view shows the wait event
SQL> select sid,event from v$session_wait where sid in (32,149);

       SID EVENT
---------- -------------------------
        32 SQL*Net message to client
       149 enq: TM - contention



Now the same test on 12c (12.1.0.2) both on EE and SE2. Session one does the insert
SQL> select sys_context('userenv','sid') sid from dual;

SID
-----
399

SQL> insert into x values (11,21);

1 row created.
Looking at the locks, it can be seen that the child table is locked in mode 2 (row-S (SS)), a shared mode, which is different from the behavior in 11.2
SQL> select * from v$lock where sid=399;

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK     CON_ID
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ---------- ----------
000000013D4CBA98 000000013D4CBB10        399 AE        133          0          4          0        155          0          0
000000013731A468 000000013731A4E8        399 TX     393216      33536          6          0         89          0          0
00002B104C568A28 00002B104C568A90        399 TM     137537          0          3          0         89          0          0
00002B104C568A28 00002B104C568A90        399 TM     137539          0          2          0         89          0          0

SQL> select object_name,object_id from user_objects where object_id in (137537,137539);

OBJECT_NAME  OBJECT_ID
----------- ----------
X               137537
Y               137539
Unlike in the 11.2 environment, in 12c the second session running the update statement succeeds.
SQL> select sys_context('userenv','sid') sid from dual;

SID
-----
769

SQL>  update x set a = 12 where a = 7;

1 row updated.
Looking at the lock view, it shows that the second session only has a table lock on the parent table and no locks on the child table
SQL> select * from v$lock where sid in (769,399);

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK     CON_ID
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ---------- ----------
000000013D4CBA98 000000013D4CBB10        399 AE        133          0          4          0        426          0          0
000000013D4CCDC8 000000013D4CCE40        769 AE        133          0          4          0        423          0          0
00002B104C5679F0 00002B104C567A58        399 TM     137537          0          3          0        360          0          0
00002B104C5679F0 00002B104C567A58        399 TM     137539          0          2          0        360          0          0
00002B104C5679F0 00002B104C567A58        769 TM     137537          0          3          0         79          0          0
00000001373198C8 0000000137319948        769 TX     327692      33608          6          0         79          0          0
000000013731A468 000000013731A4E8        399 TX     393216      33536          6          0        360          0          0
This is a change in behavior in 12c, and the Oracle documentation states "Inserts into the parent table do not acquire blocking table locks that prevent DML on the child table. In the case of inserts, the database acquires a lock on the child table that prevents structural changes, but not modifications of existing or newly added rows."
This change of locking behavior in 12c could allow removing some indexes on child tables, if the only reason for creating them was to prevent locking situations similar to the one observed above in 11.2. However the other locking situations due to unindexed foreign key columns remain.
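Before deciding which indexes are candidates for removal, unindexed foreign key columns can be listed with a query along these lines (a common sketch; composite foreign keys need a more elaborate check):
select c.table_name, cc.column_name, c.constraint_name
from user_constraints c
join user_cons_columns cc on c.constraint_name = cc.constraint_name
where c.constraint_type = 'R'
and not exists (select 1 from user_ind_columns ic
                where ic.table_name = cc.table_name
                and ic.column_name = cc.column_name
                and ic.column_position = 1);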

Thursday, October 1, 2015

Setting Up Data Guard Broker for an Existing Data Guard Configuration with Far Sync

This post lists the steps for setting up a data guard broker configuration for an existing data guard configuration with far sync. The data guard configuration is the same setup that had far sync instances added earlier. The current setup of the data guard configuration is given below.
Before setting up the dg broker, patch 19399918 (included in 12.1.0.2.4) and patch 19571599 (at the time of writing only available for systems patched to 12.1.0.2.4) must be applied to all oracle homes involved; this includes the oracle homes used for the far sync instances as well. Without the patches there will be issues with the dg configuration, and switchover will fail when a redo route has an alternate option. However, switchover works without the alternate option on the redo route.
DGMGRL> show database ent12c1 LogXptStatus
Error: ORA-16777: unable to find the destination entry of a standby database in V$ARCHIVE_DEST

DGMGRL> switchover to ent12c1s;
Performing switchover NOW, please wait...
Error: ORA-16778: redo transport
Another prerequisite is the clearing of the log_archive_dest_* parameters in all the instances (including the far sync instances). Otherwise trying to create a dg configuration or add a (far sync) instance will fail.
DGMGRL> create configuration ent12c1_dgb as primary database is ent12c1 connect identifier is ent12c1tns;
Error: ORA-16698: LOG_ARCHIVE_DEST_n parameter set for object to be added

DGMGRL> add database ent12c1s as connect identifier is ent12c1stns;
Error: ORA-16698: LOG_ARCHIVE_DEST_n parameter set for object to be added

DGMGRL> ADD FAR_SYNC fs12c1 AS CONNECT IDENTIFIER IS fs12c1tns;
Error: ORA-16698: LOG_ARCHIVE_DEST_n parameter set for object to be added
This is a new requirement introduced in 12c which states "as of Oracle Database 12c Release 1 (12.1), for all databases to be added to a broker configuration, any LOG_ARCHIVE_DEST_n parameters that have the SERVICE attribute set, but not the NOREGISTER attribute, must be cleared."
1. To add the dg broker configuration, clear the existing log_archive_dest_* parameters. In the existing setup destinations 2 and 3 were used for log shipping.
alter system set log_Archive_dest_2='' scope=both;
alter system set log_Archive_dest_3='' scope=both;
alter system set log_archive_dest_state_3='enable' scope=both;
2. Create the dg broker configuration by adding the primary, standby and far sync instances. In this case ENT12C1 is the primary database, ENT12C1S is the standby, and FS12C1 is the far sync instance to which the primary database ships redo synchronously. After a role switch, when ENT12C1S becomes the new primary, it uses FS12C1S as the far sync instance to transport redo synchronously.
create configuration ent12c1_dgb as primary database is ent12c1 connect identifier is ent12c1tns;
add database ent12c1s as connect identifier is ent12c1stns;
add FAR_SYNC fs12c1 as connect identifier is fs12c1tns;
add FAR_SYNC fs12c1s as connect identifier is fs12c1stns;

show configuration;

Configuration - ent12c1_dgb

  Protection Mode: MaxPerformance
  Members:
  ent12c1  - Primary database
    ent12c1s - Physical standby database
    fs12c1   - Far sync instance
    fs12c1s  - Far sync instance

Fast-Start Failover: DISABLED

Configuration Status:
DISABLED
3. Enable the configuration and far_sync.
enable configuration;
Enabled.
enable far_sync fs12c1;
Enabled.
enable far_sync fs12c1s;
Enabled.
4. Set the redo transport mode to SYNC for the far sync instances (the default is ASYNC) and set appropriate redo routes for each instance. In this first instance the redo routes are set such that the primary ships to the far sync and the far sync ships to the standby. There's no redo route between the primary and the standby. This is different from the earlier setup and will be changed in subsequent steps. It's also worth noting that "if a database receives redo from a database or far sync instance where the RedoRoutes property has been configured with a redo transport mode, then the mode specified by that RedoRoutes property value overrides the value of the LogXptMode property".
EDIT far_sync fs12c1 SET PROPERTY LogXptMode='SYNC';
EDIT far_sync fs12c1s SET PROPERTY LogXptMode='SYNC';

EDIT DATABASE ent12c1 SET PROPERTY 'RedoRoutes' = '(LOCAL : FS12C1 SYNC)';
EDIT FAR_SYNC fs12c1 SET PROPERTY 'RedoRoutes' = '(ent12c1 : ent12c1s ASYNC)';
EDIT DATABASE ent12c1s SET PROPERTY 'RedoRoutes' = '(LOCAL : FS12C1S SYNC)';
EDIT FAR_SYNC fs12c1s SET PROPERTY 'RedoRoutes' = '(ent12c1s : ent12c1 ASYNC)';
5. Verify the configuration and validate the databases and far sync instances
DGMGRL> show configuration

Configuration - ent12c1_dgb

  Protection Mode: MaxPerformance
  Members:
  ent12c1  - Primary database
    fs12c1   - Far sync instance
      ent12c1s - Physical standby database

  Members Not Receiving Redo:
  fs12c1s  - Far sync instance

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 49 seconds ago)

DGMGRL> show configuration when primary is ent12c1s

Configuration when ent12c1s is primary - ent12c1_dgb

  Members:
  ent12c1s - Primary database
    fs12c1s  - Far sync instance
      ent12c1  - Physical standby database

  Members Not Receiving Redo:
  fs12c1   - Far sync instance

DGMGRL> validate far_sync fs12c1
    Member Role:        Far Sync Instance
    When Primary Is:    ent12c1

    Active Redo Source: ent12c1
    Redo Destinations:
                        ent12c1s

    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              ent12c1                 fs12c1
    1         3                       4                       Sufficient SRLs

DGMGRL> validate far_sync fs12c1s when primary is ent12c1s
    Member Role:        Far Sync Instance
    When Primary Is:    ent12c1s
    Redo Destinations:
                        ent12c1

    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              ent12c1s                fs12c1s
    1         3                       4                       Sufficient SRLs


DGMGRL> validate database ent12c1s

  Database Role:     Physical standby database
  Primary Database:  ent12c1

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Flashback Database Status:
    ent12c1:   Off
    ent12c1s:  Off

  Transport-Related Property Settings:
    Property                        ent12c1 Value            ent12c1s Value
    RedoRoutes                      (LOCAL : FS12C1 SYNC)    (LOCAL : FS12C1S SYNC)

DGMGRL> validate database ent12c1

  Database Role:    Primary database

  Ready for Switchover:  Yes

  Flashback Database Status:
    ent12c1:  Off
6. During validation of the databases, the alert log shows the following warning:
SWITCHOVER VERIFY BEGIN
SWITCHOVER VERIFY WARNING: no standby database is defined in LOG_ARCHIVE_DEST_n to protect this database if it is converted to a primary database
SWITCHOVER VERIFY COMPLETE
However, the switchover completed without any issues:
DGMGRL> switchover to ent12c1s
Performing switchover NOW, please wait...
Operation requires a connection to instance "ent12c1s" on database "ent12c1s"
Connecting to instance "ent12c1s"...
Connected as SYSDBA.
New primary database "ent12c1s" is opening...
Operation requires start up of instance "ent12c1" on database "ent12c1"
Starting instance "ent12c1"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "ent12c1s"


DGMGRL> show configuration

Configuration - ent12c1_dgb

  Protection Mode: MaxPerformance
  Members:
  ent12c1s - Primary database
    fs12c1s  - Far sync instance
      ent12c1  - Physical standby database

  Members Not Receiving Redo:
  fs12c1   - Far sync instance

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 3 seconds ago)

DGMGRL> switchover to ent12c1
Performing switchover NOW, please wait...
Operation requires a connection to instance "ent12c1" on database "ent12c1"
Connecting to instance "ent12c1"...
Connected as SYSDBA.
New primary database "ent12c1" is opening...
Operation requires start up of instance "ent12c1s" on database "ent12c1s"
Starting instance "ent12c1s"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "ent12c1"
7. Once the switchovers complete successfully, it's time to modify the redo routes so that there is an alternate path between the primary and the standby in case of a far sync failure. The redo transport mode between the primary and the standby will be ASYNC. In order to provide an alternative redo route, the MaxFailure property must be set to a non-zero value; without this change an alternate route cannot be specified.
DGMGRL> EDIT DATABASE ent12c1 SET PROPERTY RedoRoutes = '(LOCAL : FS12C1 SYNC ALT=(ent12c1s ASYNC FALLBACK))';
Error: ORA-16863: A member whose MaxFailure property is set to zero cannot have an alternate destination.

EDIT DATABASE ent12c1 SET PROPERTY 'MaxFailure' = 1;
EDIT FAR_SYNC fs12c1 SET PROPERTY 'MaxFailure' = 1;
EDIT DATABASE ent12c1s SET PROPERTY 'MaxFailure' = 1;
EDIT FAR_SYNC fs12c1s SET PROPERTY 'MaxFailure' = 1;
The MaxFailure property must be set for the far sync instances as well, even though no alternate destination is set on them; failing to do so results in the above error. Note also that it is not possible to change MaxFailure back to 0 once an alternate destination is set.
DGMGRL>  show far_sync fs12c1 RedoRoutes
  RedoRoutes = '(ent12c1 : ent12c1s ASYNC)'

DGMGRL> EDIT FAR_SYNC fs12c1 SET PROPERTY 'MaxFailure' =0;
Error: ORA-16864: The MaxFailure property cannot be set to zero for a member that has an alternate destination.
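Before setting the alternate routes it may help to confirm that MaxFailure is now non-zero on every member. A sketch using the broker's per-property show syntax (the same syntax used above for RedoRoutes), with the member names from this configuration:

```
DGMGRL> show database ent12c1 MaxFailure
DGMGRL> show database ent12c1s MaxFailure
DGMGRL> show far_sync fs12c1 MaxFailure
DGMGRL> show far_sync fs12c1s MaxFailure
```

Each command should report the value set earlier (1 in this case); any member still at 0 will block the ALT edits in the next step.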
8. Set the redo routes for the databases so there is an alternate route for redo transport if the far sync is not available. As mentioned earlier, patch 19399918 must be applied to all instances (including the far sync instances) for this to work.
EDIT DATABASE ent12c1 SET PROPERTY RedoRoutes = '(LOCAL : FS12C1 SYNC ALT=(ent12c1s ASYNC FALLBACK))';
EDIT DATABASE ent12c1s SET PROPERTY RedoRoutes = '(LOCAL : FS12C1s SYNC ALT=(ent12c1 ASYNC FALLBACK))';
9. Verify the configuration shows the alternate destination.
DGMGRL> show configuration verbose

Configuration - ent12c1_dgb

  Protection Mode: MaxPerformance
  Members:
  ent12c1  - Primary database
    fs12c1   - Far sync instance
      ent12c1s - Physical standby database
    ent12c1s - Physical standby database (alternate of fs12c1)

  Members Not Receiving Redo:
  fs12c1s  - Far sync instance

DGMGRL> show configuration when primary is ent12c1s

Configuration when ent12c1s is primary - ent12c1_dgb

  Members:
  ent12c1s - Primary database
    fs12c1s  - Far sync instance
      ent12c1  - Physical standby database
    ent12c1  - Physical standby database (alternate of fs12c1s)

  Members Not Receiving Redo:
  fs12c1   - Far sync instance

DGMGRL> validate database ent12c1s

  Database Role:     Physical standby database
  Primary Database:  ent12c1

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Flashback Database Status:
    ent12c1:   Off
    ent12c1s:  Off

  Transport-Related Property Settings:
    Property                        ent12c1 Value            ent12c1s Value
    RedoRoutes                      (LOCAL : FS12C1 SYNC ALT=(ent12c1s ASYNC(LOCAL : FS12C1s SYNC ALT=(ent12c1 ASYNC
Check that the alternate location is visible in the v$archive_dest view of the primary database:
SQL> SELECT DEST_NAME,STATUS,DESTINATION,TRANSMIT_MODE FROM V$ARCHIVE_DEST WHERE DESTINATION IS NOT NULL;

DEST_NAME            STATUS    DESTINATION                    TRANSMIT_MOD
-------------------- --------- ------------------------------ ------------
LOG_ARCHIVE_DEST_1   VALID     USE_DB_RECOVERY_FILE_DEST      SYNCHRONOUS
LOG_ARCHIVE_DEST_2   VALID     fs12c1tns                      PARALLELSYNC
LOG_ARCHIVE_DEST_3   ALTERNATE ent12c1stns                    ASYNCHRONOUS
In case of a far sync failure the alternate destination becomes valid:
SQL> SELECT DEST_NAME,STATUS,DESTINATION,TRANSMIT_MODE FROM V$ARCHIVE_DEST WHERE DESTINATION IS NOT NULL;

DEST_NAME                      STATUS    DESTINATION                    TRANSMIT_MOD
------------------------------ --------- ------------------------------ ------------
LOG_ARCHIVE_DEST_1             VALID     USE_DB_RECOVERY_FILE_DEST      SYNCHRONOUS
LOG_ARCHIVE_DEST_2             ALTERNATE fs12c1tns                      PARALLELSYNC
LOG_ARCHIVE_DEST_3             VALID     ent12c1stns                    ASYNCHRONOUS
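When the alternate destination has taken over, redo should still be reaching the standby, just asynchronously. One way to confirm this (a sketch; run on the standby) is to check the transport and apply lag in v$dataguard_stats:

```sql
-- Run on the standby: a small, bounded lag indicates redo is
-- still arriving, now over the alternate ASYNC path.
SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');
```

A transport lag that keeps growing would indicate the alternate path is not shipping redo.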
10. Check for inconsistent properties and rectify any that exist. Finally, carry out switchovers.
DGMGRL> show database ent12c1 InconsistentLogXptProps;
INCONSISTENT LOG TRANSPORT PROPERTIES
   INSTANCE_NAME         STANDBY_NAME        PROPERTY_NAME         MEMORY_VALUE         BROKER_VALUE

DGMGRL> show database ent12c1s InconsistentLogXptProps;
INCONSISTENT LOG TRANSPORT PROPERTIES
   INSTANCE_NAME         STANDBY_NAME        PROPERTY_NAME         MEMORY_VALUE         BROKER_VALUE

DGMGRL> show far_sync fs12c1 InconsistentLogXptProps;
INCONSISTENT LOG TRANSPORT PROPERTIES
   INSTANCE_NAME         STANDBY_NAME        PROPERTY_NAME         MEMORY_VALUE         BROKER_VALUE

DGMGRL>  show far_sync fs12c1s InconsistentLogXptProps;
INCONSISTENT LOG TRANSPORT PROPERTIES
   INSTANCE_NAME         STANDBY_NAME        PROPERTY_NAME         MEMORY_VALUE         BROKER_VALUE

DGMGRL> switchover to ent12c1s
Performing switchover NOW, please wait...
Operation requires a connection to instance "ent12c1s" on database "ent12c1s"
Connecting to instance "ent12c1s"...
Connected as SYSDBA.
New primary database "ent12c1s" is opening...
Operation requires start up of instance "ent12c1" on database "ent12c1"
Starting instance "ent12c1"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "ent12c1s"

DGMGRL> switchover to ent12c1
Performing switchover NOW, please wait...
Operation requires a connection to instance "ent12c1" on database "ent12c1"
Connecting to instance "ent12c1"...
Connected as SYSDBA.
New primary database "ent12c1" is opening...
Operation requires start up of instance "ent12c1s" on database "ent12c1s"
Starting instance "ent12c1s"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "ent12c1"
This completes setting up the DG broker for an existing Data Guard configuration with far sync instances. The section below lists some of the issues observed during testing of the setup.





Observed Issues
1. It seems that how the far sync is terminated has an effect on the failover of the archive destination. If the far sync was terminated abruptly (shutdown abort), the primary database's archive log destination fails over to the alternative path given (see the v$archive_dest output earlier). But when the far sync starts up again, the log archive destination status for the path from the far sync to the standby comes up as idle. The following output is from the far sync:
SQL> SELECT DEST_NAME,STATUS,DESTINATION,TRANSMIT_MODE FROM V$ARCHIVE_DEST WHERE DESTINATION IS NOT NULL;

DEST_NAME            STATUS    DESTINATION                              TRANSMIT_MOD
-------------------- --------- ---------------------------------------- ------------
LOG_ARCHIVE_DEST_1   VALID     USE_DB_RECOVERY_FILE_DEST                SYNCHRONOUS
LOG_ARCHIVE_DEST_2   IDLE      ent12c1stns                              ASYNCHRONOUS
STANDBY_ARCHIVE_DEST VALID     USE_DB_RECOVERY_FILE_DEST                SYNCHRONOUS
Even after leaving it for a 24-hour period the destination did not resolve itself; manual intervention was needed:
ALTER SYSTEM SET log_archive_dest_state_2='enable' SCOPE=BOTH;
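To verify that the manual fix took effect, the destination parameters can be checked directly (a sketch assuming the destination numbering used in this post):

```sql
-- Confirm destination 2 is back to ENABLE after the manual fix.
SELECT name, value
FROM   v$parameter
WHERE  name IN ('log_archive_dest_2', 'log_archive_dest_state_2');
```

The v$archive_dest status for the destination should then return to VALID once redo is shipped again.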
2. If the far sync has a clean shutdown (shutdown immediate instead of shutdown abort), then when it comes up again the log archive destination status comes up as valid. However, the failover on the primary to the alternate destination does not happen: the destination to the far sync remains deferred and the alternate stays as it is. The following output is from the primary when the far sync is shut down cleanly:
SQL> SELECT DEST_NAME,STATUS,DESTINATION,TRANSMIT_MODE FROM V$ARCHIVE_DEST WHERE DESTINATION IS NOT NULL;

DEST_NAME                      STATUS    DESTINATION                    TRANSMIT_MOD
------------------------------ --------- ------------------------------ ------------
LOG_ARCHIVE_DEST_1             VALID     USE_DB_RECOVERY_FILE_DEST      SYNCHRONOUS
LOG_ARCHIVE_DEST_2             DEFERRED  fs12c1tns                      PARALLELSYNC
LOG_ARCHIVE_DEST_3             ALTERNATE ent12c1stns                    ASYNCHRONOUS
When the far sync comes up again the deferred destination becomes valid again. This scenario can lead to data loss, as redo transport between the primary and the standby does not happen even though an alternate path was specified.
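Because a cleanly shut down far sync leaves the primary's destination deferred without failing over, it is worth watching for destinations that are down. A sketch using v$archive_dest_status on the primary:

```sql
-- Run on the primary: a DEFERRED far sync destination while the
-- alternate is still in ALTERNATE state means no redo is being
-- shipped to the standby at all.
SELECT dest_id, dest_name, status, error
FROM   v$archive_dest_status
WHERE  status <> 'INACTIVE';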

3. Increasing the protection mode was not possible with the redo routes defined with ALT and ASYNC.
DGMGRL> show database ent12c1 RedoRoutes
  RedoRoutes = '(LOCAL : FS12C1 SYNC ALT=(ent12c1s ASYNC FALLBACK))'
DGMGRL> show database ent12c1s RedoRoutes
  RedoRoutes = '(LOCAL : FS12C1s SYNC ALT=(ent12c1 ASYNC FALLBACK))'
DGMGRL> show far_sync fs12c1 RedoRoutes
  RedoRoutes = '(ent12c1 : ent12c1s ASYNC)'
DGMGRL> show far_sync fs12c1s RedoRoutes
  RedoRoutes = '(ent12c1s : ent12c1 ASYNC)'

DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
Error: ORA-16627: operation disallowed since no standby databases would remain to support protection mode
There was no such problem before, when the setup was not using the DG broker; the problem appears to be localized to the DG broker setup. Removing the alternate option on the redo routes of the primary and standby databases allowed the protection mode upgrade, even though redo between the far sync and the standby was still shipped asynchronously.
DGMGRL> EDIT DATABASE ent12c1 SET PROPERTY 'RedoRoutes' = '(LOCAL : FS12C1 SYNC)';
Property "RedoRoutes" updated
DGMGRL> EDIT DATABASE ent12c1s SET PROPERTY 'RedoRoutes' = '(LOCAL : FS12C1S SYNC)';
Property "RedoRoutes" updated
On primary
SQL> SELECT DEST_NAME,STATUS,DESTINATION,TRANSMIT_MODE FROM V$ARCHIVE_DEST WHERE DESTINATION IS NOT NULL;

DEST_NAME            STATUS    DESTINATION                    TRANSMIT_MOD
-------------------- --------- ------------------------------ ------------
LOG_ARCHIVE_DEST_1   VALID     USE_DB_RECOVERY_FILE_DEST      SYNCHRONOUS
LOG_ARCHIVE_DEST_2   VALID     fs12c1tns                      PARALLELSYNC
On far sync
SQL> SELECT DEST_NAME,STATUS,DESTINATION,TRANSMIT_MODE FROM V$ARCHIVE_DEST WHERE DESTINATION IS NOT NULL;

DEST_NAME            STATUS    DESTINATION                              TRANSMIT_MOD
-------------------- --------- ---------------------------------------- ------------
LOG_ARCHIVE_DEST_1   VALID     USE_DB_RECOVERY_FILE_DEST                SYNCHRONOUS
LOG_ARCHIVE_DEST_2   VALID     ent12c1stns                              ASYNCHRONOUS
STANDBY_ARCHIVE_DEST VALID     USE_DB_RECOVERY_FILE_DEST                SYNCHRONOUS
Increasing the protection mode works and switchover is possible without any issue:
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
Succeeded.

DGMGRL> show configuration;

Configuration - ent12c1_dgb

  Protection Mode: MaxAvailability
  Members:
  ent12c1  - Primary database
    fs12c1   - Far sync instance
      ent12c1s - Physical standby database

  Members Not Receiving Redo:
  fs12c1s  - Far sync instance

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 59 seconds ago)

DGMGRL> switchover to ent12c1s
Performing switchover NOW, please wait...
Operation requires a connection to instance "ent12c1s" on database "ent12c1s"
Connecting to instance "ent12c1s"...
Connected as SYSDBA.
New primary database "ent12c1s" is opening...
Operation requires start up of instance "ent12c1" on database "ent12c1"
Starting instance "ent12c1"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "ent12c1s"


DGMGRL> show configuration

Configuration - ent12c1_dgb

  Protection Mode: MaxAvailability
  Members:
  ent12c1s - Primary database
    fs12c1s  - Far sync instance
      ent12c1  - Physical standby database

  Members Not Receiving Redo:
  fs12c1   - Far sync instance

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 55 seconds ago)

DGMGRL> switchover to ent12c1
Performing switchover NOW, please wait...
Operation requires a connection to instance "ent12c1" on database "ent12c1"
Connecting to instance "ent12c1"...
Connected as SYSDBA.
New primary database "ent12c1" is opening...
Operation requires start up of instance "ent12c1s" on database "ent12c1s"
Starting instance "ent12c1s"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "ent12c1"
Even though the ALT+ASYNC combination didn't work, the ALT+SYNC combination allowed increasing the protection mode and switching over without any issue.
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS maxperformance;
Succeeded.

DGMGRL> show configuration;

Configuration - ent12c1_dgb

  Protection Mode: MaxPerformance
  Members:
  ent12c1  - Primary database
    fs12c1   - Far sync instance
      ent12c1s - Physical standby database

  Members Not Receiving Redo:
  fs12c1s  - Far sync instance

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 31 seconds ago)

DGMGRL> EDIT DATABASE ent12c1 SET PROPERTY RedoRoutes = '(LOCAL : FS12C1 SYNC ALT=(ent12c1s SYNC FALLBACK))';
Property "redoroutes" updated
DGMGRL> EDIT DATABASE ent12c1s SET PROPERTY RedoRoutes = '(LOCAL : FS12C1s SYNC ALT=(ent12c1 SYNC FALLBACK))' ; 
Property "redoroutes" updated

DGMGRL> show database ent12c1 RedoRoutes
  RedoRoutes = '(LOCAL : FS12C1 SYNC ALT=(ent12c1s SYNC FALLBACK))'
DGMGRL>  show database ent12c1s RedoRoutes
  RedoRoutes = '(LOCAL : FS12C1s SYNC ALT=(ent12c1 SYNC FALLBACK))'
DGMGRL> show far_sync fs12c1 RedoRoutes
  RedoRoutes = '(ent12c1 : ent12c1s ASYNC)'
DGMGRL>  show far_sync fs12c1s RedoRoutes
  RedoRoutes = '(ent12c1s : ent12c1 ASYNC)'


DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
Succeeded.

DGMGRL> show configuration

Configuration - ent12c1_dgb

  Protection Mode: MaxAvailability
  Members:
  ent12c1  - Primary database
    fs12c1   - Far sync instance
      ent12c1s - Physical standby database

  Members Not Receiving Redo:
  fs12c1s  - Far sync instance

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 50 seconds ago)

DGMGRL> switchover to ent12c1s
Performing switchover NOW, please wait...
Operation requires a connection to instance "ent12c1s" on database "ent12c1s"
Connecting to instance "ent12c1s"...
Connected as SYSDBA.
New primary database "ent12c1s" is opening...
Operation requires start up of instance "ent12c1" on database "ent12c1"
Starting instance "ent12c1"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "ent12c1s"

DGMGRL> show configuration

Configuration - ent12c1_dgb

  Protection Mode: MaxAvailability
  Members:
  ent12c1s - Primary database
    fs12c1s  - Far sync instance
      ent12c1  - Physical standby database

  Members Not Receiving Redo:
  fs12c1   - Far sync instance

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 2 seconds ago)

DGMGRL> switchover to ent12c1
Performing switchover NOW, please wait...
Operation requires a connection to instance "ent12c1" on database "ent12c1"
Connecting to instance "ent12c1"...
Connected as SYSDBA.
New primary database "ent12c1" is opening...
Operation requires start up of instance "ent12c1s" on database "ent12c1s"
Starting instance "ent12c1s"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "ent12c1"
This creates an alternate route between the primary and the standby for when the far sync fails; in this case redo shipping happens in SYNC mode. When the far sync is up:
SQL> SELECT DEST_NAME,STATUS,DESTINATION,TRANSMIT_MODE FROM V$ARCHIVE_DEST WHERE DESTINATION IS NOT NULL;

DEST_NAME                                STATUS    DESTINATION                    TRANSMIT_MOD
---------------------------------------- --------- ------------------------------ ------------
LOG_ARCHIVE_DEST_1                       VALID     USE_DB_RECOVERY_FILE_DEST      SYNCHRONOUS
LOG_ARCHIVE_DEST_2                       VALID     fs12c1tns                      PARALLELSYNC
LOG_ARCHIVE_DEST_3                       ALTERNATE ent12c1stns                    PARALLELSYNC
After the far sync fails:
SQL> SELECT DEST_NAME,STATUS,DESTINATION,TRANSMIT_MODE FROM V$ARCHIVE_DEST WHERE DESTINATION IS NOT NULL;

DEST_NAME                                STATUS    DESTINATION                    TRANSMIT_MOD
---------------------------------------- --------- ------------------------------ ------------
LOG_ARCHIVE_DEST_1                       VALID     USE_DB_RECOVERY_FILE_DEST      SYNCHRONOUS
LOG_ARCHIVE_DEST_2                       ALTERNATE fs12c1tns                      PARALLELSYNC
LOG_ARCHIVE_DEST_3                       VALID     ent12c1stns                    PARALLELSYNC
Even though this works, it is not the same as the setup that existed before the DG broker was used, and one of the key points of far sync is to avoid synchronous redo shipping between the primary and the standby over a long distance. It is also not possible to change the redo mode to ASYNC.
DGMGRL> EDIT DATABASE ent12c1 SET PROPERTY RedoRoutes = '(LOCAL : FS12C1 SYNC ALT=(ent12c1s ASYNC FALLBACK))';
Error: ORA-16900: change of LogXptMode or RedoRoutes property violates overall protection mode
An SR was raised about this issue. After some investigation Oracle came up with a patch, "Patch 20695119 MERGE REQUEST ON TOP OF DATABASE PSU 12.1.0.2.2 FOR BUGS 19399918 19571599" (only works on 12.1.0.2.2). After applying this patch the DG broker allowed increasing the protection mode, but a subsequent switchover failed with "ORA-00600: internal error code, arguments: [kghstack_underflow_internal_1], [0x2B29FF483BA0], [rfrxpts2dfp.1]". There was no resolution for this from Oracle and the SR went nowhere.
Searching MOS, it was discovered that Oracle had recently released patch 19571599 (requires a DB home with 12.1.0.2.4 applied), which was previously only available as part of the merge patch. After applying this patch it was possible to create the DG broker configuration as before.
EDIT DATABASE ent12c1 SET PROPERTY RedoRoutes = '(LOCAL : FS12C1 SYNC ALT=(ent12c1s ASYNC FALLBACK))';
EDIT DATABASE ent12c1s SET PROPERTY RedoRoutes = '(LOCAL : FS12C1s SYNC ALT=(ent12c1 ASYNC FALLBACK))';

DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
Succeeded.

DGMGRL> show configuration

Configuration - ent12c1_dgb

  Protection Mode: MaxAvailability
  Members:
  ent12c1  - Primary database
    fs12c1   - Far sync instance
      ent12c1s - Physical standby database

  Members Not Receiving Redo:
  fs12c1s  - Far sync instance

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 35 seconds ago)
With this configuration and the patches in place, when the far sync goes down the DG broker lowers the protection mode to maximum performance to reflect the new ASYNC redo shipping mode. The following entries can be seen in the standby's alert log:
Primary database is in MAXIMUM PERFORMANCE mode
Changing standby controlfile to MAXIMUM PERFORMANCE mode
When the far sync comes up again the protection mode is upgraded to maximum availability. In the far sync's alert log:
Primary database is in MAXIMUM AVAILABILITY mode
Changing standby controlfile to MAXIMUM AVAILABILITY mode
Changing standby controlfile to RESYNCHRONIZATION level
In the standby's alert log:
Primary database is in MAXIMUM PERFORMANCE mode
Changing standby controlfile to MAXIMUM AVAILABILITY mode
Changing standby controlfile to RESYNCHRONIZATION level
Even though the standby's protection level goes up and down, the primary's protection level remains at maximum availability, and this can cause warnings to appear in the DG broker configuration:
DGMGRL> show configuration
Configuration - ent12c1_dgb

  Protection Mode: MaxAvailability
  Members:
  ent12c1  - Primary database
    Warning: ORA-16629: database reports a different protection level from the protection mode

    ent12c1s - Physical standby database (alternate of fs12c1)

  Members Not Receiving Redo:
  fs12c1   - Far sync instance
    Warning: ORA-01034: ORACLE not available

  fs12c1s  - Far sync instance

Fast-Start Failover: DISABLED

Configuration Status:
WARNING   (status updated 31 seconds ago)

DGMGRL> show database ent12c1 statusreport
STATUS REPORT
       INSTANCE_NAME   SEVERITY ERROR_TEXT
                   *    WARNING ORA-16629: database reports a different protection level from the protection mode
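The mismatch behind ORA-16629 can also be confirmed directly on the primary by comparing the configured protection mode with the level currently being achieved, both of which are exposed in v$database:

```sql
-- Run on the primary: protection_mode is the configured target,
-- protection_level is what redo transport currently achieves.
-- They differ while redo is shipped ASYNC over the alternate path.
SELECT protection_mode, protection_level
FROM   v$database;
```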
Another issue observed is that at times the alternate route disappears from the log archive destination entries even though it remains set in the broker configuration. On the primary (note there is no alternate entry):
SQL> SELECT DEST_NAME,STATUS,DESTINATION,TRANSMIT_MODE FROM V$ARCHIVE_DEST WHERE DESTINATION IS NOT NULL;

DEST_NAME                      STATUS    DESTINATION                    TRANSMIT_MOD
------------------------------ --------- ------------------------------ ------------
LOG_ARCHIVE_DEST_1             VALID     USE_DB_RECOVERY_FILE_DEST      SYNCHRONOUS
LOG_ARCHIVE_DEST_2             VALID     fs12c1stns                     PARALLELSYNC
This leads to inconsistent log transport properties and failing switchovers:
DGMGRL> show database ent12c1 InconsistentLogXptProps;
INCONSISTENT LOG TRANSPORT PROPERTIES
   INSTANCE_NAME         STANDBY_NAME        PROPERTY_NAME         MEMORY_VALUE         BROKER_VALUE
         ent12c1               fs12c1           RedoRoutes                 NONE             ent12c1s
When the standby becomes primary:
DGMGRL> show database ent12c1s InconsistentLogXptProps;
INCONSISTENT LOG TRANSPORT PROPERTIES
   INSTANCE_NAME         STANDBY_NAME        PROPERTY_NAME         MEMORY_VALUE         BROKER_VALUE
        ent12c1s              fs12c1s           RedoRoutes                 NONE              ent12c1
Show database output would have entries similar to the following:
Instance(s):
    ent12c1s
      Error: ORA-16736: unable to find the destination entry of standby database "ent12c1" in V$ARCHIVE_DEST
      Warning: ORA-16715: redo transport-related property RedoRoutes of standby database "fs12c1s" is inconsistent
      Warning: ORA-16728: consistency check for property LogXptMode found ORA-16777 error
      Warning: ORA-16777: unable to find the destination entry of a standby database in V$ARCHIVE_DEST
One of the easiest ways to resolve this is to disable and re-enable redo transport on the primary:
DGMGRL>  EDIT DATABASE ent12c1s SET state='transport-off';
DGMGRL> EDIT DATABASE ent12c1s SET state='transport-on';
But this may not be possible when the protection mode is maximum availability. In such cases, simply enabling transport while it is already on works:
DGMGRL> edit database ent12c1s set state='transport-off';
Error: ORA-16627: operation disallowed since no standby databases would remain to support protection mode
Failed.

DGMGRL> edit database ent12c1s set state='transport-on';
Succeeded.
Check that the alternate route appears in the archive destinations:
SQL>  SELECT DEST_NAME,STATUS,DESTINATION,TRANSMIT_MODE FROM V$ARCHIVE_DEST WHERE DESTINATION IS NOT NULL;

DEST_NAME                      STATUS    DESTINATION                    TRANSMIT_MOD
------------------------------ --------- ------------------------------ ------------
LOG_ARCHIVE_DEST_1             VALID     USE_DB_RECOVERY_FILE_DEST      SYNCHRONOUS
LOG_ARCHIVE_DEST_2             VALID     fs12c1stns                     PARALLELSYNC
LOG_ARCHIVE_DEST_3             ALTERNATE ent12c1tns                     ASYNCHRONOUS
Continue with the switchover
DGMGRL> switchover to ent12c1;
Performing switchover NOW, please wait...
Operation requires a connection to instance "ent12c1" on database "ent12c1"
Connecting to instance "ent12c1"...
Connected as SYSDBA.
New primary database "ent12c1" is opening...
Operation requires start up of instance "ent12c1s" on database "ent12c1s"
Starting instance "ent12c1s"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "ent12c1"

Related Posts
Creating Data Guard Broker on 12c
Adding Far Sync Instances to Existing Data Guard Configuration
11gR2 RAC to RAC Data Guard
11gR2 Standalone Data Guard (with ASM and Role Separation)

Useful metalink notes
Cascaded Standby Databases in Oracle 12c [ID 1542969.1]
Create Configuration Failing with ORA-16698 [ID 1582179.1]
Data Guard 12c New Feature: Far Sync Standby [ID 1565071.1]
12c Create Dataguard Broker Configuration - DGMGRL [ID 1583588.1]
Best Practices for Corruption Detection, Prevention, and Automatic Repair - in a Data Guard Configuration [ID 1302539.1]
Configuring Active Data Guard Far Sync Using a Terminal Standby Database as Alternate Log Archive Destination in Data Guard Broker [ID 1918316.1]