Tuesday, November 30, 2010

RAC to Single Instance Physical Standby

Oracle 11g's new active database duplication feature can also be used to create standby databases in Data Guard configurations. This blog uses the same RAC configuration used for active database duplication to create a single instance physical standby database.

The RAC instances are:
olsnodes -n
rac4    1
rac5    2
Data files and logfiles are in two ASM diskgroups called +DATA and +FLASH.

The physical standby database will have its files in the local file system and will be referred to as stdby throughout the blog.

1. Install Oracle Enterprise Edition software on the host where the physical standby will reside. In addition, create the necessary directory structures, such as adump ($ORACLE_BASE/admin/sid name/adump) and directories for controlfiles, datafiles and onlinelogs. Although the configuration uses OMF once the setup is completed, these directories are required at the beginning to complete the setup (not required if some other directory path is referenced instead of OMF). For this configuration the following directories were created
cd /data/oradata
mkdir STDBY
cd STDBY
mkdir controlfile  datafile  onlinelog

cd /data/flash_recovery
mkdir STDBY
cd STDBY
mkdir onlinelog
2. Create TNS entries in both RAC nodes' tnsnames.ora files
STDBYTNS =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = stdby)
)
)
3. Create a TNS entry in the standby's tnsnames.ora file
PRIMARYTNS =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-host)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = rac11g2)
)
)
In this case only one primary instance will be used to fetch archive log gaps, and redo will be sent to that instance when a switchover happens. In a RAC standby multiple instances can receive redo, but only one instance can apply it.

4. Add a static listener entry to the standby host's listener.ora
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = stdby)
(SID_NAME = stdby)
(ORACLE_HOME = /opt/app/oracle/product/11.2.0/ent)
)
)
5. Enable force logging on the primary
SQL> alter database force logging;
Also create standby redo log files for each thread on the primary. These should be the same size as the online redo log files
SQL> alter database add standby logfile thread 1 size 52428800;
or
SQL> alter database add standby logfile thread 1;
SQL> alter database add standby logfile thread 2;
There should be at least one more standby redo log group per thread than there are online redo log groups.
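The group counts can be checked with queries like the following, a sketch run on the primary; v$log holds the online groups and v$standby_log the standby groups:

```sql
-- online redo log groups per thread
select thread#, count(*) online_groups from v$log group by thread#;

-- standby redo log groups per thread (should be one more per thread)
select thread#, count(*) standby_groups from v$standby_log group by thread#;
```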

6. Add Data Guard related initialization parameters to the primary. These include information about the databases involved in the Data Guard configuration, redo transport parameters such as SYNC, ASYNC, AFFIRM and NOAFFIRM, fetch archive log (FAL) client and server values, and datafile/logfile name conversions.
alter system set log_archive_config='dg_config=(rac11g2,stdby)' scope=both ;
alter system set log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=rac11g2' scope=both;
alter system set log_archive_dest_2='service=STDBYTNS LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=stdby' scope=both;
alter system set log_archive_dest_state_1='enable' scope=both;
alter system set log_archive_dest_state_2='enable' scope=both;
alter system set fal_server='STDBYTNS' scope=both;
alter system set fal_client='PRIMARYTNS' scope=both;
alter system set log_archive_max_processes=10 scope=both;
alter system set db_file_name_convert='/data/oradata/STDBY','+DATA/rac11g2' scope=spfile;
alter system set log_file_name_convert='/data/flash_recovery/STDBY','+FLASH/rac11g2' scope=spfile;
alter system set standby_file_management='AUTO' scope=both;
7. Copy the password file to the standby host's $ORACLE_HOME/dbs directory and rename it. Assuming the password file has been copied to the standby host
mv orapwrac11g22 orapwstdby
8. Create a pfile with db_name as the only entry.
*.db_name='stdby'
9. Start the standby instance in nomount mode using the pfile mentioned above
startup nomount pfile=initstdby.ora
10. On the primary, using RMAN, connect to the primary database as target and the standby as auxiliary, and run the active duplication command to create the standby. Some of the RAC-only parameters are reset while others are set to reflect the standby database after a switchover.
rman target / auxiliary sys/rac11g2db@stdbytns

duplicate target database for standby from active database
spfile
parameter_value_convert 'rac11g2','stdby','RAC11G2','stdby'
set db_unique_name='stdby'
set db_file_name_convert='+DATA/rac11g2','/data/oradata/STDBY','+DATA/rac11g2/tempfile','/data/oradata/STDBY'
set log_file_name_convert='+FLASH/rac11g2','/data/flash_recovery/STDBY','+DATA/rac11g2','/data/flash_recovery/STDBY'
set control_files='/data/oradata/stdby/controlfile/control01.ctl'
set log_archive_max_processes='5'
set fal_client='STDBYTNS'
set fal_server='PRIMARYTNS'
SET cluster_database='false'
reset REMOTE_LISTENER
reset local_listener
set db_create_file_dest  = '/data/oradata'
set db_recovery_file_dest  = '/data/flash_recovery'
set standby_file_management='AUTO'
set log_archive_config='dg_config=(rac11g2,stdby)'
set log_archive_dest_2='service=PRIMARYTNS LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=rac11g2'
set log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=stdby';
This will start the duplication and creation of the physical standby
Starting Duplicate Db at 30-NOV-10
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=10 device type=DISK

contents of Memory Script:
{
backup as copy reuse
targetfile  '/opt/app/oracle/product/11.2.0/db_1/dbs/orapwrac11g22' auxiliary format
'/opt/app/oracle/product/11.2.0/ent/dbs/orapwstdby'   ;
}
executing Memory Script
...
...
...
Finished Duplicate Db at 30-NOV-10
If any parameter setting is mismatched and still refers to ASM for files, then the duplication process will terminate with
ERROR: slave communication error with ASM; terminating process 21416
Errors in file /opt/app/oracle/diag/rdbms/stdby/stdby/trace/stdby_lgwr_21416.trc:
Mon Nov 29 17:40:49 2010
PMON (ospid: 21396): terminating the instance due to error 470
Instance terminated by PMON, pid = 21396
11. Once the duplication has completed successfully, start redo apply on the standby with
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT
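Redo apply progress can be observed with a query like the following, a sketch run on the standby; v$managed_standby shows the redo transport (RFS) and apply (MRP0) processes:

```sql
-- MRP0 should appear once redo apply has been started
select process, status, thread#, sequence#
from v$managed_standby
where process in ('MRP0','RFS');
```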
12. Force a log switch on the primary
alter system switch logfile;
or
alter system archive log current;
13. Check whether the logs are applied on the standby with
select thread#,sequence#,applied from v$archived_log;

THREAD#  SEQUENCE# APPLIED
---------- ---------- ---------
1        108 YES
1        109 YES
1        110 YES
1        111 YES
2         98 YES
2         99 YES
2        100 YES
1        112 IN-MEMORY


Further checks could be performed by creating a tablespace on the primary and verifying that the change is reflected on the standby.
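For example, a sketch of such a check; the tablespace name dgtest is arbitrary:

```sql
-- on the primary: create a small tablespace (OMF picks the file name)
create tablespace dgtest datafile size 10m;
alter system archive log current;

-- on the standby: the new tablespace should appear once the redo is applied
select name from v$tablespace where name = 'DGTEST';
```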

The spfile created in this scenario will contain RAC instance-level parameter values, similar to active duplication.

Alternatively, a pfile for the standby could be created with all the necessary parameter entries and used to start the standby instance in nomount state. The duplicate command can then be run with the spfile clause omitted.

In this scenario the pfile created in step 8 would be
*.audit_file_dest='/opt/app/oracle/admin/stdby/adump'
*.audit_trail='OS'
*.compatible='11.2.0.0.0'
*.control_files='/data/oradata/STDBY/controlfile/control01.ctl'
*.db_block_size=8192
*.db_create_file_dest='/data/oradata'
*.db_domain='domain.net'
*.db_name='rac11g2'
*.db_unique_name='stdby'
*.db_file_name_convert='+DATA/rac11g2','/data/oradata/STDBY','+DATA/rac11g2/tempfile','/data/oradata/STDBY'
*.log_file_name_convert='+FLASH/rac11g2','/data/flash_recovery/STDBY','+DATA/rac11g2','/data/flash_recovery/STDBY'
*.db_recovery_file_dest='/data/flash_recovery'
*.db_recovery_file_dest_size=40705720320
*.diagnostic_dest='/opt/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=stdbyXDB)'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_config='dg_config=(rac11g2,stdby)'
*.log_archive_dest_2='service=PRIMARYTNS LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=rac11g2'
*.standby_file_management='AUTO'
*.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=stdby'
*.open_cursors=300
*.pga_aggregate_target=1326448640
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=3707764736
*.undo_tablespace='UNDOTBS1'
*.fal_client='STDBYTNS'
*.fal_server='PRIMARYTNS'
The duplication command in step 10 would be
duplicate target database for standby from active database;
This will not create an spfile for the standby, so at the end of the duplication an spfile should be created explicitly. The complete output for this last scenario is given below
rman target / auxiliary sys/rac11g2db@stdbytns

Recovery Manager: Release 11.2.0.1.0 - Production on Tue Nov 30 13:02:16 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: RAC11G2 (DBID=371695083)
connected to auxiliary database: RAC11G2 (not mounted)

RMAN> duplicate target database for standby from active database;

Starting Duplicate Db at 30-NOV-10
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=10 device type=DISK

contents of Memory Script:
{
backup as copy reuse
targetfile  '/opt/app/oracle/product/11.2.0/db_1/dbs/orapwrac11g22' auxiliary format
'/opt/app/oracle/product/11.2.0/ent/dbs/orapwstdby'   ;
}
executing Memory Script

Starting backup at 30-NOV-10
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=152 instance=rac11g22 device type=DISK
Finished backup at 30-NOV-10

contents of Memory Script:
{
backup as copy current controlfile for standby auxiliary format  '/data/oradata/stdby/control01.ctl';
}
executing Memory Script

Starting backup at 30-NOV-10
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
copying standby control file
output file name=/opt/app/oracle/product/11.2.0/db_1/dbs/snapcf_rac11g22.f tag=TAG20101130T130231 RECID=33 STAMP=736434151
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 30-NOV-10

contents of Memory Script:
{
sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database

contents of Memory Script:
{
set newname for tempfile  1 to
"/data/oradata/stdby/tempfile/temp.263.732796409";
switch clone tempfile all;
set newname for datafile  1 to
"/data/oradata/stdby/datafile/system.256.732796287";
set newname for datafile  2 to
"/data/oradata/stdby/datafile/sysaux.257.732796289";
set newname for datafile  3 to
"/data/oradata/stdby/datafile/undotbs1.258.732796289";
set newname for datafile  4 to
"/data/oradata/stdby/datafile/users.259.732796291";
set newname for datafile  5 to
"/data/oradata/stdby/datafile/undotbs2.264.732796603";
backup as copy reuse
datafile  1 auxiliary format
"/data/oradata/stdby/datafile/system.256.732796287"   datafile
2 auxiliary format
"/data/oradata/stdby/datafile/sysaux.257.732796289"   datafile
3 auxiliary format
"/data/oradata/stdby/datafile/undotbs1.258.732796289"   datafile
4 auxiliary format
"/data/oradata/stdby/datafile/users.259.732796291"   datafile
5 auxiliary format
"/data/oradata/stdby/datafile/undotbs2.264.732796603"   ;
sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

renamed tempfile 1 to /data/oradata/stdby/tempfile/temp.263.732796409 in control file

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting backup at 30-NOV-10
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=+DATA/rac11g2/datafile/system.256.732796287
output file name=/data/oradata/stdby/datafile/system.256.732796287 tag=TAG20101130T130241
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:05
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=+DATA/rac11g2/datafile/sysaux.257.732796289
output file name=/data/oradata/stdby/datafile/sysaux.257.732796289 tag=TAG20101130T130241
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:55
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=+DATA/rac11g2/datafile/undotbs1.258.732796289
output file name=/data/oradata/stdby/datafile/undotbs1.258.732796289 tag=TAG20101130T130241
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=+DATA/rac11g2/datafile/undotbs2.264.732796603
output file name=/data/oradata/stdby/datafile/undotbs2.264.732796603 tag=TAG20101130T130241
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=+DATA/rac11g2/datafile/users.259.732796291
output file name=/data/oradata/stdby/datafile/users.259.732796291 tag=TAG20101130T130241
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 30-NOV-10

sql statement: alter system archive log current

contents of Memory Script:
{
switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=33 STAMP=736434316 file name=/data/oradata/stdby/datafile/system.256.732796287
datafile 2 switched to datafile copy
input datafile copy RECID=34 STAMP=736434316 file name=/data/oradata/stdby/datafile/sysaux.257.732796289
datafile 3 switched to datafile copy
input datafile copy RECID=35 STAMP=736434316 file name=/data/oradata/stdby/datafile/undotbs1.258.732796289
datafile 4 switched to datafile copy
input datafile copy RECID=36 STAMP=736434316 file name=/data/oradata/stdby/datafile/users.259.732796291
datafile 5 switched to datafile copy
input datafile copy RECID=37 STAMP=736434316 file name=/data/oradata/stdby/datafile/undotbs2.264.732796603
Finished Duplicate Db at 30-NOV-10

Not related to the title, but for a single instance to single instance physical standby through active database duplication, assuming the primary is ent11g2, run on the primary
alter system set log_archive_config='dg_config=(ent11g2,stdby)' scope=both;
alter system set log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=ent11g2' scope=both;
alter system set log_archive_dest_2='service=STDBYTNS LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=stdby' scope=both;
alter system set log_archive_dest_state_1='enable' scope=both;
alter system set log_archive_dest_state_2='enable' scope=both;
alter system set fal_server='STDBYTNS' scope=both;
alter system set fal_client='PRIMARYTNS' scope=both;
alter system set log_archive_max_processes=10 scope=both;
alter system set db_file_name_convert='/data/oradata/stdby/','/data/oradata/ENT11G2' scope=spfile;
alter system set log_file_name_convert='/data/flash_recovery/STDBY','/data/flash_recovery/ENT11G2' scope=spfile;
alter system set standby_file_management='AUTO' scope=both;
The RMAN duplication command for the standby
duplicate target database for standby from active database
spfile
parameter_value_convert 'ent11g2','stdby','ENT11G2','stdby'
set db_unique_name='stdby'
set db_file_name_convert='/ENT11G2','/stdby'
set log_file_name_convert='/ENT11G2','/stdby'
set control_files='/data/oradata/stdby/controlfile/control01.ctl'
set log_archive_max_processes='5'
set fal_client='STDBYTNS'
set fal_server='PRIMARYTNS'
set standby_file_management='AUTO'
set log_archive_config='dg_config=(ent11g2,stdby)'
set log_archive_dest_2='service=PRIMARYTNS LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=ent11g2';

Thursday, November 25, 2010

MongoDB CRUD

MongoDB is an open source NoSQL database (used by the open source social networking site Diaspora). This blog is about basic Create, Read, Update and Delete (CRUD) operations using Java.

1.Installing

Download MongoDB (in this case the Linux version) and untar it.

Create the directory /data/db. If you don't want to, or can't, create the directory under the root directory, then create it somewhere else (e.g. /opt/data/db) and create a symbolic link to it in the / directory.

Start the MongoDB daemon with
mongodb-linux-x86_64-1.6.4]$ bin/mongod
bin/mongod --help for help and startup options
Thu Nov 25 16:43:57 MongoDB starting : pid=5221 port=27017 dbpath=/data/db/ 64-bit
Thu Nov 25 16:43:57 db version v1.6.4, pdfile version 4.5
Thu Nov 25 16:43:57 git version: 4f5c02f8d92ff213b71b88f5eb643b7f62b50abc
Thu Nov 25 16:43:57 sys info: Linux domU-12-31-39-06-79-A1 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64
BOOST_LIB_VERSION=1_41
Thu Nov 25 16:43:57 [initandlisten] waiting for connections on port 27017
Thu Nov 25 16:43:57 [websvr] web admin interface listening on port 28017

Thu Nov 25 16:44:15 [initandlisten] connection accepted from 127.0.0.1:52961 #1
Thu Nov 25 16:45:21 allocating new datafile /data/db/asangadb.ns, filling with zeroes...
Thu Nov 25 16:45:21 done allocating datafile /data/db/asangadb.ns, size: 16MB, took 0.029 secs
Thu Nov 25 16:45:21 allocating new datafile /data/db/asangadb.0, filling with zeroes...
Thu Nov 25 16:45:21 done allocating datafile /data/db/asangadb.0, size: 64MB, took 0.123 secs
Thu Nov 25 16:45:21 [conn1] building new index on { _id: 1 } for asangadb.asangadb
Thu Nov 25 16:45:21 allocating new datafile /data/db/asangadb.1, filling with zeroes...
From another command shell connect to the DB with
bin/mongo
MongoDB shell version: 1.6.4
connecting to: test
That's it, now the DB is ready to receive connections.

2. Creating & Connecting

In MongoDB databases aren't created explicitly. A database is lazily created when it is used for the very first time.

Download the drivers for Java from the MongoDB site.

To connect to the DB specify the host name (ip) and port. (There are other ways to connect as well).
Mongo m = new Mongo("192.168.0.76", 27017);
DB db = m.getDB( "pradeepdb" );
If required it is also possible to secure the connection with a password.
In this case a database called "pradeepdb" will be created if it is not present; if it is present, a connection will be made to it. Whenever a new database is created, the following can be seen in the daemon (mongod) output
Thu Nov 25 17:10:30 [initandlisten] connection accepted from 192.168.0.29:52274 #11
Thu Nov 25 17:10:30 allocating new datafile /data/db/pradeepdb.ns, filling with zeroes...
Thu Nov 25 17:10:30 done allocating datafile /data/db/pradeepdb.ns, size: 16MB, took 0.03 secs
Thu Nov 25 17:10:30 allocating new datafile /data/db/pradeepdb.0, filling with zeroes...
Thu Nov 25 17:10:30 done allocating datafile /data/db/pradeepdb.0, size: 64MB, took 0.121 secs
Thu Nov 25 17:10:30 allocating new datafile /data/db/pradeepdb.1, filling with zeroes...
Thu Nov 25 17:10:30 [conn11] building new index on { _id: 1 } for pradeepdb.pradeepdb
Thu Nov 25 17:10:30 [conn11] done for 0 records 0.012secs
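The password option mentioned above could be sketched as follows. This assumes a user has already been added to the database (for example with db.addUser in the mongo shell); the user name and password here are made up:

```java
import com.mongodb.DB;
import com.mongodb.Mongo;

public class MongoAuth {
    public static void main(String[] args) throws Exception {
        Mongo m = new Mongo("192.168.0.76", 27017);
        DB db = m.getDB("pradeepdb");
        // authenticate returns true only if the credentials are accepted
        boolean ok = db.authenticate("appuser", "secret".toCharArray());
        System.out.println("authenticated: " + ok);
    }
}
```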
In the command shell switch between databases using
use dbname
3. Inserting

By default a collection is created with the same name as the database name. If required, a separate collection could be created. Again, not explicitly; if one doesn't exist it will be created on first use.
To create or get the relevant collection use the db reference created earlier
 DBCollection col = db.getCollection("pradeep2");
To insert a key-value into this collection
DBObject obj = new BasicDBObject();
obj.put("pradeep", 200);
col.insert(obj);
To insert values in the command shell
db.pradeep2.insert({"pradeep":500});
4.Reading

To read the first object in a collection
DBObject obj =  col.findOne();
To read all the objects in a collection
DBCursor cur = col.find();
while(cur.hasNext()) {

System.out.println(cur.next());
}
To read one specific object in a collection, first create the object with desired properties and then query the collection using that object.
BasicDBObject query = new BasicDBObject();
query.put("asanga", 100);
DBCursor cur = col.find(query);
while(cur.hasNext()) {

System.out.println(cur.next());
}
On the command shell the following could be used to get the first value and all the values
db.pradeep2.findOne();
db.pradeep2.find();
5. Updating

To update, create two objects that represent the past image and the new image. The past image will be updated (replaced) by the new image.
  DBObject oldObj = new BasicDBObject();
oldObj.put("asanga", 100);

DBObject newObj = new BasicDBObject();
newObj.put("asanga", 200);

col.update(oldObj, newObj);
On the command prompt use
db.pradeep2.update({"asanga":200},{"asanga":500});
6. Deleting

Create an object to represent the object being deleted and use the remove function to delete it from the database.
DBObject obj = new BasicDBObject();
obj.put("asanga", 500);

col.remove(obj);
On command shell use
db.pradeep2.remove({"asanga":500});
Follow the MongoDB Java Tutorial for more.

SQL Snippet : Inter Instance Locking

Identify the inter instance locks in a RAC environment.
Select (Select locking.sid||','||se.serial# From Gv$session Se Where Se.Sid=Locking.Sid And Se.Inst_Id = Locking.Inst_Id) As Locking_Session_serial,
Locking.Type As holding_lock_Type,
Locking.Inst_Id As Lock_Instance,
Decode (Locking.Lmode,0,'None',1,'Null',2,'Row-S',3, 'Row-X',4,'Share',5,'S/Row',6,'Exclusive') Locking_Mode,
(Select waiting.sid||','||se.serial# From Gv$session Se Where Se.Sid=Waiting.Sid And Se.Inst_Id = Waiting.Inst_Id) As Waiting_Session,
Waiting.Type As Wait_Lock_Type,
Waiting.Inst_Id As Wait_Instance,
DECODE(waiting.REQUEST,0,'None',1,'Null',2,'Row-S',3, 'Row-X',4,'Share',5,'S/Row',6,'Exclusive') REQUEST_mode,
(Select Prev_Sql_Id||' @ '||Machine From Gv$session Se Where Se.Sid=Locking.Sid And Se.Inst_Id = Locking.Inst_Id) Locking_Sql_And_Machine,
(Select Sql_Id||' @ '||Machine From Gv$session Se Where Se.Sid=Waiting.Sid And Se.Inst_Id = Waiting.Inst_Id) Waiting_Sql_and_machine,
Case When Locking.Type='TM' Then
(select object_name from dba_objects where object_id=locking.id1)
end as TM_lock_Object
FROM Gv$lock Locking,
Gv$lock Waiting
WHERE Locking.Id1 = Waiting.Id1
And Locking.Id2 = Waiting.Id2
And Locking.Block > 0
And Waiting.Request >0;
The above gives the sid and serial# of the locking and waiting sessions (making it easy to kill them), the client machine names of the sessions in the lock situation, as well as the SQL IDs and instance IDs to identify the SQL involved in the lock.

There's also the gv$global_blocked_locks view, which gives global locks.
SELECT g.inst_id INST,
s.sid,
g.type,
s.username,
s.serial#,
s.process,
DECODE (LMODE,0,'None',1,'Null',2,'Row-S',3, 'Row-X',4,'Share',5,'S/Row',6,'Exclusive') LMODE,
DECODE(REQUEST,0,'None',1,'Null',2,'Row-S',3, 'Row-X',4,'Share',5,'S/Row',6,'Exclusive') REQUEST
FROM gv$global_blocked_locks g,
gv$session s
WHERE g.sid = s.sid
And G.Inst_Id = S.Inst_Id
ORDER BY state;
To test this, create the tables listed in the TM Enq post.
From instance 1 run
update y set c = 8 where d=10;
and from instance 2 run
update x set a = 12 where a = 7;
The second session will hang. From a third session run the first SQL above to identify the locks and objects involved.



Kill blocking sessions across all instances
Select (Select 'alter system kill session '''||locking.sid||','||se.serial#||',@'||locking.inst_id||''';' 
From Gv$session Se Where Se.Sid=Locking.Sid And Se.Inst_Id = Locking.Inst_Id) As kill_stmt
FROM Gv$lock Locking,
  Gv$lock Waiting
WHERE Locking.Id1   = Waiting.Id1
And Locking.Id2     = Waiting.Id2
And Locking.Block   > 0
And Waiting.Request >0;


KILL_STMT
--------------------------------------------------
alter system kill session '762,397,@2';
alter system kill session '16,8893,@1';

Monday, November 22, 2010

Invisible Indexes and TM enq: Locks

Indexes could be created to prevent locking situations when foreign key columns are un-indexed.
However, introducing an index could alter execution plans, and the resulting plans may not be desirable.
In 11g indexes could be made invisible, thus preventing the optimizer from using them (unless optimizer_use_invisible_indexes is set to true).
But this would defeat the purpose if the indexes were created to prevent foreign key column related locking situations.
create table x (a number primary key, b number);
create table y (c number,d number, foreign key (c) references x(a));

begin
for i in 1 .. 10
loop
insert into x values (i, i + 10);
end loop;
end;
/

begin
for i in 1 .. 5
loop
insert into y values (i, i + 5);
end loop;
end;
/

commit;
Create two sessions with autocommit off and run on session 1
insert into x values (11,21);
on session 2
update x set a = 12 where a = 7;
In this case session 2 will hang.

Now create an index on y.c
create index aidx on y(c);
Run the same scenario as above and this time both sessions will get executed without hanging.

Make the index invisible
alter index aidx invisible;
Run the above two-session scenario again and the second session will hang.

Even though invisible indexes are said to be maintained during DML, it appears they cannot be used to prevent such locking situations.

If the index is made visible from the first session while the second session is still hanging, the second session will come out of the hang and proceed as normal.

This was tested on 11gR2 (11.2.0.1.3)

Friday, November 19, 2010

SQL Snippet : Foreign Key Columns not Inside an Index

--select 'create index '||child_table||'_aidx on '||child_table||'('||child_table_column||') Tablespace Tbxindexes Online Nologging Compute Statistics;' from (
SELECT p.table_name AS parent_table,
pc.column_name AS parent_table_column,
C.Table_Name AS Child_Table,
Cc.Column_Name AS Child_Table_Column ,
C.R_Constraint_Name As Parent_Constraint_Name,
C.Constraint_Name As Child_Constraint_Name,
cc.position
FROM user_constraints p,
user_constraints c,
user_cons_columns pc,
user_cons_columns cc
WHERE p.constraint_name = c.r_constraint_name
AND p.constraint_name = pc.constraint_name
AND pc.position = cc.position
And C.Constraint_Name = Cc.Constraint_Name
And C.Table_Name = '<<child_table_name_here>>'
--And C.Table_Name like '<<child_table_name_here>>_%'
And Not Exists (Select 1 from user_ind_columns Inds
Where Inds.Table_Name = C.Table_Name
And Inds.Column_Name=Cc.Column_Name
And Inds.Column_Position = Cc.Position)
Order By Parent_Table,child_table,child_constraint_name, Position;
--);
Useful in situations where foreign keys need to be indexed.

Thursday, November 11, 2010

RDA Highlights

RDA could be downloaded from metalink note 314422.1, which also has other useful document links.

Setting up rda
rda.sh -S
Listing all modules, profiles, etc.
rda.sh -L
Listing Profiles
rda.sh -L Profile
Listing Modules
rda.sh -L Module
Setting up RDA for cluster

Check ssh between nodes
rda.sh -T ssh
Use the Rac profile to do the initial setup
rda.sh -p Rac
Add the remote nodes to cluster set up
rda.sh -vX Remote setup_cluster
This command will end with the following
RAC Setup Summary
----------------------------
Nodes:
. NOD001  rac1/rac10g21
. NOD002  rac2/rac10g22
2 nodes found
----------------------------
After the setup, the list of nodes could be displayed with
rda.sh -vX Remote list
Nodes:
. NOD001  rac1/rac10g21
. NOD002  rac2/rac10g22
2 nodes found
Run RDA from one node to collect information on all nodes
rda.sh -v -e
Each node's information is visible under the Remote Data Collection => Collected Data section. Any problems could be diagnosed with additional tracing
./rda.sh -v -e REMOTE_TRACE=1
When the command runs it will log information about each operation on each node, similar to below
....
NOD001>         Setting up ...
NOD002> bash: /home/oracle/rda/rda.sh: No such file or directory
NOD001>         Collecting diagnostic data ...
NOD001> ------------------------------------------------
NOD001> RDA Data Collection Started 11-Nov-2010 14:41:07
NOD001> ------------------------------------------------
...
...
NOD002>         Setting up ...
NOD002>         Collecting diagnostic data ...
NOD002> ------------------------------------------------
NOD002> RDA Data Collection Started 11-Nov-2010 14:41:15
NOD002> ------------------------------------------------
Health Check validation
perl rda.pl -T hcve
Useful metalink notes

Note 314422.1 Remote Diagnostic Agent (RDA) 4.0 - Overview
Note 359395.1 Remote Diagnostic Agent (RDA) 4 - RAC Cluster Guide
Note 330363.1 Remote Diagnostic Agent (RDA) 4 - FAQ
Note 330344.1 Remote Diagnostic Agent (RDA) 4.0 - Training
Note 330362.1 Remote Diagnostic Agent (RDA) 4 - Troubleshooting Guide
Note 250262.1 RDA 4 - Health Check / Validation Engine Guide

Warning of HWaddr when network restarted with bonding

The following warning could be observed in /var/log/messages when the network is restarted
Nov  4 08:18:09 db1 kernel: bonding: bond0: enslaving eth1 as an active interface with an up link.
Nov  4 08:18:12 db1 kernel: bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex
Nov  4 08:18:12 db1 kernel: bnx2: eth1 NIC Copper Link is Up, 1000 Mbps full duplex
Nov  4 08:22:55 db1 kernel: bonding: bond0: Removing slave eth0
Nov  4 08:22:55 db1 kernel: bonding: bond0: Warning: the permanent HWaddr of eth0 - F0:4D:A2:06:BC:7B - is still in use by bond0.
Set the HWaddr of eth0 to a different address to avoid conflicts.
Nov  4 08:22:55 db1 kernel: bonding: bond0: releasing active interface eth0
Nov  4 08:22:55 db1 kernel: bonding: bond0: Removing slave eth1
Nov  4 08:22:55 db1 kernel: bonding: bond0: releasing active interface eth1
Nov  4 08:22:55 db1 kernel: ADDRCONF(NETDEV_UP): bond0: link is not ready
Nov  4 08:22:55 db1 kernel: bonding: bond0: Adding slave eth0.
According to RedHat support document DOC-37419 this is only a warning and doesn't indicate an issue.

It could be avoided by adding a MACADDR to the bonding interface. If the bonding interface is bond0, then adding the following line to ifcfg-bond0 will make the warning disappear.
MACADDR=AA:BB:CC:11:22:33
The MAC address must be unique to that network segment.

The root cause is that when a MAC address is not defined, the bonding interface uses the MAC address of one of its slaves, and when that slave interface goes down the bonding interface still prefers to use the same MAC address even though the other slave is now the active one.
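The permanent and in-use addresses can be compared by inspecting the bonding state, a sketch that requires a host with an active bond0 interface:

```shell
# shows each slave and its permanent HW addr alongside the current addresses
grep -iE 'slave interface|hw addr' /proc/net/bonding/bond0
# bond0's own current MAC address
ip link show bond0
```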