Sunday, December 7, 2008

SQL*Plus Input Output

Input to sqlplus

set verify off
set serveroutput on
accept new_dept_name char prompt 'Enter new department name '
declare
dept_name varchar2(100);
begin
dept_name := '&new_dept_name'; -- assign the value entered by the user (quotes needed for a character value)
end;
/



SET AUTOPRINT ON
variable maximum_department_number number
declare
max_dept_num number;
begin
select max(deptno) into max_dept_num from dept;
:maximum_department_number := max_dept_num; -- pl/sql value assigned to sqlplus variable
end;
/



SET AUTOPRINT ON prints the SQL*Plus bind variable automatically at the end of the execution.
To print it explicitly

print maximum_department_number

Thursday, August 28, 2008

SQL Design Patterns

Same table referred to twice in a multi-level subquery.

SELECT calc_scheme.scheme_id
FROM calc_scheme
WHERE NOT EXISTS
(SELECT 'PRODUCT'
FROM calc_rule_product
WHERE calc_rule_product.scheme_id = calc_scheme.scheme_id)
OR EXISTS
(SELECT 'PRODUCT'
FROM calc_rule_product
WHERE calc_rule_product.scheme_id = calc_scheme.scheme_id
AND 'FLT' = calc_rule_product.code
AND 'FLT' IS NOT NULL)


Becomes

SELECT calc_scheme.scheme_id
FROM calc_scheme,
(SELECT scheme_id,
code
FROM calc_rule_product)
x
WHERE (calc_scheme.scheme_id = x.scheme_id(+)
AND x.rowid IS NULL) OR (x.scheme_id = calc_scheme.scheme_id
AND 'FLT' = x.code
AND 'FLT' IS NOT NULL)



For WHERE EXISTS single table sub-queries, try a Semi-Join. If this doesn't work, try rewriting the query as a join

SELECT *
FROM table_a a
WHERE EXISTS (
SELECT 1
FROM table_b b, table_c c
WHERE b.key2 = c.key2
AND b.key1 = a.key1);


becomes

SELECT a.*
FROM table_a a
, ( SELECT DISTINCT b.key1
FROM table_b b, table_c c
WHERE b.key2 = c.key2
) d
WHERE a.key1 = d.key1



For WHERE NOT EXISTS sub-queries, try re-writing the SQL as an outer join with a b.ROWID IS NULL clause. eg.

SELECT a.*
FROM table_a a
WHERE NOT EXISTS (
SELECT 1
FROM table_b b
WHERE b.col1 = a.col1)

becomes

SELECT a.*
FROM table_a a , table_b b
WHERE a.col1 = b.col1 (+)
AND b.rowid IS NULL



For IN and NOT IN sub-queries, eliminate correlated sub-queries, because they must use a Nested Loops join. eg.
SELECT a.*
FROM table_a a
WHERE col2 IN (
SELECT col2
FROM table_b b
WHERE b.col1 = a.col1)

becomes

SELECT a.*
FROM table_a a
WHERE (col1, col2) IN (
SELECT col1, col2
FROM table_b)



For IN sub-queries that are not correlated, try an ORDERED hint and a USE_HASH or USE_MERGE hint. eg:
SELECT /*+ ORDERED USE_HASH(a)*/ *
FROM table_a a
WHERE col1 IN (
SELECT col1
FROM table_b b)

In this way, Oracle can run the outer query and the sub-query independently and perform a hash join.


For correlated col = (sub-query) sub-queries, Oracle must use a Filter. Try to re-write as a table join. eg.
SELECT a.*
FROM table_a a
WHERE col2 = (
SELECT max(col2)
FROM table_b
WHERE b.col1 = a.col1)


becomes

SELECT a.*
FROM table_a a,
( SELECT col1, max(col2) max_col2
FROM table_b
GROUP BY col1
) b
WHERE a.col1 = b.col1
AND a.col2 = b.max_col2


Tuesday, August 26, 2008

NOLOGGING

NOLOGGING tablespaces
For some database operations, you can control whether the database generates redo records. Without redo, no media recovery is possible. However, suppressing redo
generation can improve performance, and may be appropriate for easily recoverable operations.

An example of such an operation is a CREATE TABLE...AS SELECT statement, which can be repeated in case of database or instance failure. Specify the NOLOGGING clause in the CREATE TABLESPACE statement if you wish to suppress redo when these operations are performed for objects within the tablespace.

If you do not include this clause, or if you specify LOGGING instead, then the database generates redo when changes are made to objects in the tablespace. Redo is never generated for temporary segments or in temporary tablespaces, regardless of the logging attribute. The logging attribute specified at the tablespace level is the default attribute for objects created within the tablespace. You can override this default logging attribute by specifying LOGGING or NOLOGGING at the schema object level--for example, in a CREATE TABLE statement.
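For example (the tablespace and datafile names below are only placeholders), the tablespace-level default and an object-level override would look something like this:

CREATE TABLESPACE nologging_ts
DATAFILE '/u01/oradata/db1/nologging_ts01.dbf' SIZE 100M
NOLOGGING;

-- this table inherits the NOLOGGING attribute from the tablespace
CREATE TABLE t1 (id NUMBER) TABLESPACE nologging_ts;

-- this table overrides the tablespace default at the object level
CREATE TABLE t2 (id NUMBER) TABLESPACE nologging_ts LOGGING;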

If you have a standby database, NOLOGGING mode causes problems with the availability and accuracy of the standby database. To overcome this problem, you can specify FORCE LOGGING mode. When you include the FORCE LOGGING clause in the CREATE TABLESPACE statement, you force the generation of redo records for all operations that make changes to objects in a tablespace. This overrides any specification made at the object level.

If you transport a tablespace that is in FORCE LOGGING mode to another database, the new tablespace will not maintain the FORCE LOGGING mode.

Consider Using NOLOGGING When Creating Tables
To create a table most efficiently, use the NOLOGGING clause in the CREATE TABLE...AS SELECT statement. The NOLOGGING clause causes minimal redo information to be generated during the table creation. This has the following benefits:

* Space is saved in the redo log files.
* The time it takes to create the table is decreased.
* Performance improves for parallel creation of large tables.

The NOLOGGING clause also specifies that subsequent direct loads using SQL*Loader
and direct load INSERT operations are not logged. Subsequent DML statements
(UPDATE, DELETE, and conventional path insert) are unaffected by the NOLOGGING attribute of the table and generate redo.
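A minimal sketch of the above, using made-up table names:

CREATE TABLE big_table_copy NOLOGGING
AS SELECT * FROM big_table;

-- subsequent direct load insert generates minimal redo because of the table's NOLOGGING attribute
INSERT /*+ APPEND */ INTO big_table_copy SELECT * FROM big_table;
COMMIT;

-- conventional path DML is unaffected by NOLOGGING and generates redo as usual
DELETE FROM big_table_copy WHERE ROWNUM <= 100;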

If you cannot afford to lose the table after you have created it (for example, you will no longer have access to the data used to create the table) you should take a backup immediately after the table is created. In some situations, such as for tables that are created for temporary use, this precaution may not be necessary.

In general, the relative performance improvement of specifying NOLOGGING is greater for larger tables than for smaller tables. For small tables, NOLOGGING has little effect on the time it takes to create a table. However, for larger tables the performance improvement can be significant, especially when you are also parallelizing the table creation.

Tablespace Resize

Query to see if tablespaces (datafiles) can be reduced in size


set verify off
column file_name format a50 word_wrapped
column smallest format 999,990 heading "Smallest|Size|Poss."
column currsize format 999,990 heading "Current|Size"
column savings format 999,990 heading "Poss.|Savings"
break on report
compute sum of savings on report

column value new_val blksize
select value from v$parameter where name = 'db_block_size'
/

select file_name,
ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) smallest,
ceil( blocks*&&blksize/1024/1024) currsize,
ceil( blocks*&&blksize/1024/1024) -
ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) savings
from dba_data_files a,
( select file_id, max(block_id+blocks-1) hwm
from dba_extents
group by file_id ) b
where a.file_id = b.file_id(+)
/

column cmd format a75 word_wrapped

select 'alter database datafile '''||file_name||''' resize ' ||
ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) || 'm;' cmd
from dba_data_files a,
( select file_id, max(block_id+blocks-1) hwm
from dba_extents
group by file_id ) b
where a.file_id = b.file_id(+)
and ceil( blocks*&&blksize/1024/1024) -
ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) > 0
/


Bind Variables in PL/SQL


Variables in PL/SQL are implicitly bind variables.

 create or replace procedure dsal(p_empno in number)
as
begin
update emp set sal=sal*2 where empno = p_empno;
commit;
end;
/


Now you might be thinking that you've got to replace the p_empno with a bind variable. However, the good news is that every reference to a PL/SQL variable is in fact a bind variable.

Dynamic SQL

In fact, the only time you need to consciously decide to use bind variables when working with PL/SQL is when using Dynamic SQL.

Dynamic SQL allows you to execute a string containing SQL using the EXECUTE IMMEDIATE command. The next example would require a hard parse every time it is submitted:

create or replace procedure dsal(p_empno in number)
as
begin
execute immediate
'update emp set sal = sal*2 where empno = '||p_empno;
commit;
end;
/


The way to use bind variables instead is to change the EXECUTE IMMEDIATE command as follows:

create or replace procedure dsal(p_empno in number)
as
begin
execute immediate
'update emp set
sal = sal*2 where empno = :x' using p_empno;
commit;
end;
/

And that's all there is to it. One thing to bear in mind, though, is that you can't substitute actual object names (tables, views, columns etc.) with bind variables - you can only substitute literals. If the object name is generated at runtime, you'll still need to string concatenate these parts, and the SQL will only match with those already in the shared pool when the same object name comes up. However, whenever you're using dynamic SQL to build up the predicate part of a statement, use bind variables instead and you'll dramatically reduce the amount of latch contention going on.
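As a sketch of that last point (the procedure name and table parameter here are made up, and DBMS_ASSERT is used only as a basic sanity check on the name), the object name is concatenated while the predicate value stays a bind variable:

create or replace procedure dsal_any(p_table in varchar2, p_empno in number)
as
begin
-- the object name cannot be a bind variable, so it is concatenated;
-- the predicate value is still passed as a bind variable
execute immediate
'update ' || dbms_assert.simple_sql_name(p_table) ||
' set sal = sal*2 where empno = :x' using p_empno;
commit;
end;
/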

The Performance Killer

Just to give you a tiny idea of how huge of a difference this can make performance wise, you only need to run a very small test:

Here is the Performance Killer ....

SQL> alter system flush shared_pool;
SQL> set serveroutput on;

declare
type rc is ref cursor;
l_rc rc;
l_dummy all_objects.object_name%type;
l_start number default dbms_utility.get_time;
begin
for i in 1 .. 1000
loop
open l_rc for
'select object_name
from all_objects
where object_id = ' || i;
fetch l_rc into l_dummy;
close l_rc;
-- dbms_output.put_line(l_dummy);
end loop;
dbms_output.put_line
(round((dbms_utility.get_time-l_start)/100, 2) ||
' Seconds...' );
end;
/


Modified code



declare
type rc is ref cursor;
l_rc rc;
l_dummy all_objects.object_name%type;
l_start number default dbms_utility.get_time;
begin
for i in 1 .. 1000
loop
open l_rc for
'select object_name
from all_objects
where object_id = :x'
using i;
fetch l_rc into l_dummy;
close l_rc;
-- dbms_output.put_line(l_dummy);
end loop;
dbms_output.put_line
(round((dbms_utility.get_time-l_start)/100, 2) ||
' Seconds...' );
end;
/


The modified code runs faster than the earlier version because it reduces the number of hard parses required. This is good for scalability, as less CPU and memory are consumed.

Monday, August 25, 2008

ORA-4031 Common Analysis Diagnostic Scripts

Metalink note :

430473.1
396940.1

Library cache latch contention

Library cache latch contention is typically caused by NOT using bind variables. It is due to excessive parsing of statements.

One way to see if this might be the case in your situation is to run a script like:


create table t1 as select sql_text from v$sqlarea;

alter table t1 add sql_text_wo_constants varchar2(1000);

create or replace function
remove_constants( p_query in varchar2 ) return varchar2
as
l_query long;
l_char varchar2(1);
l_in_quotes boolean default FALSE;
begin
for i in 1 .. length( p_query )
loop
l_char := substr(p_query,i,1);
if ( l_char = '''' and l_in_quotes )
then
l_in_quotes := FALSE;
elsif ( l_char = '''' and NOT l_in_quotes )
then
l_in_quotes := TRUE;
l_query := l_query || '''#';
end if;
if ( NOT l_in_quotes ) then
l_query := l_query || l_char;
end if;
end loop;
l_query := translate( l_query, '0123456789', '@@@@@@@@@@' );
for i in 0 .. 8 loop
l_query := replace( l_query, lpad('@',10-i,'@'), '@' );
l_query := replace( l_query, lpad(' ',10-i,' '), ' ' );
end loop;
return upper(l_query);
end;
/
update t1 set sql_text_wo_constants = remove_constants(sql_text);

select sql_text_wo_constants, count(*)
from t1
group by sql_text_wo_constants
having count(*) > 100
order by 2
/



The output of that last query will show you statements that are identical in the shared
pool after all numbers and character string constants have been removed. These
statements -- and more importantly their counts -- are the potential bottlenecks. In
addition to causing the contention, they will be HUGE cpu consumers.
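To check the library cache latch statistics themselves, a simple query against v$latch along these lines can also be run (just a sketch; it shows cumulative figures since instance startup):

select name, gets, misses, sleeps
from v$latch
where name like 'library cache%';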

Thursday, August 21, 2008

NETCA fails on 11g client installation

NETCA on Oracle 11g client installation fails on RHEL 4 with the following error

UnsatisfiedLinkError exception loading native library: njni11

java.lang.UnsatisfiedLinkError: jniGetOracleHome
at oracle.net.common.NetGetEnv.jniGetOracleHome(Native Method)
at oracle.net.common.NetGetEnv.getOracleHome(Unknown Source)
at oracle.net.ca.NetCA.main(Unknown Source)
Oracle Net Services configuration failed. The exit code is -1


To rectify this, install the libaio and libaio-devel RPMs on the machine.

Sunday, August 17, 2008

11g Partitioning Enhancements

Interval Partitioning

Interval partitioning is an extension of range partitioning which instructs the database to automatically create partitions of a specified interval when data inserted into the table exceeds all of the range partitions.

You must specify at least one range partition. The range partitioning key value determines the high value of the range partitions, which is called the transition point, and the database creates interval partitions for data beyond that transition point.

Interval partitioning restrictions:
* You can only specify one partitioning key column, and it must be of NUMBER or DATE type.
* Interval partitioning is not supported for index-organized tables.
* You cannot create a domain index on an interval-partitioned table.


CREATE TABLE interval_sales
( prod_id NUMBER(6)
, cust_id NUMBER
, time_id DATE
, channel_id CHAR(1)
, promo_id NUMBER(6)
, quantity_sold NUMBER(3)
, amount_sold NUMBER(10,2)
)
PARTITION BY RANGE (time_id)
INTERVAL(NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION p0 VALUES LESS THAN (TO_DATE('1-1-2005', 'DD-MM-YYYY')),
PARTITION p1 VALUES LESS THAN (TO_DATE('1-1-2006', 'DD-MM-YYYY')),
PARTITION p2 VALUES LESS THAN (TO_DATE('1-7-2006', 'DD-MM-YYYY')),
PARTITION p3 VALUES LESS THAN (TO_DATE('1-1-2007', 'DD-MM-YYYY')) );

Above the transition point of January 1, 2007, partitions are created with a width of one month. The high bound of partition p3 represents the transition point; p3 and all partitions below it (p0, p1, and p2 in this example) are in the range section, while all partitions above it fall into the interval section.


System Partitioning
System partitioning does not have any partitioning keys so the mapping of the rows to a particular partition is not implicit. Instead, you specify the partition to which a row maps by using partition extended syntax when inserting a row.

Without a partitioning key the usual performance benefits of partitioned tables are not available for system partitioned tables. There is no support for traditional partition pruning or partition wise joins. Partition pruning is only achieved by accessing the same partitions in the system partitioned tables as those that were accessed in the base table.

System partitioned tables do provide the manageability advantages of equi-partitioning. For example, a nested table can be created as a system partitioned table that has the same number of partitions as the base table. A domain index can be backed up by a system partitioned table that has the same number of partitions as the base table.

System Partitioning Guidelines

The following operations are supported for system partitioned tables:

* Partition maintenance operations and other DDLs (See exceptions below)
* Creation of local indexes.
* Creation of local bitmapped indexes.
* Creation of global indexes.
* All DML operations.
* INSERT AS SELECT with partition extended syntax:
INSERT INTO tabname PARTITION (partition-name | number | bind variable) AS subquery

The following operations are not supported for system partitioning:

* Unique local indexes are not supported because they require a partitioning key.
* CREATE TABLE AS SELECT and INSERT INTO tabname AS subquery: since there is no partitioning method, it is not possible to distribute rows to partitions. Instead, the user should first create the table and then insert rows into each partition.
* SPLIT PARTITION operations.

CREATE TABLE systab (c1 integer, c2 integer) 
PARTITION BY SYSTEM
(
PARTITION p1 TABLESPACE tbs_1,
PARTITION p2 TABLESPACE tbs_2,
PARTITION p3 TABLESPACE tbs_3,
PARTITION p4 TABLESPACE tbs_4
);

INSERT and MERGE statements must use partition extended syntax to identify the particular partition a row should go into. For example, the tuple (4,5) can be inserted into any one of the above four partitions:
INSERT INTO systab PARTITION (p1) VALUES (4,5); /* partition p1 */
INSERT INTO systab PARTITION (1) VALUES (4,5); /* first partition */
INSERT INTO systab PARTITION (:pno) VALUES (4,5); /* pno bound to 1/p1 */

Or:
INSERT INTO systab PARTITION (p2) VALUES (4,5); /* partition p2 */
INSERT INTO systab PARTITION (2) VALUES (4,5); /* second partition */
INSERT INTO systab PARTITION (:pno) VALUES (4,5); /* pno bound to 2/p2 */



Composite Partitioning Enhancements

With Oracle Database 11g list partitioning can be a top level partitioning method for composite partitioned tables giving us List-List, List-Hash, List-Range and Range-Range composite methods.
With the Interval partitioning, the following composite partitioning methods are also supported: Interval-Range, Interval-List and Interval-Hash.

* Range-Range Partitioning: Composite range-range partitioning enables logical range partitioning along two dimensions; for example, partition by order_date and range subpartition by shipping_date.

* List-Range Partitioning: Composite list-range partitioning enables logical range subpartitioning within a given list partitioning strategy; for example, list partition by country_id and range subpartition by order_date.

* List-Hash Partitioning: Composite list-hash partitioning enables hash subpartitioning of a list-partitioned object; for example, to enable partition-wise joins.

* List-List Partitioning: Composite list-list partitioning enables logical list partitioning along two dimensions; for example, list partition by country_id and list subpartition by sales_channel.

eg: Composite Range-Range Partitioning

CREATE TABLE shipments(
order_date DATE NOT NULL,
delivery_date DATE NOT NULL)
PARTITION BY RANGE (order_date)
SUBPARTITION BY RANGE (delivery_date)
(PARTITION p_2006_jul VALUES LESS THAN
(TO_DATE('01-AUG-2006','dd-MON-yyyy'))
( SUBPARTITION p06_jul_e VALUES LESS THAN
(TO_DATE('15-AUG-2006','dd-MON-yyyy'))
, SUBPARTITION p06_jul_a VALUES LESS THAN
(TO_DATE('01-SEP-2006','dd-MON-yyyy'))
, SUBPARTITION p06_jul_l VALUES LESS THAN (MAXVALUE))
,PARTITION p_2006_aug VALUES LESS THAN
(TO_DATE('01-SEP-2006','dd-MON-yyyy'))
( SUBPARTITION p06_aug_e VALUES LESS THAN
(TO_DATE('15-SEP-2006','dd-MON-yyyy'))
, SUBPARTITION p06_aug_a VALUES LESS THAN
(TO_DATE('01-OCT-2006','dd-MON-yyyy'))
, SUBPARTITION p06_aug_l VALUES LESS THAN (MAXVALUE)));


The above example illustrates how range-range partitioning might be used. The example tracks shipments.

A row is mapped to a partition by checking whether the value of the partitioning column for a row falls within a specific partition range. The row is then mapped to a subpartition within that partition by identifying whether the value of the subpartitioning column falls within a specific range.
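As a further sketch (table, columns and subpartition values are invented for illustration), an interval-list composite table could be created like this:

CREATE TABLE sales_by_channel
( time_id DATE NOT NULL,
channel_id CHAR(1) NOT NULL,
amount_sold NUMBER(10,2)
)
PARTITION BY RANGE (time_id) INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
SUBPARTITION BY LIST (channel_id)
SUBPARTITION TEMPLATE
( SUBPARTITION sp_direct VALUES ('D'),
SUBPARTITION sp_internet VALUES ('I'),
SUBPARTITION sp_other VALUES (DEFAULT)
)
(PARTITION p_before_2007 VALUES LESS THAN (TO_DATE('1-1-2007','DD-MM-YYYY')));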

Virtual Column-Based Partitioning

The values for virtual columns are not physically stored in the table row on disk; rather, they are evaluated on demand. The functions or expressions describing the virtual columns should be deterministic and pure, meaning the same set of input values should return the same output values.

Virtual columns can be used like any other table columns. They can be indexed, used in queries, DML and DDL statements. Tables and indexes can be partitioned on a virtual column and even statistics can be gathered upon them.

These columns can be specified during a CREATE, or ALTER table operation and can be defined to be either visible or hidden. Virtual columns share the same SQL namespace as other real table columns and conform to the data type of the underlying expression that describes it. These columns can be used in queries like any other table columns providing a simple, elegant and consistent mechanism of accessing expressions in a SQL statement.

Partition pruning takes place for virtual column partition keys when the predicates on the partitioning key are of the following types:

* Equality or Like
* List
* Range
* TBL$
* Partition extended names

Given a join operation between two tables, the optimizer recognizes when a partition-wise join (full or partial) is applicable, decides whether or not to use it, and annotates the join properly when it decides to use it. This applies to both serial and parallel cases.


To recognize a full partition-wise join, the optimizer relies on the definition of equi-partitioning of the two objects; this definition includes the equivalence of the virtual expression on which the tables were partitioned.


CREATE TABLE employees
(employee_id number(6) not null, first_name varchar2(30),
last_name varchar2(40) not null, email varchar2(25),
phone_number varchar2(20), hire_date date not null,
job_id varchar2(10) not null, salary number(8,2),
commission_pct number(2,2), manager_id number(6),
department_id number(4),
total_compensation as (salary *( 1+commission_pct))
)
PARTITION BY RANGE (total_compensation)
(
PARTITION p1 VALUES LESS THAN (50000),
PARTITION p2 VALUES LESS THAN (100000),
PARTITION p3 VALUES LESS THAN (150000),
PARTITION p4 VALUES LESS THAN (MAXVALUE)
);



Reference Partitioning

Reference partitioning allows the partitioning of a table based on the partitioning scheme of the table referenced in its referential constraint. The partitioning key is resolved through an existing parent-child relationship, enforced by active primary key or foreign key constraints. This means that tables with a parent-child relationship can be logically equi-partitioned by inheriting the partitioning key from the parent table without duplicating the key columns. The logical dependency also automatically cascades partition maintenance operations, making application development easier and less error-prone.

To create a reference-partitioned table, you specify a PARTITION BY REFERENCE clause in the CREATE TABLE statement. This clause specifies the name of a referential constraint and this constraint becomes the partitioning referential constraint that is used as the basis for reference partitioning in the table.


CREATE TABLE orders
(order_id NUMBER(12),
order_date DATE,
order_mode VARCHAR2(8),
customer_id NUMBER(6),
order_status NUMBER(2),
order_total NUMBER(8,2),
sales_rep_id NUMBER(6),
promotion_id NUMBER(6),
CONSTRAINT orders_pk PRIMARY KEY(order_id)
)
PARTITION BY RANGE(order_date)
(PARTITION Q1_2005 VALUES LESS THAN
(TO_DATE('01-APR-2005','DD-MON-YYYY')),
PARTITION Q2_2005 VALUES LESS THAN
(TO_DATE('01-JUL-2005','DD-MON-YYYY')),
PARTITION Q3_2005 VALUES LESS THAN
(TO_DATE('01-OCT-2005','DD-MON-YYYY')),
PARTITION Q4_2005 VALUES LESS THAN
(TO_DATE('01-JAN-2006','DD-MON-YYYY')));




CREATE TABLE order_items
( order_id NUMBER(12) NOT NULL,
line_item_id NUMBER(3) NOT NULL,
product_id NUMBER(6) NOT NULL,
unit_price NUMBER(8,2),
quantity NUMBER(8),
CONSTRAINT order_items_fk
FOREIGN KEY(order_id) REFERENCES orders(order_id)
)
PARTITION BY REFERENCE(order_items_fk);


The reference-partitioned child table ORDER_ITEMS example above is created with four partitions, Q1_2005, Q2_2005, Q3_2005, and Q4_2005, where each partition contains the order_items rows corresponding to orders in the respective parent partition.

If partition descriptors are provided, then the number of partitions described must be exactly equal to the number of partitions or sub-partitions in the referenced table. If the parent table is a composite partitioned table, then the table will have one partition for each sub-partition of its parent; otherwise the table will have one partition for each partition of its parent.

Partition bounds cannot be specified for the partitions of a reference-partitioned table. The partitions of a reference-partitioned table can be named. If a partition is not explicitly named, then it will inherit its name from the corresponding partition in the parent table, unless this inherited name conflicts with one of the explicit names given. In this case, the partition will have a system-generated name.

Partitions of a reference-partitioned table will co-locate with the corresponding partition of the parent table, if no explicit tablespace is specified for the reference-partitioned table's partition.

Thursday, August 7, 2008

Useful SQLs

Child tables not having an index on foreign key columns:

select table_name, column_name
from user_cons_columns
where constraint_name in
      (select constraint_name
       from user_constraints
       where r_constraint_name in (select constraint_name
                                   from user_constraints
                                   where table_name = 'table_name')
       and constraint_type = 'R')
minus
(select table_name, column_name from user_ind_columns);



Child tables of a parent table:

select table_name
from user_constraints
where r_constraint_name in (select constraint_name
                            from user_constraints
                            where table_name = 'table name');


Friday, July 18, 2008

Oracle Streams in Brief - 2

The Redo Log and a Capture Process
A capture process is an optional Oracle background process that scans the database redo log to capture DML and DDL changes made to database objects. When a capture process is configured to capture changes from a redo log, the database where the changes were generated is called the source database.

A capture process can run on the source database or on a remote database. When a capture process runs on the source database, the capture process is a local capture process. When a capture process runs on a remote database, the capture process is called a downstream capture process, and the remote database is called the downstream database.

Logical Change Records (LCRs)
A capture process reformats changes captured from the redo log into LCRs. An LCR is a message with a specific format that describes a database change. A capture process captures two types of LCRs: row LCRs and DDL LCRs.

After capturing an LCR, a capture process enqueues a message containing the LCR into a queue. A capture process is always associated with a single ANYDATA queue, and it enqueues messages into this queue only. For improved performance, captured messages always are stored in a buffered queue, which is System Global Area (SGA) memory associated with an ANYDATA queue. You can create multiple queues and associate a different capture process with each queue.

Capture Process Rules
A capture process either captures or discards changes based on rules that you define. Each rule specifies the database objects and types of changes for which the rule evaluates to TRUE. You can place these rules in a positive rule set or negative rule set for the capture process.

If a rule evaluates to TRUE for a change, and the rule is in the positive rule set for a capture process, then the capture process captures the change. If a rule evaluates to TRUE for a change, and the rule is in the negative rule set for a capture process, then the capture process discards the change. If a capture process has both a positive and a negative rule set, then the negative rule set is always evaluated first.

Types of DML Changes Captured
When you specify that DML changes made to certain tables should be captured, a capture process captures the following types of DML changes made to these tables:
Insert
Update
Delete
Merge
Piecewise updates to LOBs

A capture process converts each MERGE change into an INSERT or UPDATE change. MERGE is not a valid command type in a row LCR.

A capture process can capture changes made to an index-organized table only if the index-organized table does not contain any columns of the following datatypes:

ROWID
UROWID
User defined types (object types, REFs, varrays, and nested tables)

A capture process ignores CALL, EXPLAIN PLAN, or LOCK TABLE statements.

A capture process cannot capture DML changes made to temporary tables or object tables. A capture process raises an error if it attempts to capture such changes.

If you share a sequence at multiple databases, then sequence values used for individual rows at these databases might vary. Also, changes to actual sequence values are not captured. For example, if a user references a NEXTVAL or sets the sequence, then a capture process does not capture changes resulting from these operations.

DDL Changes and Capture Processes
A capture process captures the DDL changes that satisfy its rule sets, except for the following types of DDL changes

Alter Database
Create Controlfile
Create Database
Create Pfile
Create SPfile
Flashback Database

A capture process can capture DDL statements, but not the results of DDL statements
unless the DDL statement is a CREATE TABLE AS SELECT statement.

Some types of DDL changes that are captured by a capture process cannot be applied by an apply process. If an apply process receives a DDL LCR that specifies an operation that cannot be applied, then the apply process ignores the DDL LCR and records information about it in the trace file for the apply process.

Other Types of Changes Ignored by a Capture Process

The session control statements ALTER SESSION and SET ROLE.

The system control statement ALTER SYSTEM.

Invocations of PL/SQL procedures, which means that a call to a PL/SQL procedure is not captured. However, if a call to a PL/SQL procedure causes changes to database objects, then these changes can be captured by a capture process if the changes satisfy the capture process rule sets.

Changes made to a table or schema by online redefinition using the DBMS_REDEFINITION package. Online table redefinition is supported on a table for which a capture process captures changes, but the logical structure of the table before online redefinition must be the same as the logical structure after online redefinition.

Streams Capture Processes and Oracle Real Application Clusters

You can configure a Streams capture process to capture changes in an Oracle Real Application Clusters (RAC) environment. If you use one or more capture processes and RAC in the same environment, then all archived logs that contain changes to be captured by a capture process must be available for all instances in the RAC environment. In a RAC environment, a capture process reads changes made by all instances.

Each capture process is started and stopped on the owner instance for its ANYDATA queue, even if the start or stop procedure is run on a different instance. Also, a capture process will follow its queue to a different instance if the current owner instance becomes unavailable. The queue itself follows the rules for primary instance and secondary instance ownership. If the owner instance for a queue table containing a queue used by a capture process becomes unavailable, then queue ownership is transferred automatically to another instance in the cluster. In addition, if the capture process was enabled when the owner instance became unavailable, then the capture process is restarted automatically on the new owner instance. If the capture process was disabled when the owner instance became unavailable, then the capture process remains disabled on the new owner instance.

The DBA_QUEUE_TABLES data dictionary view contains information about the owner instance for a queue table. Also, any parallel execution servers used by a single capture process run on a single instance in a RAC environment.
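For example, to see the owner instance of each queue table (this simply queries the view mentioned above):

select owner, queue_table, owner_instance, primary_instance, secondary_instance
from dba_queue_tables;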

LogMiner supports the LOG_ARCHIVE_DEST_n initialization parameter, and Streams capture processes use LogMiner to capture changes from the redo log. If an archived log file is inaccessible from one destination, a local capture process can read it from another accessible destination. On a RAC database, this ability also enables you to use cross instance archival (CIA) such that each instance archives its files to all other instances. This solution cannot detect or resolve gaps caused by missing archived log files. Hence, it can be used only to complement an existing solution to have the archived files shared between all instances.

Capture Process Architecture
A capture process is an optional Oracle background process whose process name is cnnn, where nnn is a capture process number. Valid capture process names include c001 through c999. A capture process captures changes from the redo log by using the infrastructure of LogMiner. Streams configures LogMiner automatically. You can create, alter, start, stop, and drop a capture process, and you can define capture process rules that control which changes a capture process captures.

Capture Process Components
One reader server that reads the redo log and divides the redo log into regions.

One or more preparer servers that scan the regions defined by the reader server in parallel and perform prefiltering of changes found in the redo log. Prefiltering involves sending partial information about changes, such as schema and object name for a change, to the rules engine for evaluation, and receiving the results of the evaluation.

One builder server that merges redo records from the preparer servers. These are redo records that either evaluated to TRUE during partial evaluation, or for which partial evaluation was inconclusive. The builder server preserves the SCN order of these redo records and passes the merged redo records to the capture process.

Each reader server, preparer server, and builder server is a parallel execution server. A capture process (cnnn) is an Oracle background process.

Capture Process States
INITIALIZING - Starting up.

WAITING FOR DICTIONARY REDO - Waiting for redo log files containing the dictionary build related to the first SCN to be added to the capture process session. A capture process cannot begin to scan the redo log files until all of the log files containing the dictionary build have been added.

DICTIONARY INITIALIZATION - Processing a dictionary build.

MINING (PROCESSED SCN = scn_value) - Mining a dictionary build at the SCN scn_value.

LOADING (step X of Y) - Processing information from a dictionary build and currently at step X in a process that involves Y steps, where X and Y are numbers.

CAPTURING CHANGES - Scanning the redo log for changes that evaluate to TRUE against the capture process rule sets.

WAITING FOR REDO - Waiting for new redo log files to be added to the capture process session. The capture process has finished processing all of the redo log files added to its session. This state is possible if there is no activity at a source database. For a downstream capture process, this state is possible if the capture process is waiting for new log files to be added to its session.

EVALUATING RULE - Evaluating a change against a capture process rule set.

CREATING LCR - Converting a change into an LCR.

ENQUEUING MESSAGE - Enqueuing an LCR that satisfies the capture process rule sets into the capture process queue.

PAUSED FOR FLOW CONTROL - Unable to enqueue LCRs either because of low memory or because propagations and apply processes are consuming messages slower than the capture process is creating them. This state indicates flow control that is used to reduce spilling of captured messages when propagation or apply has fallen behind or is unavailable.

SHUTTING DOWN - Stopping.
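The current state of a capture process is exposed through the V$STREAMS_CAPTURE view, for example:

select capture_name, state
from v$streams_capture;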

Multiple Capture Processes in a Single Database

If you run multiple capture processes in a single database, consider increasing the size of the System Global Area (SGA) for each instance. Use the SGA_MAX_SIZE initialization parameter to increase the SGA size. Also, if the size of the Streams pool is not managed automatically in the database (the size of the Streams pool is managed automatically if the SGA_TARGET initialization parameter is set to a nonzero value), then you should increase the size of the Streams pool by 10 MB for each capture process parallelism.

If you have two capture processes running in a database, and the parallelism parameter is set to 4 for one of them and 1 for the other, then increase the Streams pool by 50 MB (4 + 1 = 5 parallelism).

Oracle recommends that each ANYDATA queue used by a capture process, propagation, or apply process have captured messages from at most one capture process from a particular source database. Therefore, a separate queue should be used for each capture process that captures changes originating at a particular source database.

Capture Process Checkpoints
A checkpoint is information about the current state of a capture process that is stored persistently in the data dictionary of the database running the capture process. A capture process tries to record a checkpoint at regular intervals called checkpoint intervals.

Required Checkpoint SCN
The SCN that corresponds to the lowest checkpoint for which a capture process requires redo data is the required checkpoint SCN. The redo log file that contains the required checkpoint SCN, and all subsequent redo log files, must be available to the capture process. If a capture process is stopped and restarted, then it starts scanning the redo log from the SCN that corresponds to its required checkpoint SCN. The required checkpoint SCN is important for recovery if a database stops unexpectedly.

Maximum Checkpoint SCN
The SCN that corresponds to the last checkpoint recorded by a capture process is the maximum checkpoint SCN. If you create a capture process that captures changes from a source database, and other capture processes already exist which capture changes from the same source database, then the maximum checkpoint SCNs of the existing capture processes can help you decide whether the new capture process should create a new LogMiner data dictionary or share one of the existing LogMiner data dictionaries.

Checkpoint Retention Time
The checkpoint retention time is the amount of time, in number of days, that a capture process retains checkpoints before purging them automatically.

Whenever the first SCN is reset for a capture process, the capture process purges information about archived redo log files prior to the new first SCN from its LogMiner data dictionary. After this information is purged, the archived redo log files remain on the hard disk, but the files are not needed by the capture process. The PURGEABLE column in the DBA_REGISTERED_ARCHIVED_LOG view displays YES for the archived redo log files that are no longer needed. These files can be removed from disk or moved to another location without affecting the capture process.
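For example, to list the registered archived log files and whether they are purgeable:

select name, purgeable
from dba_registered_archived_log;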

If you create a capture process using the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package, then you can specify the checkpoint retention time, in days, using the checkpoint_retention_time parameter. The default checkpoint retention time is 60 days if the checkpoint_retention_time parameter is not specified in the CREATE_CAPTURE procedure, or if you use the DBMS_STREAMS_ADM package to create the capture process.

You can change the checkpoint retention time for a capture process by specifying a new time in the ALTER_CAPTURE procedure in the DBMS_CAPTURE_ADM package. If you do not want checkpoints for a capture process to be purged automatically, then specify DBMS_CAPTURE_ADM.INFINITE for the checkpoint_retention_time parameter in CREATE_CAPTURE or ALTER_CAPTURE.
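A minimal sketch of changing the retention (the capture process name here is made up):

begin
dbms_capture_adm.alter_capture(
capture_name              => 'strm01_capture',
checkpoint_retention_time => 30);
end;
/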

Capture Process Creation
You can create a capture process using the DBMS_STREAMS_ADM package or the DBMS_CAPTURE_ADM package. Using the DBMS_STREAMS_ADM package to create a capture process is simpler because defaults are used automatically for some configuration options.

Alternatively, using the DBMS_CAPTURE_ADM package to create a capture process is more flexible, and you create one or more rule sets and rules for the capture process either before or after it is created. You can use the procedures in the DBMS_STREAMS_
ADM package or the DBMS_RULE_ADM package to add rules to a rule set for the capture process.

When you create a capture process using a procedure in the DBMS_STREAMS_ADM package and generate one or more rules in the positive rule set for the capture process, the objects for which changes are captured are prepared for instantiation automatically, unless it is a downstream capture process and there is no database link from the downstream database to the source database.

When you create a capture process using the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package, you should prepare for instantiation any objects for which you plan to capture changes as soon as possible after capture process creation.
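As a sketch of the DBMS_STREAMS_ADM route (the queue, capture process, schema and table names below are illustrative only), a capture process and a rule in its positive rule set can be created implicitly like this:

begin
dbms_streams_adm.add_table_rules(
table_name     => 'hr.employees',
streams_type   => 'capture',
streams_name   => 'strm01_capture',
queue_name     => 'strmadmin.streams_queue',
include_dml    => true,
include_ddl    => false,
inclusion_rule => true);
end;
/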

The LogMiner Data Dictionary for a Capture Process

A capture process requires a data dictionary that is separate from the primary data dictionary for the source database. This separate data dictionary is called a LogMiner data dictionary. There can be more than one LogMiner data dictionary for a particular source database. If there are multiple capture processes capturing changes from the source database, then two or more capture processes can share a LogMiner data dictionary, or each capture process can have its own LogMiner data dictionary. If the LogMiner data dictionary needed by a capture process does not exist, then the capture process populates it using information in the redo log when the capture process is started for the first time.

The Streams Data Dictionary

Propagations and apply processes use a Streams data dictionary to keep track of the database objects from a particular source database. A Streams data dictionary is populated whenever one or more database objects are prepared for instantiation at a source database. Specifically, when a database object is prepared for instantiation, it is recorded in the redo log. When a capture process scans the redo log, it uses this information to populate the local Streams data dictionary for the source database. In the case of local capture, this Streams data dictionary is at the source database. In the case of downstream capture, this Streams data dictionary is at the downstream database.

ARCHIVELOG Mode and a Capture Process

A local capture process reads online redo logs whenever possible and archived redo log files otherwise. Therefore, the source database must be running in ARCHIVELOG mode when a local capture process is configured to capture changes.

A real-time downstream capture process reads online redo data from its source database whenever possible and archived redo log files that contain redo data from the source database otherwise. In this case, the redo data from the source database is stored in the standby redo log at the downstream database, and the archiver at the downstream database archives the redo data in the standby redo log. Therefore, both the source database and the downstream database must be running in ARCHIVELOG mode when a real-time downstream capture process is configured to capture changes.

An archived-log downstream capture process always reads archived redo log files from its source database. Therefore, the source database must be running in ARCHIVELOG mode when an archived-log downstream capture process is configured to capture changes.

You can query the REQUIRED_CHECKPOINT_SCN column in the DBA_CAPTURE data dictionary view to determine the required checkpoint SCN for a capture process. When the capture process is restarted, it scans the redo log from the required checkpoint SCN forward. Therefore, the redo log file that includes the required checkpoint SCN, and all subsequent redo log files, must be available to the capture process.

You must keep an archived redo log file available until you are certain that no capture process will need that file. The first SCN for a capture process can be reset to a higher value, but it cannot be reset to a lower value. Therefore, a capture process will never need the redo log files that contain information prior to its first SCN. Query the DBA_LOGMNR_PURGED_LOG data dictionary view to determine which archived redo log files will never be needed by any capture process.
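For example, the two views can be queried like this:

select capture_name, required_checkpoint_scn
from dba_capture;

select *
from dba_logmnr_purged_log;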

RMAN and Archived Redo Log Files Required by a Capture Process

Some Recovery Manager (RMAN) commands delete archived redo log files. If one of these RMAN commands is used on a database that is running one or more local capture processes, then the RMAN command does not delete archived redo log files that are needed by a local capture process. That is, the RMAN command does not delete archived redo log files that contain changes with SCN values that are equal to or greater than the required checkpoint SCN for a local capture process.

The RMAN command DELETE OBSOLETE permanently purges the archived redo log files that are no longer needed. This command only deletes the archived redo log files in which all of the changes are less than the required checkpoint SCN for a local capture process.

The RMAN command BACKUP ARCHIVELOG ALL DELETE INPUT copies the archived redo log files and deletes the original files after completing the backup. This command only deletes the archived redo log files in which all of the changes are less than the required checkpoint SCN for a local capture process. If archived redo log files are not deleted because they contain changes required by a capture process, then RMAN displays a warning message about skipping the delete operation for these files.

If a database is a source database for a downstream capture process, then these RMAN commands might delete archived redo log files that have not been transferred to the downstream database and are required by a downstream capture process. Therefore, before running these commands on the source database, make sure any archived redo log files needed by a downstream database have been transferred to the downstream database.

Capture Process Parameters

The parallelism capture process parameter controls the number of preparer servers used by a capture process. The preparer servers concurrently format changes found in the redo log into LCRs. Each reader server, preparer server, and builder server is a parallel execution server, and the number of preparer servers equals the number specified for the parallelism capture process parameter. So, if parallelism is set to 5, then a capture process uses a total of seven parallel execution servers, assuming seven parallel execution servers are available: one reader server, five preparer servers, and one builder server.

The time_limit capture process parameter specifies the amount of time a capture process runs, and the message_limit capture process parameter specifies the number of messages a capture process can capture. The capture process stops automatically when it reaches one of these limits.

The disable_on_limit parameter controls whether a capture process becomes disabled or restarts when it reaches a limit. If you set the disable_on_limit parameter to y, then the capture process is disabled when it reaches a limit and does not restart until you restart it explicitly. If, however, you set the disable_on_limit parameter to n, then the capture process stops and restarts automatically when it reaches a limit.
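These parameters are set with DBMS_CAPTURE_ADM.SET_PARAMETER; a minimal sketch (the capture process name is made up):

begin
dbms_capture_adm.set_parameter(
capture_name => 'strm01_capture',
parameter    => 'disable_on_limit',
value        => 'y');
end;
/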

Capture Process Rule Evaluation

A capture process evaluates the changes it finds in the redo log against its positive and negative rule sets, as described under Capture Process Rules above, and captures only those changes that the rule sets indicate should be captured.

Persistent Capture Process Status

A capture process maintains a persistent status when the database running the capture process is shut down and restarted. If a capture process is enabled when the database is shut down, then the capture process automatically starts when the database is restarted. Similarly, if a capture process is disabled or aborted when a database is shut down, then the capture process is not started and retains the disabled or aborted status when the database is restarted.

Wednesday, July 9, 2008

Oracle Streams in Brief

Capture Process
A capture process is an Oracle background process that scans the database redo log to capture DML and DDL changes made to database objects. A capture process formats these changes into messages called Logical Change Records (LCRs) and enqueues them into a queue.

There are two types of LCRs:
row LCRs contain information about a change to a row in a table resulting from a DML operation, and DDL LCRs contain information about a DDL change to a database object. Rules determine which changes are captured.

You can configure change capture locally at a source database or remotely at a downstream database. A local capture process runs at the source database and
captures changes from the local source database redo log.

The following types of configurations are possible for a downstream capture process:

A real-time downstream capture configuration means that the log writer process (LGWR) at the source database sends redo data from the online redo log to the downstream database. At the downstream database, the redo data is stored in the standby redo log, and the capture process captures changes from the standby redo log.

An archived-log downstream capture configuration means that archived redo log files from the source database are copied to the downstream database, and the capture process captures changes in these archived redo log files.

A capture process does not capture some types of DML and DDL changes, and it does not capture changes made in the SYS, SYSTEM, or CTXSYS schemas.


Message Staging and Propagation
Streams uses queues to stage messages for propagation or consumption. Propagations send messages from one queue to another, and these queues can be in the same database or in different databases. The queue from which the messages are propagated is called the source queue, and the queue that receives the messages is called the destination queue. There can be a one-to-many, many-to-one, or many-to-many relationship between source and destination queues.

Directed Networks
Streams enables you to configure an environment in which changes are shared through directed networks. In a directed network, propagated messages pass through one or more intermediate databases before arriving at a destination database where they are consumed. The messages might or might not be consumed at an intermediate database in addition to the destination database. Using Streams, you can choose which messages are propagated to each destination database, and you can specify the route
messages will traverse on their way to a destination database.

Explicit Enqueue and Dequeue of Messages
User applications can enqueue messages into a queue explicitly. The user applications can format these user-enqueued messages as LCRs or user messages, and an apply process, a messaging client, or a user application can consume these messages.
Messages that were enqueued explicitly into a queue can be propagated to another queue or explicitly dequeued from the same queue.

Apply Process
An apply process is an Oracle background process that dequeues messages from a queue and either applies each message directly to a database object or passes the message as a parameter to a user-defined procedure called an apply handler. Apply handlers include message handlers, DML handlers, DDL handlers, precommit handlers, and error handlers.

Typically, an apply process applies messages to the local database where it is running, but, in a heterogeneous database environment, it can be configured to apply messages at a remote non-Oracle database.

Messaging Client
A messaging client consumes user-enqueued messages when it is invoked by an application or a user. Rules determine which user-enqueued messages are dequeued by a messaging client. These user-enqueued messages can be LCRs or user messages.

Automatic Conflict Detection and Resolution
An apply process detects conflicts automatically when directly applying LCRs in a replication environment. A conflict is a mismatch between the old values in an LCR
and the expected data in a table. Typically, a conflict results when the same row in the source database and destination database is changed at approximately the same time.

When a conflict occurs, you need a mechanism to ensure that the conflict is resolved in accordance with your business rules. Streams offers a variety of prebuilt conflict handlers. Using these prebuilt handlers, you can define a conflict resolution system
for each of your databases that resolves conflicts in accordance with your business rules. If you have a unique situation that prebuilt conflict resolution handlers cannot resolve, then you can build your own conflict resolution handlers.

Tuesday, July 1, 2008

Dictionary Scripts

To resolve
ORA-39127: unexpected error from call to export_string
:=WMSYS.LT_EXPORT_PKG.schema_info_exp('SYS',1,1,'10.02.00.03.00',newblock)
ORA-04063: package "WMSYS.LTADM" has errors
ORA-06508: PL/SQL: could not find program unit being called: "WMSYS.LTADM"

run

@?/rdbms/admin/owmcpkgb.plb
@?/rdbms/admin/catexp
@?/rdbms/admin/utlrp

OLAP component invalid
@?/olap/admin/olap.sql SYSAUX TEMP;

Spatial component invalid
@?/md/admin/catmd

InterMedia component invalid
@?/ord/im/admin/iminst.sql


To resolve
ORA-04063: View "SYS.DBA_REGISTRY" has errors

run

@$ORACLE_HOME/rdbms/admin/prvtcr.plb

Other dictionary scripts

catadt.sql - Creates views that support user-defined types and object components.
cataudit.sql - Creates the sys.aud$ dictionary table,
catexp.sql - Creates views for the import/export utilities
catldr.sql - Creates views for the sql*loader tool
catpart.sql - Creates views for partitioning oracle option,

catalog.sql - create data dictionary views.
cataudit.sql - data dictionary views for auditing. (catnoaud.sql will remove)
catblock.sql - create views of oracle locks
catclust.sql - create cluster database specific views definitions
catdbsyn.sql - catalog dba synonyms (dba_synonyms.sql)
catexp7.sql - create v7 style export/import views
catexp81.sql - create internal views for export/import utility
catio.sql - collect i/o per table/object stats by statistical sampling
catjava.sql - catalog scripts for java
catjobq.sql - catalog views for the job queue
catjvm.sql - create user|dba|all_java_* views
catldap.sql - catalog for ldap pl/sql api
catldr.sql - views for the direct path of the loader
catlsby.sql - logical standby tables and views
catpart.sql - creates data dictionary views for the partitioning table.
catoctk.sql - catalog - oracle cryptographic toolkit
catol.sql - outline views and synonyms
catparr.sql - parallel-server specific views, performance queries, etc
catpatch.sql - script to apply catalog patch set
catpitr.sql - tablespace point in time specific views
catplug.sql - pluggable tablespace check views
catprc.sql - creates data dictionary views for types, stored procedures, and triggers.
catproc.sql - run all sql scripts for the procedural option
catqm.sql - creates the tables and views needed to run the xdb system
catqueue.sql - contains the queue dictionary information
catrep.sql - run all sql scripts for replication option
catrepc.sql - repcat tables, views, and sequences
catrepm.sql - catalog script for master replication packages
catrepr.sql - catalog replication remove
catreps.sql - installs pl/sql packages for snapshot functionality.
catrls.sql - catalog views for row level security
catrm.sql - packages for the dbms resource manager
catrpc.sql - rpc procedural option
catrule.sql - rules catalog creation
catsnap.sql - data dictionary views for snapshots
catsnmp.sql - creates an snmpagent role
catspace.sql - catalog space management (remove with catspaced.sql)
catsum.sql - data dictionary views for summary management
catsvrmg.sql - create the views and tables required for server manager
catblock.sql - creates the view blocking_locks, which shows which locks are blocking the system
catnoaud.sql - removes the database audit trail created by cataudit.sql, including its data and views
catnoprc.sql - removes data dictionary structures that were created by catprc.sql

Tuesday, June 24, 2008

One-way SSH with no password prompt

create RSA and DSA keys with no passphrase




Internal Error Code LibraryCacheNotEmptyOnClose

Found in

Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 10.2.0.4
on any platform.

Metalink note


Note:466425.1
Note:4483084.8
Note:365103.1

Signs

ORA-600 [LibraryCacheNotEmptyOnClose] , [], [], [] [], [], [], []


Bug

4483084

Fixed in

11.1.0.6 (Base Release)


Workaround

Ignore the error as it just indicates that there are some items in the library cache when closing down the instance. The error itself occurs AFTER the database close and dismount stages so only affects the instance shutdown itself. Datafiles have been closed cleanly.

It is intended that this will be fixed in 10.2.0.5 but this has not yet been confirmed.

Sunday, June 22, 2008

Converting from Standard to Enterprise Edition

1. Install enterprise edition software in a separate Oracle home.
2. Create a pfile of the existing database and move it to the enterprise home.
3. Move the password file to the enterprise home or create a new password file.
4. Backup the database.
5. Switch paths, SID, and other Oracle-home-related properties to the enterprise home.
6. Start up the database.
7. Run the catalog, catproc and utlrp scripts (see the sketch after this list).
8. De-install the standard edition software.
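A sketch of steps 6 and 7 when the versions are the same, run as SYSDBA from the new enterprise edition home:

SQL> startup
SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql
SQL> @?/rdbms/admin/utlrp.sql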

Database versions must be the same (i.e. if standard edition is on 10.2.0.3, enterprise edition should also be on 10.2.0.3).

If the versions are different, replace steps 6 and 7 with

6. startup the database in upgrade/migrate mode.
7. run catupgrd.sql script of the current patch set level
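
A minimal SQL*Plus sketch of steps 6 and 7 for the same-version case, run from the Enterprise Edition home (the ?/rdbms/admin notation resolves to $ORACLE_HOME/rdbms/admin):

sqlplus / as sysdba
SQL> startup
SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql
SQL> @?/rdbms/admin/utlrp.sql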

more on metalink Note:117048.1

The only way to properly convert from Enterprise Edition back to Standard Edition is through an Export/Import operation. This gets rid of all Enterprise Edition specific data dictionary objects, as the SYS schema objects are not exported. Using the Standard Edition EXP utility to export the data is preferred.

After the import into the Standard Edition database, you only need to drop the user schemas related to Enterprise Edition features, such as the MDSYS account (used with Oracle Spatial).

more on metalink Note:139642.1

Wednesday, June 18, 2008

Script for creating bonded interfaces

Usage:
bond.sh interface1 interface2 bondname IP broadcastIP netmask network mode link_monitoring_frequency(miimon)

./bond.sh eth5 eth6 bond0 192.168.0.100 192.168.0.255 255.255.255.0 192.168.0.0 6 100

Copy the text below to create the bond.sh script

#! /bin/sh

if [ $# -ne 9 ]; then
printf "Usage:\nbond.sh interface1 interface2 bondname IP boradcastIP netmask network mode link_monitoring_frequency(miimon)\n"
printf "eg: bond.sh eth5 eth6 bond0 192.168.0.100 192.168.0.255 255.255.255.0 192.168.0.0 6 100\n"
exit 0
fi

printf "\t\t*************** Bonded interface script ***************\n"
printf "\t\t*************** Author : Asanga Pradeep ***************\n"

if [ -f /etc/sysconfig/network-scripts/ifcfg-$1 ]; then

if [ -f /etc/sysconfig/network-scripts/ifcfg-$2 ]; then

mv /etc/sysconfig/network-scripts/ifcfg-$1 /root/ifcfg-$1
mv /etc/sysconfig/network-scripts/ifcfg-$2 /root/ifcfg-$2
echo "Original files moved to /root"
else
echo "No "$2" found"
exit 0
fi


else
echo "No "$1" found"
exit 0
fi

#creating bond ---------------------------------------------------------------

echo "creating bond " $3
touch /etc/sysconfig/network-scripts/ifcfg-$3

echo "DEVICE="$3 >> /etc/sysconfig/network-scripts/ifcfg-$3
echo "BOOTPROTO=static" >> /etc/sysconfig/network-scripts/ifcfg-$3
echo "BROADCAST="$5 >> /etc/sysconfig/network-scripts/ifcfg-$3
echo "IPADDR="$4 >> /etc/sysconfig/network-scripts/ifcfg-$3
echo "IPV6ADDR=" >> /etc/sysconfig/network-scripts/ifcfg-$3
echo "IPV6PREFIX=" >> /etc/sysconfig/network-scripts/ifcfg-$3
echo "IPV6_AUTOCONF=yes" >> /etc/sysconfig/network-scripts/ifcfg-$3
echo "NETMASK="$6 >> /etc/sysconfig/network-scripts/ifcfg-$3
echo "NETWORK="$7 >> /etc/sysconfig/network-scripts/ifcfg-$3
echo "ONBOOT=yes" >> /etc/sysconfig/network-scripts/ifcfg-$3

#creating first eth -------------------------------------------------------------------

echo "creating "$1

touch /etc/sysconfig/network-scripts/ifcfg-$1

echo "DEVICE="$1 >> /etc/sysconfig/network-scripts/ifcfg-$1
echo "BOOTPROTO=none" >> /etc/sysconfig/network-scripts/ifcfg-$1
echo "ONBOOT=yes" >> /etc/sysconfig/network-scripts/ifcfg-$1
echo "USERCTL=no" >> /etc/sysconfig/network-scripts/ifcfg-$1
echo "MASTER=$3" >> /etc/sysconfig/network-scripts/ifcfg-$1
echo "SLAVE=yes" >> /etc/sysconfig/network-scripts/ifcfg-$1
echo "MTU=9000" >> /etc/sysconfig/network-scripts/ifcfg-$1


#creating second eth -------------------------------------------------------------------

echo "creating "$2
touch /etc/sysconfig/network-scripts/ifcfg-$2

echo "DEVICE="$2 >> /etc/sysconfig/network-scripts/ifcfg-$2
echo "BOOTPROTO=none" >> /etc/sysconfig/network-scripts/ifcfg-$2
echo "ONBOOT=yes" >> /etc/sysconfig/network-scripts/ifcfg-$2
echo "USERCTL=no" >> /etc/sysconfig/network-scripts/ifcfg-$2
echo "MASTER=$3" >> /etc/sysconfig/network-scripts/ifcfg-$2
echo "SLAVE=yes" >> /etc/sysconfig/network-scripts/ifcfg-$2
echo "MTU=9000" >> /etc/sysconfig/network-scripts/ifcfg-$2

#editing modprobe.conf

if [ -f /etc/modprobe.conf ]; then

cp /etc/modprobe.conf /etc/modprobe.conf.bak
echo "Original file backup as /etc/modprobe.conf.bak"
echo "alias "$3" bonding" >> /etc/modprobe.conf
echo "options "$3" mode="$8" miimon="$9 >> /etc/modprobe.conf
/sbin/modprobe bonding
else
echo "NO /etc/modprobe.conf found"
fi
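
After the script completes, restarting the network service and checking the bonding driver's status file are quick ways to confirm that the bond came up; a minimal check, assuming RHEL and a bond named bond0:

service network restart
cat /proc/net/bonding/bond0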




Services to disable on Linux

on both RHEL 4 and RHEL 5

chkconfig sendmail off
chkconfig rhnsd off
chkconfig readahead_early off
chkconfig readahead_later off
chkconfig yum-updatesd off
chkconfig cups off
chkconfig iptables off
chkconfig bluetooth off
chkconfig isdn off
chkconfig kudzu off
chkconfig netfs off
chkconfig rpcgssd off
chkconfig rpcidmapd off
chkconfig auditd off
chkconfig avahi-daemon off
chkconfig firstboot off
chkconfig hidd off
chkconfig portmap off
chkconfig ip6tables off
chkconfig xend off
chkconfig xendomains off
chkconfig autofs off
chkconfig setroubleshoot off
chkconfig pcscd off
chkconfig nfslock off
chkconfig arptables_jf off
chkconfig cups-config-daemon off
chkconfig rawdevices off
chkconfig readahead off
chkconfig smartd off
chkconfig xinetd off
chkconfig xfs off
chkconfig pcmcia off


Wednesday, June 11, 2008

ORADEBUG

 oradebug help
HELP [command] Describe one or all commands
SETMYPID Debug current process
SETOSPID ospid Set OS pid of process to debug
SETORAPID orapid ['force'] Set Oracle pid of process to debug
SHORT_STACK Dump abridged OS stack
DUMP dump_name lvl [addr] Invoke named dump
DUMPSGA [bytes] Dump fixed SGA
DUMPLIST Print a list of available dumps
EVENT text Set trace event in process
SESSION_EVENT text Set trace event in session
DUMPVAR p|s|uga name [level] Print/dump a fixed PGA/SGA/UGA variable
DUMPTYPE address type count Print/dump an address with type info
SETVAR p|s|uga name value Modify a fixed PGA/SGA/UGA variable
PEEK addr len [level] Print/Dump memory
POKE addr len value Modify memory
WAKEUP orapid Wake up Oracle process
SUSPEND Suspend execution
RESUME Resume execution
FLUSH Flush pending writes to trace file
CLOSE_TRACE Close trace file
TRACEFILE_NAME Get name of trace file
LKDEBUG Invoke global enqueue service
SGATOFILE sga_dump_dir Dump SGA to file
DMPCOWSGA sga_dump_dir Dump & map SGA as COW
MAPCOWSGA sga_dump_dir Map SGA as COW
HANGANALYZE [level] [syslevel] Analyze system hang
FFBEGIN Flash Freeze the Instance
FFDEREGISTER FF deregister instance from cluster
FFTERMINST Call exit and terminate instance
FFRESUMEINST Resume the flash frozen instance
FFSTATUS Flash freeze status of instance
UNLIMIT Unlimit the size of the trace file
PROCSTAT Dump process statistics



Level 1 - Basic level of trace information.

Level 8 - Level 1 plus wait event information.

Level 12 - Level 1 plus both bind variable and wait event information.

Tracing a session

SQL> oradebug setmypid
SQL> alter session set tracefile_identifier='trace identifier';
SQL> oradebug tracefile_name (prints the trace file name and path; default location is user_dump_dest)
SQL> oradebug unlimit
SQL> oradebug event 10046 trace name context forever, level 12
SQL> sql statements to trace ......
SQL> sql statements to trace ......
SQL> sql statements to trace ......
SQL> oradebug event 10046 trace name context off
SQL> oradebug tracefile_name
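
The same event can be set in another session's server process by attaching to its OS pid instead of your own; a minimal sketch, assuming the target server process has OS pid 12345 (hypothetical):

SQL> oradebug setospid 12345
SQL> oradebug unlimit
SQL> oradebug event 10046 trace name context forever, level 12
... let the target session run its workload ...
SQL> oradebug event 10046 trace name context off
SQL> oradebug tracefile_name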


Monday, June 9, 2008

Transporting Database

1. Check if the database can be transported to the destination platform. Put the DB in read-only mode and use DBMS_TDB.CHECK_DB for this. It can be called without arguments to see if any condition at the source database prevents transport, or with one or both of the following arguments:
target_platform_name : The name of the destination platform, as it appears in V$DB_TRANSPORTABLE_PLATFORM.

skip_option : Specifies which, if any, parts of the database to skip when checking whether the database can be transported.
* SKIP_NONE (or 0), which checks all tablespaces
* SKIP_OFFLINE (or 2), which skips checking datafiles in offline tablespaces
* SKIP_READONLY (or 3), which skips checking datafiles in read-only tablespaces

If SERVEROUTPUT is ON, and DBMS_TDB.CHECK_DB returns FALSE, then the output includes the reason why the database cannot be transported.
If you call DBMS_TDB.CHECK_DB and no messages indicating conditions that prevent transport are displayed before the "PL/SQL procedure successfully completed" message, then your database is ready for transport.

set serveroutput on
declare
db_ready boolean;
begin
db_ready := dbms_tdb.check_db('Solaris Operating System (AMD64)',dbms_tdb.skip_readonly);
end;
/


2. Use DBMS_TDB.CHECK_EXTERNAL to identify any external tables, directories or BFILEs. RMAN cannot automate the transport of such files. DBMS_TDB.CHECK_EXTERNAL takes no parameters. With SERVEROUTPUT set to ON, the output of DBMS_TDB.CHECK_EXTERNAL lists the external tables, directories and BFILEs of your database.

set serveroutput on
declare
external boolean;
begin
external := dbms_tdb.check_external;
end;
/


3. To convert the database at the source
CONVERT DATABASE NEW DATABASE 'newdb' transport script '/home/oracle/dump/transport.sql' to platform 'Solaris Operating System (AMD64)' db_file_name_convert '/oracle/oradata/test','/home/oracle/dump';

This will generate datafiles at /home/oracle/dump ready for transport, a PFile at $ORACLE_HOME/dbs and a transport.sql script used to create the DB at the target.

4. Copy the datafiles, PFile and transport sql file to the target host.

5. At the target, edit the pfile to reflect the new directory structure; control_files, the *dump_dest parameters and the recovery area are some of the things to consider. If the pfile has been renamed from the original system-generated name, edit the transport SQL script to reflect the change.
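
For example, the pfile entries to review typically look like the following (paths and database name here are hypothetical):

*.control_files='/u01/oradata/newdb/control01.ctl','/u01/oradata/newdb/control02.ctl'
*.background_dump_dest='/u01/admin/newdb/bdump'
*.user_dump_dest='/u01/admin/newdb/udump'
*.core_dump_dest='/u01/admin/newdb/cdump'
*.db_recovery_file_dest='/u01/flash_recovery_area'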

6. Once all the changes are done, set the ORACLE_SID to the new DB and run the transport SQL script.

7. At the end of the script the DB is created and opened. Towards the end of the script utlirp.sql and utlrp.sql are run. If utlrp.sql gives errors, manually open the DB in upgrade mode and run utlirp.sql and utlrp.sql to recompile the packages.

8. Create a password file for the database.

9. To generate a new DBID for the database use the DBNEWID utility.
nid target=sys/newdb

10. After the new id is generated the database is shut down. Open it with resetlogs.
alter database open resetlogs;

11. Create new directory objects in the target
select directory_path from dba_directories;
update dba_directories set directory_path='new path' where directory_path='old path';
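
Updating dba_directories directly is not the documented approach; a safer sketch is to recreate each directory object with the new path (the directory name and path below are hypothetical):

create or replace directory DUMPDIR as '/u01/dump';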


Sunday, June 8, 2008

Transporting Tablespaces

1. Check the endian format of the source DB and the target DB.
To list all OS and their endian formats
SELECT * FROM V$TRANSPORTABLE_PLATFORM;

To find the endian format of the target and source DB
SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;

If the endian formats are different then the datafiles need to be converted to the target platform's format before transporting.

2. Check if the tablespaces to be transported are self-contained, with referential integrity constraints taken into consideration (indicated by TRUE).
exec DBMS_TTS.TRANSPORT_SET_CHECK('dumps',TRUE);

Any violations are listed in the TRANSPORT_SET_VIOLATIONS view.
SELECT * FROM TRANSPORT_SET_VIOLATIONS;

If the set of tablespaces is self-contained, this view is empty.

3. Generate a Transportable Tablespace Set with the following steps.

1. put the source tablespaces in read only mode.
alter tablespace dumps read only;

2. Invoke the Data Pump export utility on the host system and specify which tablespaces are in the transportable set.

expdp uname/pw directory=dumpdir dumpfile=tbs.dmp logfile=tbs.log TRANSPORT_TABLESPACES =dumps

3. Convert the tablespace for transport using RMAN if the endian formats are different. If compressed backupset is enabled, disable it before converting.

convert tablespace dumps to platform 'Linux 64-bit for AMD' format '/home/oracle/dump%U';



4. Transport the tablespace set which includes the datafile and the expdp dumpfile to the target DB host.

5. Import the tablespace metadata using impdp
impdp system/testdb directory=dumpdir dumpfile=tbs.dmp logfile=tbs.log TRANSPORT_DATAFILES=/oracle/oradata/test/dump.dbf


6. After the import the tablespace will be in read-only mode.
select tablespace_name,status from dba_tablespaces;
DUMPS READ ONLY


7. Make the tablespace read write if needed.
alter tablespace dumps read write;



Instead of CONVERT TABLESPACE, which does the conversion at the source DB, CONVERT DATAFILE can be used to do the conversion at the target DB.

All the steps are the same as above except that step 3.3 (convert tablespace ...) is omitted.

Instead, after the export, copy the datafiles (which are in read-only mode) to the target DB host. Before doing the import, use CONVERT DATAFILE:
convert datafile '/oracle/oradata/test/dump.dbf' from platform 'Solaris Operating System (AMD64)';



Restrictions on CONVERT TABLESPACE and CONVERT DATAFILE

* Both source and destination databases must be running with the COMPATIBLE initialization parameter set to 10.0 or higher.

* Not all combinations of source and destination platforms are supported. To determine whether your source and destination platforms are supported, query V$TRANSPORTABLE_PLATFORM. If both the source and destination platforms are listed in this view, then CONVERT can be used to prepare datafiles from one platform for use on the other.

* A tablespace must be made read-write at least once in release 10g before it can be transported to another platform using CONVERT. Hence, any read-only tablespaces (or previously transported tablespaces) from a previous release must first be made read-write before they can be transported to another platform.

* RMAN does not process user datatypes that require endian conversions.

* Prior to release 10g, CLOBs were created with a variable-width character set and stored in an endian-dependent format. The CONVERT command does not perform conversions on these CLOBs. Instead, RMAN captures the endian format of each LOB column and propagates it to the target database. Subsequent reads of this data by the SQL layer interpret the data correctly based upon either endian format, and write the data out in an endian-independent format if the tablespace is writeable.

CLOBs created in Oracle Database Release 10g are stored in character set AL16UTF16, which is platform-independent.

How Redo Send Works

ARCn Archival Processing

Archiving happens when a log switch occurs on the primary database:

* On the primary database, after one archival process (ARC0) has successfully archived the local online redo log to the local destination (LOG_ARCHIVE_DEST_1), another archival process (ARC1) process transmits redo from the local archived redo log files (instead of the online redo log files) to the remote standby destination (LOG_ARCHIVE_DEST_2).

* On the remote destination, the remote file server process (RFS) will, in turn, write the redo data to an archived redo log file from a standby redo log file. Log apply services use Redo Apply (managed recovery process - MRP ) or SQL Apply ( logical standby process - LSP uses parallel execution (Pnnn) processes) to apply the redo to the standby database.

Because the online redo log files are archived locally first, the LGWR process reuses the online redo log files much earlier than would be possible if the ARCn processes archived to the standby database concurrently with the local destination.

Using the Log Writer Process (LGWR) to Archive Redo Data

Using the LGWR process differs from ARCn processing because instead of waiting for the online redo log to switch at the primary database and then writing the entire archived redo log at the remote destination all at once, the LGWR process selects a standby redo log file at the standby site that reflects the log sequence number (and size) of the current online redo log file of the primary database. Then, as redo is generated at the primary database, it is also transmitted to the remote destination. The transmission to the remote destination will either be synchronous or asynchronous. Synchronous LGWR processing is required for the maximum protection and maximum availability modes of data protection in Data Guard configurations.

LGWR SYNC Archival Processing

The LGWR process synchronously transmits redo data to the standby system at the same time it writes redo data to the online redo log file on the primary database:

* On the primary database, the LGWR process submits the redo data to one or more network server (LGWR Network Server process LNSn) processes, which then initiate the network I/O in parallel to multiple remote destinations. Transactions are not committed on the primary database until the redo data necessary to recover the transaction is received by all LGWR SYNC destinations.

* On the standby system, the remote file server (RFS) receives redo data over the network from the LGWR process and writes the redo data to the standby redo log files.

A log switch on the primary database triggers a log switch on the standby database, causing ARCn processes on the standby database to archive the standby redo log files to archived redo log files on the standby database. Then, Redo Apply (MRP process) or SQL Apply (LSP process) applies the redo data to the standby database. If real-time apply is enabled, Data Guard recovers redo data directly from the current standby redo log file as it is being filled up by the RFS process.


LGWR ASYNC Archival Processing

When the LGWR and ASYNC attributes are specified, the log writer process writes to the local online redo log file, while the network server (LNSn) processes (one for each destination) asynchronously transmit the redo to remote destinations. The LGWR process continues processing the next request without waiting for the LNS network I/O to complete.

If redo transport services transmit redo data to multiple remote destinations, the LNSn processes (one for each destination) initiate the network I/O to all of the destinations in parallel.

When an online redo log file fills up, a log switch occurs and an archiver process archives the log file locally, as usual.
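
As an illustration only (the service name, DB_UNIQUE_NAME and destination number below are hypothetical), the remote destination attributes that select these two transport modes look like this:

-- LGWR SYNC (required for maximum protection / maximum availability)
alter system set log_archive_dest_2='SERVICE=standby_tns LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby' scope=both;

-- LGWR ASYNC (maximum performance)
alter system set log_archive_dest_2='SERVICE=standby_tns LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby' scope=both;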

Wednesday, June 4, 2008

aq_tm_processes Is Set To 0

Signs
After upgrading to 10.2.0.3 using DBUA the message "WARNING: AQ_TM_PROCESSES is set to 0" begins appearing in the alert log file.

DBUA has set the aq_tm_processes initialization parameter explicitly to zero.

Fix

In 10.2, it is recommended to leave the parameter aq_tm_processes unset and let the database autotune the parameter.

Setting the aq_tm_processes parameter explicitly to zero, which disables the queue monitor (QMN) time manager process, can disrupt the operation of the database because several system queue tables are used by standard database features.

You cannot determine if aq_tm_processes is set explicitly to zero just by querying v$parameter.

A check to see if the parameter is explicitly zero is:

connect / as sysdba

set serveroutput on

declare
mycheck number;
begin
select 1 into mycheck from v$parameter where name = 'aq_tm_processes' and value = '0'
and (ismodified <> 'FALSE' OR isdefault='FALSE');
if mycheck = 1 then
dbms_output.put_line('The parameter ''aq_tm_processes'' is explicitly set to 0!');
end if;
exception when no_data_found then
dbms_output.put_line('The parameter ''aq_tm_processes'' is not explicitly set to 0.');
end;
/

If it is set to zero, it is recommended to unset the parameter.
alter system reset aq_tm_processes scope=spfile sid='*';

However, this requires bouncing the database. If that is not possible, set a non-zero value instead:
alter system set aq_tm_processes = 1;


Tuesday, June 3, 2008

Moving Datafiles to new location

1. Make datafile backups as copy

run {
SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/tmp/%FU';
backup as copy datafile 1 format '/oracle/oradata/test/system01.dbf';
backup as copy datafile 2 format '/oracle/oradata/test/undotbs01.dbf';
backup as copy datafile 3 format '/oracle/oradata/test/sysaux01.dbf';
backup as copy datafile 4 format '/oracle/oradata/test/users01.dbf';
backup as copy datafile 5 format '/oracle/oradata/test/gravelsox.dbf';
}

2. shutdown immediate

3. startup mount

4. switch data file to copy

switch datafile 1 to copy;
switch datafile 2 to copy;
switch datafile 3 to copy;
switch datafile 4 to copy;
switch datafile 5 to copy;

5. recover the datafiles
recover datafile 1,2,3,4,5;

6. open the database
7. Add a new temp file in the new datafile location and drop the old tempfile
alter tablespace temp add tempfile.....
alter tablespace temp drop tempfile

8. delete old datafile copies
delete datafilecopy all;

The following SQL statements will be helpful in generating the above commands
select 'backup as copy datafile '||file#||' format '''||replace(name,'old_location','new_location')||''';' from v$datafile;

select 'switch datafile '||file#||' to copy;' from v$datafile;


The above method is useful even when datafiles reside in ASM. There is an easier way if operating-system-level file movement is possible.

1. Shutdown immediate;

2. copy the datafiles to new location using OS utilities (such as cp in linux)

3. startup mount;

4. switch the datafile locations
alter database rename file '/oradata/livestg/system01.dbf' to '/u1/oradata/livestg/system01.dbf';
alter database rename file '/oradata/livestg/undotbs01.dbf' to '/u1/oradata/livestg/undotbs01.dbf';
alter database rename file '/oradata/livestg/sysaux01.dbf' to '/u1/oradata/livestg/sysaux01.dbf';
alter database rename file '/oradata/livestg/users01.dbf' to '/u1/oradata/livestg/users01.dbf';


5. alter database open;

The following SQL statements will be useful in generating the above commands
select 'alter database rename file '''||name||''' to '''||replace(name,'oracle','u1')||''';' as sql from v$datafile;

select 'alter database rename file '''||name||''' to '''||replace(name,'oracle','u1')||''';' as sql from v$tempfile;


Remote Debugging with SQL Developer

1. Create a remote debugging session on SQL Developer.


2. Give a port and the IP of the machine, or localhost if the debugging client is also on the same machine.




3. Create breakpoints in the PL/SQL function, trigger, package, etc.

4. On the client session run
exec DBMS_DEBUG_JDWP.CONNECT_TCP( '192.168.0.124', 4000 );

Remember to change the port and IP accordingly. The user needs the
DEBUG CONNECT SESSION
privilege, so grant it to the user.
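
A minimal sketch of the grant, assuming the debugging user is scott (hypothetical):

grant debug connect session to scott;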

5. Execute the function, package or trigger event from the client session

6. Use SQL Developer to trace the execution path, monitor values, change values, etc.