Monday, January 31, 2011

Segment Creation Behavior Change Between 11.2.0.1 and 11.2.0.2 When Using EXPDP/IMPDP

In 11.2 the initialization parameter DEFERRED_SEGMENT_CREATION controls whether segment space is allocated at the time of segment creation. It is set to true by default on both enterprise edition and standard edition, but on standard edition the parameter has no effect as the feature is not enabled. The initialization parameter behavior can be overridden at the table level with
SQL> CREATE TABLE x(a number) SEGMENT CREATION IMMEDIATE; 
or
SQL> CREATE TABLE x(a number) SEGMENT CREATION DEFERRED;
But on standard edition this would still give an error.
SQL>  CREATE TABLE x(a number) SEGMENT CREATION DEFERRED;
CREATE TABLE x(a number) SEGMENT CREATION DEFERRED
*
ERROR at line 1:
ORA-00439: feature not enabled: Deferred Segment Creation
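The current setting can be checked with the standard SHOW PARAMETER command (output shown here is illustrative):
SQL> show parameter deferred_segment_creation

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----
deferred_segment_creation            boolean     TRUE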
The problem happens when exporting empty tables from an enterprise edition database and importing them into a standard edition database. With the default behavior on enterprise edition, tables with no rows have no segments. The default behavior on standard edition is that tables are always created with segments.

In 11.2.0.1, importing tables with no segments from enterprise to standard edition throws an error, while in 11.2.0.2 the tables get created with segments without an error.

Exporting from 11.2.0.1 Enterprise Edition to 11.2.0.1 Standard Edition

1. Create a test user on both databases (enterprise and standard editions)
SQL> create user test identified by test default tablespace users temporary tablespace temp quota unlimited on users;
2. Connect to the enterprise edition database and create two tables. Populate one with data and leave the other empty.
SQL> create table x (a number);
Table created.

SQL> create table y (a number);
Table created.

SQL> select table_name,segment_created from user_tables;

TABLE_NAME                     SEG
------------------------------ ---
X                              NO
Y                              NO

SQL> begin for i in 1 .. 100 loop insert into x values (i); end loop; end;
2  /
PL/SQL procedure successfully completed.

SQL> commit;
Commit complete.

SQL> select table_name,segment_created from user_tables;

TABLE_NAME                     SEG
------------------------------ ---
X                              YES
Y                              NO
As expected, the tables get created without any segment space allocated, but after inserting some rows the table gets space allocated.

3. Export these two tables using expdp
expdp test/test directory=dumpdir dumpfile=ent.dmp schemas=test flashback_time=\"$(date +%F' '%T)\"

Export: Release 11.2.0.1.0 - Production on Mon Jan 31 18:29:07 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "TEST"."SYS_EXPORT_SCHEMA_01":  test/******** directory=dumpdir dumpfile=ent.dmp schemas=test flashback_time="2011-01-31 18:29:07"
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
. . exported "TEST"."Y"                                      0 KB       0 rows
. . exported "TEST"."X"                                  5.679 KB     100 rows
Master table "TEST"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TEST.SYS_EXPORT_SCHEMA_01 is:
/home/oracle/dumpdir/ent.dmp
Job "TEST"."SYS_EXPORT_SCHEMA_01" successfully completed at 18:31:26
4. Import the same tables into the standard edition
impdp test/test directory=dumpdir dumpfile=ent.dmp full=y

Import: Release 11.2.0.1.0 - Production on Mon Jan 31 18:32:18 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
Master table "TEST"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "TEST"."SYS_IMPORT_FULL_01":  test/******** directory=dumpdir dumpfile=ent.dmp full=y
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"TEST" already exists
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39083: Object type TABLE:"TEST"."Y" failed to create with error:
ORA-00439: feature not enabled: Deferred Segment Creation
Failing sql is:
CREATE TABLE "TEST"."Y" ("A" NUMBER) SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING TABLESPACE "USERS"

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."X"                                  5.679 KB     100 rows
Job "TEST"."SYS_IMPORT_FULL_01" completed with 2 error(s) at 18:33:24
The import tries to create the tables using the same DDL as in the enterprise edition, without suppressing the SEGMENT CREATION DEFERRED clause, and gets an error.

Exporting from 11.2.0.2 Enterprise Edition to 11.2.0.2 Standard Edition

This erroneous behavior has changed in 11.2.0.2. Assuming the tables were created as above, export them from the enterprise edition.
expdp test/test directory=dumpdir dumpfile=ent.dmp schemas=test flashback_time=\"$(date +%F' '%T)\"

Export: Release 11.2.0.2.0 - Production on Mon Jan 31 13:29:46 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "TEST"."SYS_EXPORT_SCHEMA_01":  test/******** directory=dumpdir dumpfile=ent.dmp schemas=test flashback_time="2011-01-31 13:29:46"
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
. . exported "TEST"."X"                                  5.679 KB     100 rows
. . exported "TEST"."Y"                                      0 KB       0 rows
Master table "TEST"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TEST.SYS_EXPORT_SCHEMA_01 is:
/ssdoracle/dpdumps/ent.dmp
Job "TEST"."SYS_EXPORT_SCHEMA_01" successfully completed at 13:30:08
Import into the standard edition
impdp test/test directory=dumpdir dumpfile=ent.dmp full=y

Import: Release 11.2.0.2.0 - Production on Mon Jan 31 13:32:37 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Release 11.2.0.2.0 - 64bit Production
Master table "TEST"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "TEST"."SYS_IMPORT_FULL_01":  test/******** directory=dumpdir dumpfile=ent.dmp full=y
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"TEST" already exists
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."X"                                  5.679 KB     100 rows
. . imported "TEST"."Y"                                      0 KB       0 rows
Job "TEST"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 13:32:39
Apart from the expected ORA-31684 (the user already exists), there are no errors; the import goes through and all the tables get created with segment space.
SQL> select table_name,segment_created from user_tables;

TABLE_NAME                     SEG
------------------------------ ---
X                              YES
Y                              YES
For 11.2.0.1, one way to avoid the issue is to allocate segments manually before the export. This can be done with
SQL> alter table y allocate extent;
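A minimal PL/SQL sketch to do this for every segment-less table in the schema, driven by the same SEGMENT_CREATED column queried earlier:
begin
  -- allocate an extent for every table that does not yet have a segment
  for t in (select table_name from user_tables where segment_created = 'NO') loop
    execute immediate 'alter table "' || t.table_name || '" allocate extent';
  end loop;
end;
/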
Metalink notes for similar issues

1. Init.ora Parameter "DEFERRED_SEGMENT_CREATION" Reference Note [ID 1216282.1]
2. IMP-00003, ORA-00959 While Importing Data Into Existing Table Of 11gR2 Using Traditional Import [ID 1180873.1]
3. Bug 8795792 - DBMS_METADATA.get_ddl generates wrong keywords for compressed indexes in 11.2 [ID 8795792.8]
4. Exporting Schema In 11.2 Database With 10g Client Gives ORA-1455 [ID 1083330.1]

Thursday, January 27, 2011

Moving Datafiles Between ASM Disk Groups

This blog post came about as a result of a situation in a production system. The ASM disks in the DATA diskgroup were full (should have been monitored) and the data files had nowhere to extend. The only space left was in the FLASH diskgroup. To keep the system running, an additional datafile was created for the tablespace in that location. Later, when new ASM disk space was added to the DATA diskgroup, the datafile created in the FLASH diskgroup was moved to the DATA diskgroup.

First, the running-out-of-space situation is recreated.

1. Create a tablespace with a few megabytes and grant the user quota on it
create tablespace test datafile '+DATA(datafile)' SIZE 2m autoextend on next  1m maxsize 5m;
2. Create a table in that tablespace
create table fult (a char(2000),b char(2000), c char(2000), d char(2000)) tablespace test;
3. Run the following PL/SQL code. The tablespace created above ran out of space after inserting 248 rows; inserting the 249th row raised an error.
begin
for i in 1 .. 248
loop
insert into fult values ('a'||i,'a'||i,'a'||i,'a'||i);
end loop;
end;
/
Inserting the 249th row:
insert into fult values ('a','a','a','a')

insert into fult values ('a','a','a','a')
*
ERROR at line 1:
ORA-01653: unable to extend table ASANGA.FULT by 128 in tablespace TEST
4. Add another datafile to the tablespace (assuming the DATA diskgroup is full)
 alter tablespace test add datafile '+flash(datafile)' SIZE 5m autoextend on next 1m maxsize 10m;
After this step it will be possible to continue to add more rows.
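Free space in the diskgroups can be checked from the database before deciding where to place the new datafile (standard V$ASM_DISKGROUP columns):
SQL> select name, total_mb, free_mb from v$asm_diskgroup;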

Once new ASM disks are added, move the newly created datafile from the FLASH diskgroup to the DATA diskgroup.

5. List the current datafiles in RMAN
RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name RAC11G

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    730      SYSTEM               ***     +DATA/rac11g/datafile/system.256.720805879
2    980      SYSAUX               ***     +DATA/rac11g/datafile/sysaux.257.720805881
3    3365     UNDOTBS1             ***     +DATA/rac11g/datafile/undotbs1.258.720805881
4    897      USERS                ***     +DATA/rac11g/datafile/users.259.720805881
5    100      REDUNTBS             ***     +DATA/rac11g/datafile/reduntbs.334.727582971
6    100      ASMBKP               ***     +DATA/rac11g/datafile/asmbkp.479.729141249
7    100      MSSM                 ***     +DATA/rac11g/datafile/mssm.520.729490583
8    5        TEST                 ***     +DATA/rac11g/datafile/test.618.741548757
9    7        TEST                 ***     +FLASH/rac11g/datafile/test.266.741549549
6. Take the datafile in the FLASH diskgroup offline
SQL> alter database datafile '+FLASH/rac11g/datafile/test.266.741549549' offline;
Database altered.
New rows cannot be inserted while the datafile is offline
SQL> insert into fult values ('a','a','a','a');
insert into fult values ('a','a','a','a')
*
ERROR at line 1:
ORA-00376: file 7 cannot be read at this time
ORA-01110: data file 9: '+FLASH/rac11g/datafile/test.266.741549549'
7. Create a copy of the datafile
RMAN> copy datafile '+FLASH/rac11g/datafile/test.266.741549549' to '+DATA';

Starting backup at 27-Jan-2011 18:02:55
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=138 instance=rac11g1 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00009 name=+FLASH/rac11g/datafile/test.266.741549549
output file name=+DATA/rac11g/datafile/test.609.741549777 tag=TAG20110127T180255 RECID=1 STAMP=741549777
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 27-Jan-2011 18:02:57
8. Rename the datafile to the newly created copy
SQL> alter database rename file '+FLASH/rac11g/datafile/test.266.741549549' to '+DATA/rac11g/datafile/test.609.741549777';
Database altered.
9. Switch to the copy and recover the datafile
RMAN> switch datafile '+DATA/rac11g/datafile/test.609.741549777' to copy;

datafile 9 switched to datafile copy "+DATA/rac11g/datafile/test.609.741549777"

RMAN> recover datafile 9;
10. Bring the datafile online
SQL> alter database datafile '+DATA/rac11g/datafile/test.609.741549777' online;
Database altered.
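Optionally, verify the file status against V$DATAFILE (file 9 as listed by RMAN above; STATUS should now show ONLINE):
SQL> select file#, status, name from v$datafile where file# = 9;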
11. After this, new rows can be inserted
SQL>  insert into fult values ('a','a','a','a');

1 row created.
12. The old datafile (the one created in the FLASH diskgroup) is no longer part of the tablespace.
RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name RAC11G

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    730      SYSTEM               ***     +DATA/rac11g/datafile/system.256.720805879
2    980      SYSAUX               ***     +DATA/rac11g/datafile/sysaux.257.720805881
3    3365     UNDOTBS1             ***     +DATA/rac11g/datafile/undotbs1.258.720805881
4    897      USERS                ***     +DATA/rac11g/datafile/users.259.720805881
5    100      REDUNTBS             ***     +DATA/rac11g/datafile/reduntbs.334.727582971
6    100      ASMBKP               ***     +DATA/rac11g/datafile/asmbkp.479.729141249
7    100      MSSM                 ***     +DATA/rac11g/datafile/mssm.520.729490583
8    5        TEST                 ***     +DATA/rac11g/datafile/test.618.741548757
9    7        TEST                 ***     +DATA/rac11g/datafile/test.609.741549777


SQL> select d.name dname,t.name tname from v$datafile d, v$tablespace t where d.ts#=t.ts#;
Datafile Name Tablespace
--------------------------------------------- -----------
+DATA/rac11g/datafile/system.256.720805879 SYSTEM
+DATA/rac11g/datafile/sysaux.257.720805881 SYSAUX
+DATA/rac11g/datafile/undotbs1.258.720805881 UNDOTBS1
+DATA/rac11g/datafile/users.259.720805881 USERS
+DATA/rac11g/datafile/reduntbs.334.727582971 REDUNTBS
+DATA/rac11g/datafile/asmbkp.479.729141249 ASMBKP
+DATA/rac11g/datafile/mssm.520.729490583 MSSM
+DATA/rac11g/datafile/test.618.741548757 TEST
+DATA/rac11g/datafile/test.609.741549777 TEST
The datafile in the FLASH diskgroup is automatically removed.
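This can also be confirmed with asmcmd; the test datafile should no longer be listed under the FLASH diskgroup (output will vary):
ASMCMD> ls -l +FLASH/rac11g/datafile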

Tuesday, January 25, 2011

Roll Forward a Physical Standby 10gR2

This is a follow-up to the 11gR1 physical standby roll forward. The 10gR2 physical standby roll forward documentation has changed. In the old documentation, when there was a huge gap or archive logs were missing and could not be applied manually, creating an incremental backup and doing a database recovery with noredo was sufficient to get the standby up to date.
There was no mention of restoring the controlfile. (Source: Oracle Data Guard Concepts and Administration 10g Release 2 (10.2), B14239-04, March 2006.) If memory is correct, this was even used to bring a production physical standby up to date (10.2.0.3).
But it appears this behavior has changed, and according to the new documentation a few additional steps are required to bring the physical standby up to date.

1. It is assumed there's a working standby configuration and archive logs get applied without a problem
SQL> select sequence#,applied from v$archived_log;

SEQUENCE# APP
---------- ---
87 YES
88 YES
89 YES
90 YES
91 YES
92 YES
2. Shut down the standby database and listener, run a few log switches on the primary, and then delete the archive logs to create an archive log gap between the standby and the primary.
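A sketch of the commands involved (the deleted sequence range 93-100 is inferred from the gap visible in the listing below):
SQL> shutdown immediate;            -- on the standby
SQL> alter system switch logfile;   -- on the primary, repeated a few times
RMAN> delete noprompt archivelog from sequence 93 until sequence 100;  -- on the primary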
SEQUENCE# APP
---------- ---
87 YES
88 YES
89 YES
90 YES
91 YES
92 YES
101 NO
103 NO
102 NO
3. Get the current SCN on the standby
SQL> SELECT CURRENT_SCN FROM V$DATABASE;

CURRENT_SCN
-----------
1569995
4. Create an incremental backup on the primary starting from the SCN obtained above
RMAN> run {
2> allocate channel d1 device type disk format '/tmp/backup2%U';
3> allocate channel d2 device type disk format '/tmp/backup2%U';
4> BACKUP INCREMENTAL FROM SCN 1569995 database tag 'forstandby';
5> }

using target database control file instead of recovery catalog
allocated channel: d1
channel d1: sid=1080 devtype=DISK

allocated channel: d2
channel d2: sid=1064 devtype=DISK

Starting backup at 25-Jan-2011 10:47:19
channel d1: starting full datafile backupset
channel d1: specifying datafile(s) in backupset
input datafile fno=00002 name=/data/oradata/ENT10G2/datafile/o1_mf_undotbs1_5q87v6wj_.dbf
input datafile fno=00003 name=/data/oradata/ENT10G2/datafile/o1_mf_sysaux_5q87v6v9_.dbf
channel d1: starting piece 1 at 25-Jan-2011 10:47:19
channel d2: starting full datafile backupset
channel d2: specifying datafile(s) in backupset
input datafile fno=00001 name=/data/oradata/ENT10G2/datafile/o1_mf_system_5q87v6v6_.dbf
input datafile fno=00005 name=/data/oradata/ENT10G2/datafile/o1_mf_example_5q87wcj2_.dbf
input datafile fno=00004 name=/data/oradata/ENT10G2/datafile/o1_mf_users_5q87v6wn_.dbf
channel d2: starting piece 1 at 25-Jan-2011 10:47:19
channel d1: finished piece 1 at 25-Jan-2011 10:47:20
piece handle=/tmp/backup21rm307dn_1_1 tag=FORSTANDBY comment=NONE
channel d1: backup set complete, elapsed time: 00:00:01
channel d1: starting full datafile backupset
channel d1: specifying datafile(s) in backupset
including current control file in backupset
channel d1: starting piece 1 at 25-Jan-2011 10:47:20
channel d2: finished piece 1 at 25-Jan-2011 10:47:20
piece handle=/tmp/backup21sm307dn_1_1 tag=FORSTANDBY comment=NONE
channel d2: backup set complete, elapsed time: 00:00:01
channel d2: starting full datafile backupset
channel d2: specifying datafile(s) in backupset
channel d1: finished piece 1 at 25-Jan-2011 10:47:20
piece handle=/tmp/backup21tm307do_1_1 tag=FORSTANDBY comment=NONE
channel d1: backup set complete, elapsed time: 00:00:00
including current SPFILE in backupset
channel d2: starting piece 1 at 25-Jan-2011 10:47:20
channel d2: finished piece 1 at 25-Jan-2011 10:47:21
piece handle=/tmp/backup21um307do_1_1 tag=FORSTANDBY comment=NONE
channel d2: backup set complete, elapsed time: 00:00:01
Finished backup at 25-Jan-2011 10:47:21
released channel: d1
released channel: d2
5. Copy the backup files to the standby and catalog them
RMAN> CATALOG START WITH  '/tmp/backup2';

using target database control file instead of recovery catalog
searching for all files that match the pattern /tmp/backup2

List of Files Unknown to the Database
=====================================
File Name: /tmp/backup21um307do_1_1
File Name: /tmp/backup215m2mc3j_1_1
File Name: /tmp/backup21rm307dn_1_1
File Name: /tmp/backup21tm307do_1_1
File Name: /tmp/backup216m2mc3q_1_1
File Name: /tmp/backup21sm307dn_1_1

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /tmp/backup21um307do_1_1
File Name: /tmp/backup215m2mc3j_1_1
File Name: /tmp/backup21rm307dn_1_1
File Name: /tmp/backup21tm307do_1_1
File Name: /tmp/backup216m2mc3q_1_1
File Name: /tmp/backup21sm307dn_1_1
6. Recover the standby database with the noredo option
RMAN> recover database noredo;

Starting recover at 25-JAN-11
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=1080 devtype=DISK
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00002: /data/oradata/STDBY/datafile/o1_mf_undotbs1_6mx9srfw_.dbf
destination for restore of datafile 00003: /data/oradata/STDBY/datafile/o1_mf_sysaux_6mx9srh8_.dbf
channel ORA_DISK_1: reading from backup piece /tmp/backup21rm307dn_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/tmp/backup21rm307dn_1_1 tag=FORSTANDBY
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /data/oradata/STDBY/datafile/o1_mf_system_6mx9srfz_.dbf
destination for restore of datafile 00004: /data/oradata/STDBY/datafile/o1_mf_users_6mx9srk6_.dbf
destination for restore of datafile 00005: /data/oradata/STDBY/datafile/o1_mf_example_6mx9srj6_.dbf
channel ORA_DISK_1: reading from backup piece /tmp/backup21sm307dn_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/tmp/backup21sm307dn_1_1 tag=FORSTANDBY
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

Finished recover at 25-JAN-11
7. According to the old Data Guard documents this is the last step: you could start log apply and the Data Guard configuration would start to work again. This was tested, but it appears it no longer works (DB version 10.2.0.5). To test, enable log apply on the standby.
SQL> alter database recover managed standby database using current logfile disconnect;

Database altered.
Run a few log switches on the primary and observe the archive log apply status
 SEQUENCE# APP
---------- ---
87 YES
88 YES
89 YES
90 YES
91 YES
92 YES
101 NO
103 NO
102 NO
104 NO
105 NO
Even though the logs are shipped, they do not get applied.
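One way to see what the apply process is doing at this point is to check V$MANAGED_STANDBY on the standby (a diagnostic sketch; values will vary):
SQL> select process, status, sequence# from v$managed_standby;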

8. The rest of the steps in the new document were carried out after this. Cancel the log apply on the standby and create a standby controlfile on the primary. On the standby:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
On the primary:
RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/tmp/ForStandbyCTRL.bck';

Starting backup at 25-Jan-2011 10:51:09
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=1074 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including standby control file in backupset
channel ORA_DISK_1: starting piece 1 at 25-Jan-2011 10:51:09
channel ORA_DISK_1: finished piece 1 at 25-Jan-2011 10:51:10
piece handle=/tmp/ForStandbyCTRL.bck tag=TAG20110125T105109 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 25-Jan-2011 10:51:10
9. Copy the controlfile created above to the standby, restore it, and mount the database
SQL> shutdown immediate;
SQL> startup nomount;
RMAN> RESTORE STANDBY CONTROLFILE FROM '/tmp/ForStandbyCTRL.bck';

Starting restore at 25-JAN-11
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=1091 devtype=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output filename=/data/flash/STDBY/controlfile/o1_mf_5q87w5tj.ctl
output filename=/data/oradata/STDBY/controlfile/o1_mf_5q87w5ts.ctl
Finished restore at 25-JAN-11

RMAN> alter database mount;
10. Catalog the datafiles and switch the standby database to the newly cataloged datafiles
RMAN> CATALOG START WITH '/data/oradata/STDBY/datafile';

Starting implicit crosscheck backup at 25-JAN-11
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=1091 devtype=DISK
Crosschecked 6 objects
Finished implicit crosscheck backup at 25-JAN-11

Starting implicit crosscheck copy at 25-JAN-11
using channel ORA_DISK_1
Finished implicit crosscheck copy at 25-JAN-11

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /data/flash/STDBY/archivelog/2011_01_25/o1_mf_1_92_6mxcbrk0_.arc
File Name: /data/flash/STDBY/archivelog/2011_01_25/o1_mf_1_104_6mxd4gl9_.arc
File Name: /data/flash/STDBY/archivelog/2011_01_25/o1_mf_1_103_6mxcx355_.arc
File Name: /data/flash/STDBY/archivelog/2011_01_25/o1_mf_1_91_6mxc8wj0_.arc
File Name: /data/flash/STDBY/archivelog/2011_01_25/o1_mf_1_101_6mxcqwqp_.arc
File Name: /data/flash/STDBY/archivelog/2011_01_25/o1_mf_1_102_6mxcx34s_.arc
File Name: /data/flash/STDBY/archivelog/2011_01_25/o1_mf_1_105_6mxd4gnv_.arc
File Name: /data/flash/STDBY/control01.ctl
File Name: /data/flash/STDBY/backupset/2011_01_25/o1_mf_ncnnf_TAG20110125T110046_6mxccg2q_.bkp
File Name: /data/flash/STDBY/backupset/2011_01_25/o1_mf_nnndf_TAG20110125T110046_6mxcbytb_.bkp
File Name: /data/flash/STDBY/backupset/2011_01_25/o1_mf_annnn_TAG20110125T110044_6mxcbxk8_.bkp
File Name: /data/flash/STDBY/backupset/2011_01_25/o1_mf_ncnnf_TAG20110125T105531_6mxc142l_.bkp

searching for all files that match the pattern /data/oradata/STDBY/datafile

List of Files Unknown to the Database
=====================================
File Name: /data/oradata/STDBY/datafile/o1_mf_example_6mx9srj6_.dbf
File Name: /data/oradata/STDBY/datafile/o1_mf_users_6mx9srk6_.dbf
File Name: /data/oradata/STDBY/datafile/o1_mf_sysaux_6mx9srh8_.dbf
File Name: /data/oradata/STDBY/datafile/o1_mf_undotbs1_6mx9srfw_.dbf
File Name: /data/oradata/STDBY/datafile/o1_mf_system_6mx9srfz_.dbf

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /data/oradata/STDBY/datafile/o1_mf_example_6mx9srj6_.dbf
File Name: /data/oradata/STDBY/datafile/o1_mf_users_6mx9srk6_.dbf
File Name: /data/oradata/STDBY/datafile/o1_mf_sysaux_6mx9srh8_.dbf
File Name: /data/oradata/STDBY/datafile/o1_mf_undotbs1_6mx9srfw_.dbf
File Name: /data/oradata/STDBY/datafile/o1_mf_system_6mx9srfz_.dbf

RMAN> SWITCH DATABASE TO COPY;

datafile 1 switched to datafile copy "/data/oradata/STDBY/datafile/o1_mf_system_6mx9srfz_.dbf"
datafile 2 switched to datafile copy "/data/oradata/STDBY/datafile/o1_mf_undotbs1_6mx9srfw_.dbf"
datafile 3 switched to datafile copy "/data/oradata/STDBY/datafile/o1_mf_sysaux_6mx9srh8_.dbf"
datafile 4 switched to datafile copy "/data/oradata/STDBY/datafile/o1_mf_users_6mx9srk6_.dbf"
datafile 5 switched to datafile copy "/data/oradata/STDBY/datafile/o1_mf_example_6mx9srj6_.dbf"
11. Clear the standby redo log files
SQL> alter database clear logfile group 1;
Database altered.

SQL> alter database clear logfile group 2;
Database altered.

SQL> alter database clear logfile group 3;
Database altered.
12. Clear the flashback logs by turning flashback off and on
SQL> ALTER DATABASE FLASHBACK OFF;
Database altered.

SQL> ALTER DATABASE FLASHBACK ON;
Database altered.
13. Start the log apply and observe the archive log apply status
SQL> alter database recover managed standby database using current logfile disconnect;

Database altered.

SQL> select sequence#,applied from v$archived_log order by 1 desc;

SEQUENCE# APP
---------- ---
105 YES
104 YES
103 YES
102 NO
101 NO
92 NO
91 NO
14. After a few log switches on the primary
SQL> alter system archive log current;

System altered.

SEQUENCE# APP
---------- ---
107 YES
106 YES
105 YES
104 YES
103 YES
102 NO
101 NO
92 NO
91 NO
The standby is up to date with the primary.

Friday, January 21, 2011

Roll Forward a Physical Standby on 11gR1

Incremental backups can be used to roll forward a physical standby that lags far behind the primary or has archive log gaps that cannot be resolved manually (the archive logs may have already been deleted). The steps for this are documented in Oracle Data Guard Concepts and Administration, but they don't work exactly as written. These steps are also a bit different from the original 10gR2 documentation, which only specified taking an incremental backup and doing a noredo recovery; there was no need to restore the controlfile. But the 11gR1 documents, as well as the current 10gR2 documents, mention restoring the standby controlfile.

It is assumed there's a valid Data Guard configuration and it's functioning properly. A missing archive log situation is then created by shutting down the standby DB, deferring the log shipping, and deleting the logs created in the meantime using RMAN.
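Deferring the log shipping on the primary is typically done by setting the state of the standby archive destination (assuming here the standby is archive destination 2):
SQL> alter system set log_archive_dest_state_2=defer;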

1. Once the standby is started after this and log shipping is re-enabled, expect to see an archive log gap.
SEQUENCE# APPLIED
---------- ---------
1187 NO
1179 YES
1178 YES
1177 YES
1176 YES
1175 YES
1174 YES
1173 YES
1172 YES

SQL> select * from v$archive_gap;

THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
---------- ------------- --------------
1 1179 1186
2. Cancel the log apply service and get the current SCN of the standby database.
SQL>  alter database recover managed standby database cancel;

Database altered.

SQL> SELECT CURRENT_SCN FROM V$DATABASE;

CURRENT_SCN
-----------
12197632
3. Create an incremental backup on the primary starting from the SCN obtained above. It helps to have automatic controlfile backup enabled at this stage.
RMAN> BACKUP INCREMENTAL FROM SCN 12197632 DATABASE FORMAT '/home/oracle/forstandby%U' tag 'forstandby';

Starting backup at 21-Jan-2011 14:45:59

using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1056 device type=DISK
backup will be obsolete on date 28-Jan-2011 14:46:00
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/data/oradata/ENT11G1/datafile/o1_mf_ravelbo_5zb2lltr_.dbf
input datafile file number=00007 name=/data/oradata/ENT11G1/datafile/o1_mf_bxindex_5zb2lzvc_.dbf
input datafile file number=00008 name=/data/oradata/ENT11G1/datafile/o1_mf_bxlobs_5zb2mf60_.dbf
input datafile file number=00002 name=/data/oradata/ENT11G1/datafile/o1_mf_sysaux_5q7x746p_.dbf
input datafile file number=00001 name=/data/oradata/ENT11G1/datafile/o1_mf_system_5q7x745y_.dbf
input datafile file number=00003 name=/data/oradata/ENT11G1/datafile/o1_mf_undotbs1_5q7x746t_.dbf
input datafile file number=00004 name=/data/oradata/ENT11G1/datafile/o1_mf_users_5q7x747g_.dbf
input datafile file number=00005 name=/data/oradata/ENT11G1/datafile/o1_mf_example_5q7xg0kk_.dbf
channel ORA_DISK_1: starting piece 1 at 21-Jan-2011 14:46:00
channel ORA_DISK_1: finished piece 1 at 21-Jan-2011 14:47:45
piece handle=/home/oracle/forstandby12m2m3t8_1_1 tag=FORSTANDBY comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:45
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00009 name=/data/oradata/ENT11G1/datafile/o1_mf_tbx2ktbs_5zb2t0jn_.dbf
channel ORA_DISK_1: starting piece 1 at 21-Jan-2011 14:47:45
channel ORA_DISK_1: finished piece 1 at 21-Jan-2011 14:47:48
piece handle=/home/oracle/forstandby13m2m40h_1_1 tag=FORSTANDBY comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03

using channel ORA_DISK_1
backup will be obsolete on date 28-Jan-2011 14:47:48
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 21-Jan-2011 14:47:49
channel ORA_DISK_1: finished piece 1 at 21-Jan-2011 14:47:50
piece handle=/home/oracle/forstandby14m2m40k_1_1 tag=FORSTANDBY comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 21-Jan-2011 14:47:50
4. Copy the incremental backup files from the primary DB server to the standby DB server and catalog them by running the following command on the standby.
RMAN> catalog start with '/home/oracle/forstand';

Starting implicit crosscheck backup at 21-JAN-11
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1069 device type=DISK
Finished implicit crosscheck backup at 21-JAN-11

Starting implicit crosscheck copy at 21-JAN-11
using channel ORA_DISK_1
Crosschecked 2 objects
Finished implicit crosscheck copy at 21-JAN-11

searching for all files in the recovery area
cataloging files...
no files cataloged

searching for all files that match the pattern /home/oracle/forstand

List of Files Unknown to the Database
=====================================
File Name: /home/oracle/forstandby14m2m40k_1_1
File Name: /home/oracle/forstandby13m2m40h_1_1
File Name: /home/oracle/forstandby12m2m3t8_1_1

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /home/oracle/forstandby14m2m40k_1_1
File Name: /home/oracle/forstandby13m2m40h_1_1
File Name: /home/oracle/forstandby12m2m3t8_1_1
5. Start the standby database in nomount mode
RMAN> startup force nomount;

Oracle instance started

Total System Global Area 4275781632 bytes

Fixed Size 2166536 bytes
Variable Size 788529400 bytes
Database Buffers 3472883712 bytes
Redo Buffers 12201984 bytes
6. The next step in the Oracle documentation is to restore the standby controlfile, but this will not work (unless connected to a recovery catalog)
RMAN> RESTORE STANDBY CONTROLFILE FROM TAG 'forstandby';

Starting restore at 21-JAN-11
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1088 device type=DISK

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 01/21/2011 15:13:58
RMAN-06563: control file or SPFILE must be restored using FROM AUTOBACKUP
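One way to identify which backup piece carries the controlfile is the standard LIST command:
RMAN> list backup of controlfile;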
7. Restore the standby controlfile from the incremental backup piece identified earlier as the one carrying the controlfile backup
RMAN> restore standby controlfile from '/home/oracle/forstandby14m2m40k_1_1';

Starting restore at 21-JAN-11
using channel ORA_DISK_1

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/data/oradata/STDBY/controlfile/control01.ctl
Finished restore at 21-JAN-11
8. Mount the database and run the recover noredo command. With OMF or ASM usage it might be necessary to rename the datafiles using alter database rename file ...
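For example (hypothetical file names, shown only to illustrate the syntax):
SQL> alter database rename file '/data/oradata/PRIMARY/datafile/system01.dbf' to '/data/oradata/STDBY/datafile/system01.dbf';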
RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN> recover database noredo;

Starting recover at 21-JAN-11
Starting implicit crosscheck backup at 21-JAN-11
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1075 device type=DISK
Crosschecked 2 objects
Finished implicit crosscheck backup at 21-JAN-11

Starting implicit crosscheck copy at 21-JAN-11
using channel ORA_DISK_1
Crosschecked 2 objects
Finished implicit crosscheck copy at 21-JAN-11

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /data/flash/STDBY/archivelog/2011_01_21/o1_mf_1_1176_6mm828b5_.arc
File Name: /data/flash/STDBY/archivelog/2011_01_21/o1_mf_1_1179_6mm829sr_.arc
File Name: /data/flash/STDBY/archivelog/2011_01_21/o1_mf_1_1178_6mm8293o_.arc
File Name: /data/flash/STDBY/archivelog/2011_01_21/o1_mf_1_1172_6mm7rk5o_.arc
File Name: /data/flash/STDBY/archivelog/2011_01_21/o1_mf_1_1173_6mm7rkhd_.arc
File Name: /data/flash/STDBY/archivelog/2011_01_21/o1_mf_1_1174_6mm7rp6j_.arc
File Name: /data/flash/STDBY/archivelog/2011_01_21/o1_mf_1_1187_6mm8d368_.arc
File Name: /data/flash/STDBY/archivelog/2011_01_21/o1_mf_1_1175_6mm827ww_.arc
File Name: /data/flash/STDBY/archivelog/2011_01_21/o1_mf_1_1177_6mm828oz_.arc
File Name: /data/flash/STDBY/archivelog/2011_01_21/o1_mf_1_1186_6mm8f5jb_.arc

using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /data/oradata/STDBY/datafile/o1_mf_system_5q7x745y_.dbf
destination for restore of datafile 00002: /data/oradata/STDBY/datafile/o1_mf_sysaux_5q7x746p_.dbf
destination for restore of datafile 00003: /data/oradata/STDBY/datafile/o1_mf_undotbs1_5q7x746t_.dbf
destination for restore of datafile 00004: /data/oradata/STDBY/datafile/o1_mf_users_5q7x747g_.dbf
destination for restore of datafile 00005: /data/oradata/STDBY/datafile/o1_mf_example_5q7xg0kk_.dbf
destination for restore of datafile 00006: /data/oradata/STDBY/datafile/o1_mf_ravelbo_5zb2lltr_.dbf
destination for restore of datafile 00007: /data/oradata/STDBY/datafile/o1_mf_bxindex_5zb2lzvc_.dbf
destination for restore of datafile 00008: /data/oradata/STDBY/datafile/o1_mf_bxlobs_5zb2mf60_.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/forstandby12m2m3t8_1_1
channel ORA_DISK_1: piece handle=/home/oracle/forstandby12m2m3t8_1_1 tag=FORSTANDBY
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00009: /data/oradata/STDBY/datafile/o1_mf_tbx2ktbs_5zb2t0jn_.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/forstandby13m2m40h_1_1
channel ORA_DISK_1: piece handle=/home/oracle/forstandby13m2m40h_1_1 tag=FORSTANDBY
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished recover at 21-JAN-11
9. At the end of the recovery, enable log apply and check the apply status of the archive logs
SQL> alter database recover managed standby database using current logfile disconnect;

Database altered.

SQL> select sequence#,applied from v$archived_log order by 1 desc;

SEQUENCE# APPLIED
---------- ---------
1188 IN-MEMORY
1187 NO
1186 NO
1179 NO
1178 NO
1177 NO
1176 NO
1175 NO
1174 NO
1173 NO
1172 NO
10. Run several log switches on the primary and check the log apply status on the standby to confirm the logs are applied
SQL> alter system switch logfile;

System altered.

SQL> select sequence#,applied from v$archived_log order by 1 desc;

SEQUENCE# APPLIED
---------- ---------
1193 IN-MEMORY
1192 YES
1191 YES
1190 YES
1189 YES
1188 YES
1187 NO
1186 NO
1179 NO
1178 NO
1177 NO
More in Metalink note: Roll Forward a Physical Standby Database Using RMAN Incremental Backups "Recover Database Noredo" Does Nothing [ID 841765.1]

Wednesday, January 5, 2011

EXPDP, IMPDP with Network Link

The network link option can be used in Data Pump export/import (expdp/impdp) to get data from a remote database server. In the case of impdp it can be used to import directly from DB to DB without an intermediary dump file.

1. Create a TNS entry on the local database tnsnames.ora file for the remote database and create a database link using the TNS entry.
REMOTELOC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = remote-loc)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = remote-db)
    )
  )
Create the database link under the user that will run the expdp or impdp. If the user cannot find the database link, the following error will be shown:
ORA-39001: invalid argument value
ORA-39200: Link name "remoteloc" is invalid.
ORA-02019: connection description for remote database not found
SQL> create database link remoteloc connect to scott identified by tiger using 'REMOTELOC';
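The link can be verified with a quick test query before running Data Pump:
SQL> select * from dual@remoteloc;

D
-
X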
2. Specify the database link in the impdp or expdp command.
impdp
The import runs on the local database server and imports the remote database's scott schema directly into the local database. No dump file is generated in this case.
impdp scott/tiger directory=dumpdir logfile=impi.log network_link=remoteloc schemas=scott

Import: Release 11.2.0.1.0 - Production on Wed Jan 5 15:53:33 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_IMPORT_SCHEMA_01":  scott/******** directory=dumpdir logfile=impi.log network_link=remoteloc schemas=scott
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"SCOTT" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported "SCOTT"."DEPT"                                   4 rows
. . imported "SCOTT"."EMP"                                   14 rows
. . imported "SCOTT"."SALGRADE"                               5 rows
. . imported "SCOTT"."BONUS"                                  0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SCOTT"."SYS_IMPORT_SCHEMA_01" completed with 1 error(s) at 15:54:19
In this case the "SCOTT"."SYS_IMPORT_SCHEMA_01" master table is created in the importing schema on the local DB, not in the remote schema.

expdp
This will export the remote database's scott schema into the local database server's dump file directory.
expdp scott/tiger directory=dumpdir logfile=impi.log network_link=remoteloc schemas=scott dumpfile=newscot.dmp

Export: Release 11.2.0.1.0 - Production on Wed Jan 5 15:58:30 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01":  scott/******** directory=dumpdir logfile=impi.log network_link=remoteloc schemas=scott
dumpfile=newscot.dmp
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."DEPT"                              5.937 KB       4 rows
. . exported "SCOTT"."EMP"                               8.570 KB      14 rows
. . exported "SCOTT"."SALGRADE"                          5.867 KB       5 rows
. . exported "SCOTT"."BONUS"                                 0 KB       0 rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
/home/oracle/newscot.dmp
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 15:59:08

ASM to ASM file transfer

The DBMS_FILE_TRANSFER package, which was introduced in 10g, can be used to transfer files from one ASM disk group to another without an intermediary local file.
There are two methods for this: get_file and put_file. get_file copies a file from a remote location to the local server's ASM, and put_file copies a local file to a remote ASM location.

GET_FILE

1. Create a file in ASM. In this case a dump file was created on the ASM disk of what will act as the remote server. The file created here will be requested by the local server, demonstrating the get_file operation. For example, an ASM directory was created on the remote server with
ASMCMD> cd FLASH/RAC11G1
ASMCMD> mkdir REMOTEASM
and a directory object created with
SQL> create directory remoteasmdir as '+FLASH/RAC11G1/REMOTEASM';
Directory created.
Grant the user on the remote node that will be used to create the database link (step 4) read and write access on this directory object
SQL> grant read,write on directory remoteasmdir to scott;
2. On the local database server connect to the ASM instance using asmcmd and create a destination directory for the file.
. oraenv
ORACLE_SID = [rac11g21] ? +ASM1
The Oracle base for ORACLE_HOME=/opt/app/11.2.0/grid is /opt/app/oracle
$ asmcmd
ASMCMD> cd DATA/RAC11G2

ASMCMD> mkdir DESTASM
It's better to create the directories in uppercase as the procedure converts directory object parameters to uppercase unless they are surrounded by double quotation marks.

3. Create the directory object and grant the user (on the local node) read and write privileges on that directory.
SQL> create directory destasmdir as '+DATA/RAC11G2/DESTASM';
Directory created.

SQL> grant read,write on directory destasmdir to scott;
Grant succeeded.
4. Create a TNS entry in the local database tnsnames.ora file for the remote database and create a database link using the TNS entry. The user used in the database link must have read/write permission on the remote server's directory (created in step 1)
REMOTELOC =
 (DESCRIPTION =
   (ADDRESS = (PROTOCOL = TCP)(HOST = remote-loc)(PORT = 1521))
   (CONNECT_DATA =
     (SERVER = DEDICATED)
     (SID = remote-db)
   )
 ) 

SQL> create database link remoteloc connect to scott identified by tiger using 'REMOTELOC';
5. Execute the get_file procedure.
SQL> exec dbms_file_transfer.get_file('REMOTEASMDIR','SCOTT.DMP','REMOTELOC','DESTASMDIR','destasm.dmp');
Here REMOTEASMDIR is the remote directory object where the source file resides, SCOTT.DMP is the source file name, and REMOTELOC is the name of the database link created above.
DESTASMDIR is the directory object on the local server where the file will be stored once the transfer is completed. The last parameter is the destination file name, which can be different from the source file name.

Only an alias is created at the location specified in the procedure call. The real file resides in a separate location with a system-generated file name.
ASMCMD> ls -l DESTASM
Type Redund Striped Time Sys Name
N destasm.dmp => +DATA/RAC11G2/DUMPSET/FILE_TRANSFER_0_0.276.739638379
To monitor the progress of a long file copy, query the V$SESSION_LONGOPS dynamic performance view. The size of the copied file must be a multiple of 512 bytes and the copied file must be less than or equal to two terabytes.
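A generic monitoring query along these lines works while a transfer is running (exact OPNAME values vary by operation):
SQL> select opname, sofar, totalwork, units from v$session_longops where sofar < totalwork;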

PUT_FILE

put_file allows copying a file from the local server to a remote server. The file that was copied using get_file will be copied back to the remote server under a different name. This procedure uses the same database link and directory objects.
1. In the put_file scenario, DESTASMDIR is the local directory object where the file resides,
destasm.dmp is the name of the file that will be put onto the remote server,
REMOTEASMDIR is the directory object name on the remote server where the file will be kept once copied,
newasm.dmp is the name of the file at the remote location, and
REMOTELOC is the database link to the remote database server.
exec dbms_file_transfer.put_file('DESTASMDIR','destasm.dmp','REMOTEASMDIR','newasm.dmp','REMOTELOC');