create RSA and DSA keys with no passphrase
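A minimal sketch, assuming the keys for the current OS user go to the default locations under ~/.ssh and an empty passphrase is given with -N "":
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-keygen -t dsa -N "" -f ~/.ssh/id_dsa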
Tuesday, June 24, 2008
Internal Error Code LibraryCacheNotEmptyOnClose
Found in
Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 10.2.0.4
in any platform.
Metalink note
Note:466425.1
Note:4483084.8
Note:365103.1
Signs
ORA-600 [LibraryCacheNotEmptyOnClose], [], [], [], [], [], [], []
Bug
4483084
Fixed in
11.1.0.6 (Base Release)
Workaround
Ignore the error as it just indicates that there are some items in the library cache when closing down the instance. The error itself occurs AFTER the database close and dismount stages so only affects the instance shutdown itself. Datafiles have been closed cleanly.
It is intended that this will be fixed in 10.2.0.5 but this has not yet been confirmed.
Labels:
4483084,
internal error,
LibraryCacheNotEmptyOnClose,
oracle
Sunday, June 22, 2008
Converting from Standard to Enterprise Edition
1. Install Enterprise Edition in a separate Oracle home
2. Create a pfile of the existing database and move it to the Enterprise home
3. Move the password file to the Enterprise home or create a new password file
4. Back up the database.
5. Switch paths, SID and other Oracle home related properties to the Enterprise home
6. Start up the database
7. Run the catalog, catproc and utlrp scripts (see the sketch below)
8. De-install the Standard Edition software
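A minimal sketch of steps 6 and 7 (the Oracle home path and SID are placeholders; catalog.sql, catproc.sql and utlrp.sql live under $ORACLE_HOME/rdbms/admin):
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/ee
export ORACLE_SID=orcl
sqlplus / as sysdba
SQL> startup
SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql
SQL> @?/rdbms/admin/utlrp.sql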
Database versions must be the same (i.e. if Standard is on 10.2.0.3, Enterprise should also be on 10.2.0.3).
If the versions are different, replace steps 6 and 7 with:
6. Start up the database in upgrade/migrate mode.
7. Run the catupgrd.sql script of the current patch set level.
more on metalink Note:117048.1
The only way to properly convert from an Enterprise Edition back to a Standard Edition is through an Export/Import operation. This way you get rid of all Enterprise Edition specific data dictionary objects, as the SYS schema objects are not exported. It is preferable to use the Standard Edition EXP utility to export the data.
After the Import in the Standard Edition database, you only need to drop all user schemas related to Enterprise Edition features, such as the MDSYS account (used with Oracle Spatial).
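A rough sketch of the Export/Import route (connect strings and file names are placeholders, and full=y is only one possible mode):
exp system/password@ee_db full=y file=full.dmp log=exp.log
imp system/password@se_db full=y file=full.dmp log=imp.log ignore=y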
more on metalink Note:139642.1
Wednesday, June 18, 2008
Script for creating bonded interfaces
Usage:
bond.sh interface1 interface2 bondname IP broadcastIP netmask network mode link_monitoring_frequency(miimon)
./bond.sh eth5 eth6 bond0 192.168.0.100 192.168.0.255 255.255.255.0 192.168.0.0 6 100
Copy the text below to create the bond.sh script
#! /bin/sh
if [ $# -ne 9 ]; then
printf "Usage:\nbond.sh interface1 interface2 bondname IP boradcastIP netmask network mode link_monitoring_frequency(miimon)\n"
printf "eg: bond.sh eth5 eth6 bond0 192.168.0.100 192.168.0.255 255.255.255.0 192.168.0.0 6 100\n"
exit 0
fi
printf "\t\t*************** Bonded interface script ***************\n"
printf "\t\t*************** Author : Asanga Pradeep ***************\n"
if [ -f /etc/sysconfig/network-scripts/ifcfg-$1 ]; then
if [ -f /etc/sysconfig/network-scripts/ifcfg-$2 ]; then
mv /etc/sysconfig/network-scripts/ifcfg-$1 /root/ifcfg-$1
mv /etc/sysconfig/network-scripts/ifcfg-$2 /root/ifcfg-$2
echo "Original files moved to /root"
else
echo "No "$2" found"
exit 0
fi
else
echo "No "$1" found"
exit 0
fi
#creating bond ---------------------------------------------------------------
echo "creating bond " $3
touch /etc/sysconfig/network-scripts/ifcfg-$3
echo "DEVICE="$3 >> /etc/sysconfig/network-scripts/ifcfg-$3
echo "BOOTPROTO=static" >> /etc/sysconfig/network-scripts/ifcfg-$3
echo "BROADCAST="$5 >> /etc/sysconfig/network-scripts/ifcfg-$3
echo "IPADDR="$4 >> /etc/sysconfig/network-scripts/ifcfg-$3
echo "IPV6ADDR=" >> /etc/sysconfig/network-scripts/ifcfg-$3
echo "IPV6PREFIX=" >> /etc/sysconfig/network-scripts/ifcfg-$3
echo "IPV6_AUTOCONF=yes" >> /etc/sysconfig/network-scripts/ifcfg-$3
echo "NETMASK="$6 >> /etc/sysconfig/network-scripts/ifcfg-$3
echo "NETWORK="$7 >> /etc/sysconfig/network-scripts/ifcfg-$3
echo "ONBOOT=yes" >> /etc/sysconfig/network-scripts/ifcfg-$3
#creating first eth -------------------------------------------------------------------
echo "creating "$1
touch /etc/sysconfig/network-scripts/ifcfg-$1
echo "DEVICE="$1 >> /etc/sysconfig/network-scripts/ifcfg-$1
echo "BOOTPROTO=none" >> /etc/sysconfig/network-scripts/ifcfg-$1
echo "ONBOOT=yes" >> /etc/sysconfig/network-scripts/ifcfg-$1
echo "USERCTL=no" >> /etc/sysconfig/network-scripts/ifcfg-$1
echo "MASTER=$3" >> /etc/sysconfig/network-scripts/ifcfg-$1
echo "SLAVE=yes" >> /etc/sysconfig/network-scripts/ifcfg-$1
echo "MTU=9000" >> /etc/sysconfig/network-scripts/ifcfg-$1
#creating second eth -------------------------------------------------------------------
echo "creating "$2
touch /etc/sysconfig/network-scripts/ifcfg-$2
echo "DEVICE="$2 >> /etc/sysconfig/network-scripts/ifcfg-$2
echo "BOOTPROTO=none" >> /etc/sysconfig/network-scripts/ifcfg-$2
echo "ONBOOT=yes" >> /etc/sysconfig/network-scripts/ifcfg-$2
echo "USERCTL=no" >> /etc/sysconfig/network-scripts/ifcfg-$2
echo "MASTER=$3" >> /etc/sysconfig/network-scripts/ifcfg-$2
echo "SLAVE=yes" >> /etc/sysconfig/network-scripts/ifcfg-$2
echo "MTU=9000" >> /etc/sysconfig/network-scripts/ifcfg-$2
#editing modprobe.conf
if [ -f /etc/modprobe.conf ]; then
cp /etc/modprobe.conf /etc/modprobe.conf.bak
echo "Original file backup as /etc/modprobe.conf.bak"
echo "alias "$3" bonding" >> /etc/modprobe.conf
echo "options "$3" mode="$8" miimon="$9 >> /etc/modprobe.conf
/sbin/modprobe bonding
else
echo "NO /etc/modprobe.conf found"
fi
Services to disable on Linux
on both RHEL 4 and RHEL 5
chkconfig sendmail off
chkconfig rhnsd off
chkconfig readahead_early off
chkconfig readahead_later off
chkconfig yum-updatesd off
chkconfig cups off
chkconfig iptables off
chkconfig bluetooth off
chkconfig isdn off
chkconfig kudzu off
chkconfig netfs off
chkconfig rpcgssd off
chkconfig rpcidmapd off
chkconfig auditd off
chkconfig avahi-daemon off
chkconfig firstboot off
chkconfig hidd off
chkconfig portmap off
chkconfig ip6tables off
chkconfig xend off
chkconfig xendomains off
chkconfig autofs off
chkconfig setroubleshoot off
chkconfig pcscd off
chkconfig nfslock off
chkconfig arptables_jf off
chkconfig cups-config-daemon off
chkconfig rawdevices off
chkconfig readahead off
chkconfig smartd off
chkconfig xinetd off
chkconfig xfs off
chkconfig pcmcia off
Wednesday, June 11, 2008
ORADEBUG
oradebug help
HELP [command] Describe one or all commands
SETMYPID Debug current process
SETOSPID ospid Set OS pid of process to debug
SETORAPID orapid ['force'] Set Oracle pid of process to debug
SHORT_STACK Dump abridged OS stack
DUMP dump_name lvl [addr] Invoke named dump
DUMPSGA [bytes] Dump fixed SGA
DUMPLIST Print a list of available dumps
EVENT text Set trace event in process
SESSION_EVENT text Set trace event in session
DUMPVAR p|s|uga name [level] Print/dump a fixed PGA/SGA/UGA variable
DUMPTYPE address type count Print/dump an address with type info
SETVAR p|s|uga name value Modify a fixed PGA/SGA/UGA variable
PEEK addr len [level] Print/Dump memory
POKE addr len value Modify memory
WAKEUP orapid Wake up Oracle process
SUSPEND Suspend execution
RESUME Resume execution
FLUSH Flush pending writes to trace file
CLOSE_TRACE Close trace file
TRACEFILE_NAME Get name of trace file
LKDEBUG Invoke global enqueue service debugger
SGATOFILE SGA_dump_dir Dump SGA to file
DMPCOWSGA SGA_dump_dir Dump & map SGA as COW
MAPCOWSGA SGA_dump_dir Map SGA as COW
HANGANALYZE [level] [syslevel] Analyze system hang
FFBEGIN Flash Freeze the Instance
FFDEREGISTER FF deregister instance from cluster
FFTERMINST Call exit and terminate instance
FFRESUMEINST Resume the flash frozen instance
FFSTATUS Flash freeze status of instance
UNLIMIT Unlimit the size of the trace file
PROCSTAT Dump process statistics
Level 1 - Basic trace information.
Level 8 - Level 1 plus wait event information.
Level 12 - Level 1 plus bind variable values and wait event information.
Tracing a session
SQL> oradebug setmypid
SQL> alter session set tracefile_identifier='trace identifier';
SQL> oradebug tracefile_name (prints the trace file name; the default location is user_dump_dest)
SQL> oradebug unlimit
SQL> oradebug event 10046 trace name context forever, level 12
SQL> sql statements to trace ......
SQL> sql statements to trace ......
SQL> sql statements to trace ......
SQL> oradebug event 10046 trace name context off
SQL> oradebug tracefile_name
Monday, June 9, 2008
Transporting Database
1. Check if the database can be transported to the destination platform. Put the DB in read-only mode and use DBMS_TDB.CHECK_DB for this. It can be called without arguments to see if any condition at the source database prevents transport, or with one or both of the following arguments:
target_platform_name : The name of the destination platform, as it appears in V$DB_TRANSPORTABLE_PLATFORM.
skip_option : Specifies which, if any, parts of the database to skip when checking whether the database can be transported.
* SKIP_NONE (or 0), which checks all tablespaces
* SKIP_OFFLINE (or 2), which skips checking datafiles in offline tablespaces
* SKIP_READONLY (or 3), which skips checking datafiles in read-only tablespaces
If SERVEROUTPUT is ON, and DBMS_TDB.CHECK_DB returns FALSE, then the output includes the reason why the database cannot be transported.
If you call DBMS_TDB.CHECK_DB and no messages are displayed indicating conditions preventing transport before the PL/SQL procedure successfully completed message, then your database is ready for transport.
set serveroutput on
declare
db_ready boolean;
begin
db_ready := dbms_tdb.check_db('Solaris Operating System (AMD64)',dbms_tdb.skip_readonly);
end;
/
2. Use DBMS_TDB.CHECK_EXTERNAL to identify any external tables, directories or BFILEs. RMAN cannot automate the transport of such files. DBMS_TDB.CHECK_EXTERNAL takes no parameters. With SERVEROUTPUT set to ON, the output of DBMS_TDB.CHECK_EXTERNAL lists the external tables, directories and BFILEs of your database.
set serveroutput on
declare
external boolean;
begin
external := dbms_tdb.check_external;
end;
/
3. To convert the database at the source
CONVERT DATABASE NEW DATABASE 'newdb' transport script '/home/oracle/dump/transport.sql' to platform 'Solaris Operating System (AMD64)' db_file_name_convert '/oracle/oradata/test','/home/oracle/dump';
This will generate datafiles at /home/oracle/dump ready for transport, a PFile at $ORACLE_HOME/dbs and a transport.sql used to create the DB at the target.
4. Copy the datafiles, PFile and transport sql file to the target host.
5. At the target edit the pfile to reflect the new directory structure. control_files, *dump_dest, and recovery area are some of the things to consider. If the pfile is renamed from the original system generated name edit the transport sql to reflect the changes.
6. Once all the changes are done, set the Oracle SID to the new DB and run the transport SQL script
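For example (the SID and script path follow the CONVERT command above and are placeholders):
export ORACLE_SID=newdb
sqlplus / as sysdba
SQL> @/home/oracle/dump/transport.sql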
7. At the end of the script the DB is created and opened. Towards the end of the script utlirp.sql and utlrp.sql are run. If utlrp.sql reports errors, manually open the DB in upgrade mode and run utlirp and utlrp to compile the packages.
8. Create a password file for the database.
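For example, with orapwd (the file name follows the orapw<SID> convention and the password is a placeholder):
orapwd file=$ORACLE_HOME/dbs/orapwnewdb password=change_me entries=5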
9. To generate a new SID for the database use the DBNEWID utility.
nid target=sys/newdb
10. After the new ID is generated the database is shut down. Open it with RESETLOGS.
alter database open resetlogs;
11. Create new directory objects in the target
select directory_name, directory_path from dba_directories;
create or replace directory dir_name as 'new path';
(repeat the create statement for each directory listed, substituting its name and new path)
Sunday, June 8, 2008
Transporting Tablespaces
1. Find the endian formats of the source and target DBs.
To list all OS and their endian formats
SELECT * FROM V$TRANSPORTABLE_PLATFORM;
To find the endian format of the target and source DB
SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
If the endian formats are different then datafiles need to be converted to target DB version before transporting.
2. Check that the tablespaces to be transported are self-contained, taking referential integrity constraints into consideration (indicated by TRUE as the second argument).
exec DBMS_TTS.TRANSPORT_SET_CHECK('dumps',TRUE);
Any violations are listed in the TRANSPORT_SET_VIOLATIONS view.
SELECT * FROM TRANSPORT_SET_VIOLATIONS;
If the set of tablespaces is self-contained, this view is empty.
3. Generate a Transportable Tablespace Set with the following steps.
1. put the source tablespaces in read only mode.
alter tablespace dumps read only;
2. Invoke the Data Pump export utility on the host system and specify which tablespaces are in the transportable set.
expdp uname/pw directory=dumpdir dumpfile=tbs.dmp logfile=tbs.log TRANSPORT_TABLESPACES =dumps
3. Convert the tablespaces for transport using RMAN if the endian formats are different. If compressed backupset is enabled, disable it before converting.
convert tablespace dumps to platform 'Linux 64-bit for AMD' format '/home/oracle/dump%U';
4. Transport the tablespace set which includes the datafile and the expdp dumpfile to the target DB host.
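For example, with scp (the target host and paths are placeholders; the file names follow the commands above):
scp dump.dbf tbs.dmp oracle@targethost:/oracle/oradata/test/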
5. Import the tablespace metadata using impdp
impdp system/testdb directory=dumpdir dumpfile=tbs.dmp logfile=tbs.log TRANSPORT_DATAFILES=/oracle/oradata/test/dump.dbf
6. After the import the tablespace will be in read-only mode.
select tablespace_name,status from dba_tablespaces;
DUMPS READ ONLY
7. Make the tablespace read write if needed.
alter tablespace dumps read write;
Instead of CONVERT TABLESPACE, which does the conversion at the source DB, CONVERT DATAFILE can be used to do the conversion at the target DB.
All the steps are similar to the above except that step 3.3 (convert tablespace ...) is omitted.
Instead, after the export, copy the datafiles (which are in read-only mode) to the target DB. Before doing the import use CONVERT DATAFILE:
convert datafile '/oracle/oradata/test/dump.dbf' from platform 'Solaris Operating System (AMD64)';
Restrictions on CONVERT TABLESPACE and CONVERT DATAFILE
* Both source and destination databases must be running with the COMPATIBLE initialization parameter set to 10.0 or higher.
* Not all combinations of source and destination platforms are supported. To determine whether your source and destination platforms are supported, query V$TRANSPORTABLE_PLATFORM. If both the source and destination platforms are listed in this view, then CONVERT can be used to prepare datafiles from one platform for use on the other.
* A tablespace must be made read-write at least once in release 10g before it can be transported to another platform using CONVERT. Hence, any read-only tablespaces (or previously transported tablespaces) from a previous release must first be made read-write before they can be transported to another platform.
* RMAN does not process user datatypes that require endian conversions.
* Prior to release 10g, CLOBs were created with a variable-width character set and stored in an endian-dependent format. The CONVERT command does not perform conversions on these CLOBs. Instead, RMAN captures the endian format of each LOB column and propagates it to the target database. Subsequent reads of this data by the SQL layer interpret the data correctly based upon either endian format, and write the data out in an endian-independent format if the tablespace is writeable.
CLOBs created in Oracle Database Release 10g are stored in character set AL16UTF16, which is platform-independent.
How Redo Send Works
ARCn Archival Processing
Archiving happens when a log switch occurs on the primary database:
* On the primary database, after one archival process (ARC0) has successfully archived the local online redo log to the local destination (LOG_ARCHIVE_DEST_1), another archival process (ARC1) process transmits redo from the local archived redo log files (instead of the online redo log files) to the remote standby destination (LOG_ARCHIVE_DEST_2).
* On the remote destination, the remote file server process (RFS) will, in turn, write the redo data to an archived redo log file from a standby redo log file. Log apply services use Redo Apply (managed recovery process - MRP ) or SQL Apply ( logical standby process - LSP uses parallel execution (Pnnn) processes) to apply the redo to the standby database.
Because the online redo log files are archived locally first, the LGWR process reuses the online redo log files much earlier than would be possible if the ARCn processes archived to the standby database concurrently with the local destination.
Using the Log Writer Process (LGWR) to Archive Redo Data
Using the LGWR process differs from ARCn processing because instead of waiting for the online redo log to switch at the primary database and then writing the entire archived redo log at the remote destination all at once, the LGWR process selects a standby redo log file at the standby site that reflects the log sequence number (and size) of the current online redo log file of the primary database. Then, as redo is generated at the primary database, it is also transmitted to the remote destination. The transmission to the remote destination will either be synchronous or asynchronous. Synchronous LGWR processing is required for the maximum protection and maximum availability modes of data protection in Data Guard configurations.
LGWR SYNC Archival Processing
In LGWR SYNC mode, the LGWR process synchronously transmits redo data to the standby system at the same time it writes redo data to the online redo log file on the primary database:
* On the primary database, the LGWR process submits the redo data to one or more network server (LGWR Network Server process LNSn) processes, which then initiate the network I/O in parallel to multiple remote destinations. Transactions are not committed on the primary database until the redo data necessary to recover the transaction is received by all LGWR SYNC destinations.
* On the standby system, the remote file server (RFS) receives redo data over the network from the LGWR process and writes the redo data to the standby redo log files.
A log switch on the primary database triggers a log switch on the standby database, causing ARCn processes on the standby database to archive the standby redo log files to archived redo log files on the standby database. Then, Redo Apply (MRP process) or SQL Apply (LSP process) applies the redo data to the standby database. If real-time apply is enabled, Data Guard recovers redo data directly from the current standby redo log file as it is being filled up by the RFS process.
LGWR ASYNC Archival Processing
When the LGWR and ASYNC attributes are specified, the log writer process writes to the local online redo log file, while the network server (LNSn) processes (one for each destination) asynchronously transmit the redo to remote destinations. The LGWR process continues processing the next request without waiting for the LNS network I/O to complete.
If redo transport services transmit redo data to multiple remote destinations, the LNSn processes (one for each destination) initiate the network I/O to all of the destinations in parallel.
When an online redo log file fills up, a log switch occurs and an archiver process archives the log file locally, as usual.
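As an illustration of the two transport modes, LOG_ARCHIVE_DEST_2 on the primary can be set with the LGWR SYNC or LGWR ASYNC attributes (the service name and DB_UNIQUE_NAME 'standby' are placeholders):
alter system set log_archive_dest_2='SERVICE=standby LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby' scope=both;
alter system set log_archive_dest_2='SERVICE=standby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby' scope=both;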
Wednesday, June 4, 2008
aq_tm_processes Is Set To 0
Signs
After upgrading to 10.2.0.3 using DBUA the message "WARNING: AQ_TM_PROCESSES is set to 0" begins appearing in the alert log file.
DBUA has set the aq_tm_processes initialization parameter explicitly to zero.
Fix
In 10.2, it is recommended to leave the parameter aq_tm_processes unset and let the database autotune the parameter.
Setting the aq_tm_processes parameter explicitly to zero disables the queue monitor (QMN) processes and can disrupt the operation of the database, because several system queue tables are used by standard database features.
You cannot determine if aq_tm_processes is set explicitly to zero just by querying v$parameter.
A check to see if the parameter is explicitly zero is:
connect / as sysdba
set serveroutput on
declare
mycheck number;
begin
select 1 into mycheck from v$parameter where name = 'aq_tm_processes' and value = '0'
and (ismodified <> 'FALSE' OR isdefault='FALSE');
if mycheck = 1 then
dbms_output.put_line('The parameter ''aq_tm_processes'' is explicitly set to 0!');
end if;
exception when no_data_found then
dbms_output.put_line('The parameter ''aq_tm_processes'' is not explicitly set to 0.');
end;
/
If it is set to zero, it is recommended to unset the parameter.
alter system reset aq_tm_processes scope=spfile sid='*';
However, this requires bouncing the database. If that is not possible, set the parameter to a non-zero value instead:
alter system set aq_tm_processes = 1;
Labels:
aq_tm_processes,
bug,
oracle
Tuesday, June 3, 2008
Moving Datafiles to new location
1. Make datafile backups as copy
run {
SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/tmp/%FU';
backup as copy datafile 1 format '/oracle/oradata/test/system01.dbf';
backup as copy datafile 2 format '/oracle/oradata/test/undotbs01.dbf';
backup as copy datafile 3 format '/oracle/oradata/test/sysaux01.dbf';
backup as copy datafile 4 format '/oracle/oradata/test/users01.dbf';
backup as copy datafile 5 format '/oracle/oradata/test/gravelsox.dbf';
}
2. shutdown immediate
3. startup mount
4. switch data file to copy
switch datafile 1 to copy;
switch datafile 2 to copy;
switch datafile 3 to copy;
switch datafile 4 to copy;
switch datafile 5 to copy;
5. recover the datafiles
recover datafile 1,2,3,4,5;
6. open the database
7. Add a new tempfile in the new datafile location and drop the old tempfile
alter tablespace temp add tempfile.....
alter tablespace temp drop tempfile
8. delete old datafile copies
delete datafilecopy all;
Following sqls will be helpful in generating the above commands
select 'backup as copy datafile '||file#||' format '''||replace(name,'old_location','new_location')||''';' from v$datafile;
select 'switch datafile '||file#||' to copy;' from v$datafile;
Above method is useful even when datafiles reside in ASM. There's another easier way if operating system level file movement is possible.
1. Shutdown immediate;
2. Copy the datafiles to the new location using OS utilities (such as cp on Linux)
3. startup mount;
4. switch the datafile locations
alter database rename file '/oradata/livestg/system01.dbf' to '/u1/oradata/livestg/system01.dbf';
alter database rename file '/oradata/livestg/undotbs01.dbf' to '/u1/oradata/livestg/undotbs01.dbf';
alter database rename file '/oradata/livestg/sysaux01.dbf' to '/u1/oradata/livestg/sysaux01.dbf';
alter database rename file '/oradata/livestg/users01.dbf' to '/u1/oradata/livestg/users01.dbf';
5. alter database open;
Following sqls will be useful in generating the above commands
select 'alter database rename file '''||name||''' to '''||replace(name,'oracle','u1')||''';' as sql from v$datafile;
select 'alter database rename file '''||name||''' to '''||replace(name,'oracle','u1')||''';' as sql from v$tempfile;
Remote Debugging with SQL Developer
1. Create a remote debugging session on SQL Developer.
2. Give a port and an IP of the machine, or localhost if the debugging client is also on the same machine.
3. Create breakpoints in the PL/SQL function, trigger, package, etc.
4. On the client session run
exec DBMS_DEBUG_JDWP.CONNECT_TCP( '192.168.0.124', 4000 );
Remember to change the port and IP accordingly. The user needs the DEBUG CONNECT SESSION privilege, so grant it to the user.
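For example (scott is a placeholder for the application user):
grant debug connect session to scott;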
5. Execute the function,package or trigger event from the client session
6. Use SQL Developer to trace the execution path, monitor values, change values, etc.
Labels:
oracle.plsql,
remote debugging,
sql developer
Sunday, June 1, 2008
Internal Error Code Kkslgbv0
Found in
Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 10.2.0.3
in any platform.
Metalink note
Note:382567.1
Note:416001.1
Note:5155885.8
Signs
The alert log shows the error:
ORA-00600: internal error code, arguments: [kkslgbv0], [], [], [], [], [], [], []
Bug
5155885
Fixed in
10.2.0.4 (Server Patch Set)
11.1.0.6 (Base Release)
Apply Patch
Patch 5155885
Workaround
Use CURSOR_SHARING=EXACT or
set "_optim_peek_user_binds"=false so that bind values are not peeked.
OALL8 is in an inconsistent state With JDBC Thin Driver
Found in
JDBC - Version: 10.1.0.0 to 10.2.0.3
in any platform.
Metalink Notes
Note: 549409.1 "OALL8 is in an inconsistent state" With JDBC Thin Driver and selecting non-ASCII Characters
Note: 944692.1 Master Note: Understanding the "OALL8 is in an Inconsistent State" Exception
Signs
"OALL8 is in an inconsistent state" is thrown when using the 10.2.0.3 JDBC thin driver to select non ascii characters from the database.
Fixed in
11.1.0.6.0 version of Oracle JDBC driver.
Apply Patch
Patch 4390875
Internal Error Code Kcbz_check_objd_typ_3
Found in
Oracle Server - Enterprise Edition - Version: 10.2.0.2 to 10.2.0.3
in any platform.
The cause of this problem has been identified and verified in an Unpublished Bug 4430244.
Metalink Note
Note:430223.1
Signs
Segment Advisor is being used.
ORA-00600: internal error code, arguments: [kcbz_check_objd_typ_3], [4], [0], [15], [], [], [], []
Fixed in
10.2.0.4 (Server Patch Set)
11.1.0.6 (Base Release)
Apply patch
Patch 4430244
Labels:
4430244,
bug,
internal error,
Kcbz_check_objd_typ_3,
oracle
Shared server tied to session WAIT(RECEIVE)
Bug 5447395 - Shared server tied to session when BFILE used
Metalink Note
Note:5447395.8
Fixed in
10.2.0.4 (Server Patch Set)
11.1.0.6 (Base Release)
Description
If a shared server session (MTS) ever opens a BFILE and also closes it, then the shared server gets tied to that particular session and remains in the receiving state "WAIT(RECEIVE)".