Thursday, December 17, 2009

upgrade 11g R1 to 11g R2 without DBUA

1. Update the timezone file first, for ease; refer to the Updating Timezone post below

2. Then follow the same steps as in a 10g to 11g upgrade

3. A few modifications

instead of utlu111i.sql use utlu112i.sql, and
instead of utlu111s.sql use utlu112s.sql



Find out about the invalid objects with
SELECT count(*) FROM dba_invalid_objects;
SELECT distinct object_name FROM dba_invalid_objects;
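Invalid objects left over from the upgrade can usually be recompiled with the stock utlrp.sql script before re-running the checks above; a minimal sketch, re-checking against dba_objects:

SQL> @?/rdbms/admin/utlrp.sql
SQL> select count(*) from dba_objects where status = 'INVALID';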

Wednesday, December 2, 2009

Result Cache for Ref Cursor Returning PL/SQL Code

According to the Oracle documentation, if a PL/SQL stored procedure returns a ref cursor then the result cache is not supported. But there's a way to circumvent this with somewhat beneficial results.

Below is a package written for the HR sample schema

create or replace package emps_pkg as

type detail is ref cursor;

function getempdetails (options in INTEGER, departname in VARCHAR2, jobtitle in VARCHAR2) return detail;

end;
/


create or replace package body emps_pkg as

function getempdetails(options in INTEGER,departname in VARCHAR2, jobtitle in VARCHAR2) return detail is

employees detail;
main_query VARCHAR2(2000) := 'select /*+ result_cache */ e.* from employees e,departments d,jobs j
where e.department_id = d.department_id and j.job_id = e.job_id ';
begin
if options = 1 then
main_query := main_query ||'and d.department_name=:1';
open employees for main_query using departname;

return employees;

else
main_query := main_query ||'and j.job_title=:1';

open employees for main_query using jobtitle;

return employees;
end if;
end;
end;
/


The package body contains some dynamic SQL constructed with the result_cache hint (there's no real logic behind this package; it is only to demonstrate the use of the result cache).

Execute the package code and observe the plan


select emps_pkg.getempdetails(2,'Purchasing','Purchasing Manager') from dual;
select * from table(dbms_xplan.display_cursor(null,null,'ALL'));

Plan hash value: 980169617

------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 4 (100)| |
| 1 | RESULT CACHE | a68hth9dfygb5ddg0hr8hnty9b | | | | |
| 2 | NESTED LOOPS | | | | | |
| 3 | NESTED LOOPS | | 6 | 570 | 4 (0)| 00:00:01 |
|* 4 | TABLE ACCESS FULL | JOBS | 1 | 27 | 3 (0)| 00:00:01 |
|* 5 | INDEX RANGE SCAN | EMP_JOB_IX | 6 | | 0 (0)| |
|* 6 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 6 | 408 | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------


Recompile the package body without the result_cache hint and observe the plan; it would be

Plan hash value: 980169617

-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 4 (100)| |
| 1 | NESTED LOOPS | | | | | |
| 2 | NESTED LOOPS | | 6 | 570 | 4 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL | JOBS | 1 | 27 | 3 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | EMP_JOB_IX | 6 | | 0 (0)| |
|* 5 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 6 | 408 | 1 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------


The benefit comes from the fact that the SQL query inside the stored procedure could use the result cache. But the overhead of executing the PL/SQL code is still there, which I believe won't be present in "proper" result cached stored procedures.

Any changes to the dependent tables will invalidate the cache, just as expected.
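To see the cache actually being used, the ref cursor can be fetched from SQL*Plus and the cache inspected through v$result_cache_objects; a quick sketch, assuming the HR schema and the package above:

variable rc refcursor
exec :rc := emps_pkg.getempdetails(2, null, 'Purchasing Manager');
print rc

select type, status, name from v$result_cache_objects where type = 'Result';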

Oracle Parallel Query Differences (not enough processes)

10g

If users initiate too many concurrent operations, Oracle might not have enough query server processes. In this case, Oracle executes the operations sequentially or displays an error if PARALLEL_MIN_PERCENT is set to a value other than the default value of 0 (zero).
This condition can be verified through the GV$SYSSTAT view by comparing the statistics for parallel operations not downgraded and parallel operations downgraded to serial. For example:
SELECT * FROM GV$SYSSTAT WHERE name LIKE 'Parallel operation%';


11g R1

Oracle Database can process a parallel operation with fewer than the requested number of processes.
If all parallel execution servers in the pool are occupied and the maximum number of parallel execution servers has been started, the parallel execution coordinator switches to serial processing.

11g R2

When the parameter PARALLEL_DEGREE_POLICY is set to AUTO, Oracle will queue SQL statements that require parallel execution, if the necessary parallel server processes are not available. Once the necessary resources become available, the SQL statement will be dequeued and allowed to execute. The queue is a simple First In - First Out queue based on the time a statement was issued.
Statement queuing will begin once the number of parallel server processes active on the system is equal to or greater than PARALLEL_SERVERS_TARGET. By default, this parameter is set to 4 X CPU_COUNT X PARALLEL_THREADS_PER_CPU X ACTIVE_INSTANCES. This is not the maximum number of parallel server processes allowed on the system, but the number available to run parallel statements before statement queuing will be used. It is set lower than the maximum number of parallel server processes allowed on the system (PARALLEL_MAX_SERVERS) to ensure each parallel statement will get all of the parallel server resources required and to prevent overloading the system with parallel server processes. Note all serial (non-parallel) statements will execute immediately even if statement queuing has been activated.
There are two wait events to help identify if a statement has been queued. A statement waiting on the event PX QUEUING: statement queue is the first statement in the statement queue. Once the necessary resources become available for this statement, it will be dequeued and executed. All other statements in the queue will be waiting on enq: JX - SQL statement queue. Only when a statement gets to the head of the queue will the wait event switch to PX QUEUING: statement queue.
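A minimal sketch of enabling statement queuing and spotting queued sessions, using the parameter and wait events described above:

alter system set parallel_degree_policy = AUTO;

-- sessions whose statements are queued wait on the queuing events
select sid, event from v$session where event like '%statement queue%';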

Friday, November 27, 2009

11g Compression and Segment Space Reuse

An interesting behavior can be observed when rows are deleted and re-inserted into compressed and uncompressed tables.

First the uncompressed table.
create table insertdml as select * from all_objects where 1 = 2;
alter table insertdml add constraint insert_pk primary key (object_id);

Use the PL/SQL code snippet (listed at the end of this post) to find the space usage.
full_blocks 0
full_bytes 0

total_blocks 8
total_bytes 65536
unused_blocks 5
unused_bytes 40960
last_used_extent_file_id 6
last_used_extent_block_id 34416

Add rows to the table to expand it
declare

type insertrow_type is table of all_objects%rowtype index by pls_integer;
insertrow insertrow_type;
begin
select * bulk collect into insertrow from all_objects;

forall row in insertrow.first .. insertrow.last
insert into insertdml values insertrow(row);
end;
/

Check the space usage after the insert
full_blocks 1002
full_bytes 8208384

total_blocks 1152
total_bytes 9437184
unused_blocks 0
unused_bytes 0
last_used_extent_file_id 6
last_used_extent_block_id 36096
Note the last used extent block id, then delete the rows from the table and check the space usage

delete from insertdml;
commit;

full_blocks 0
full_bytes 0

total_blocks 1152
total_bytes 9437184
unused_blocks 0
unused_bytes 0
last_used_extent_file_id 6
last_used_extent_block_id 36096
No more full blocks. Insert the rows again and check the space usage

full_blocks 999
full_bytes 8183808

total_blocks 1152
total_bytes 9437184
unused_blocks 0
unused_bytes 0
last_used_extent_file_id 6
last_used_extent_block_id 36096
The total number of blocks hasn't changed and the last used extent block is still the same, which means no new extents were added to the segment and the old extents were reused.

Now the compressed table

drop table insertdml purge;
create table insertdml compress for all operations as select * from all_objects where 1 = 2;
alter table insertdml add constraint insert_pk primary key (object_id);

check the initial space usage as before
full_blocks 0
full_bytes 0

total_blocks 8
total_bytes 65536
unused_blocks 5
unused_bytes 40960
last_used_extent_file_id 6
last_used_extent_block_id 34416

insert rows to extend the table
declare

type insertrow_type is table of all_objects%rowtype index by pls_integer;
insertrow insertrow_type;
begin
select * bulk collect into insertrow from all_objects;

forall row in insertrow.first .. insertrow.last
insert into insertdml values insertrow(row);
end;
/
and check the space usage
full_blocks 401
full_bytes 3284992

total_blocks 640
total_bytes 5242880
unused_blocks 0
unused_bytes 0
last_used_extent_file_id 6
last_used_extent_block_id 35584

delete rows and check the space usage again
delete from insertdml;
commit;

full_blocks 0
full_bytes 0

total_blocks 640
total_bytes 5242880
unused_blocks 0
unused_bytes 0
last_used_extent_file_id 6
last_used_extent_block_id 35584

insert rows again and check the space usage
full_blocks 645
full_bytes 5283840

total_blocks 768
total_bytes 6291456
unused_blocks 0
unused_bytes 0
last_used_extent_file_id 6
last_used_extent_block_id 35712

The total number of blocks has changed and so has the last used extent block id, which indicates new extents are being added instead of old extents being reused.

Instead of the PL/SQL code snippet, querying the user_segments view would also reveal this fact

select segment_name,bytes from user_segments where segment_name='INSERTDML';


This behavior was not observed in Oracle 11g R2.



--after initial insert

full_blocks 326
full_bytes 2670592

total_blocks 512
total_bytes 4194304
unused_blocks 0
unused_bytes 0
last_used_extent_file_id 4
last_used_extent_block_id 135296

--delete the rows

full_blocks 0
full_bytes 0

total_blocks 512
total_bytes 4194304
unused_blocks 0
unused_bytes 0
last_used_extent_file_id 4
last_used_extent_block_id 135296


--insert again

full_blocks 327
full_bytes 2678784

total_blocks 512
total_bytes 4194304
unused_blocks 0
unused_bytes 0
last_used_extent_file_id 4
last_used_extent_block_id 135296


Last used extent block id hasn't changed.

11g Compression and Segment Size

An easy way to observe compression in action is to query the user_extents view.

select segment_name,sum(bytes)
from user_extents where segment_type='TABLE'
group by segment_name order by 2 desc;

SEGMENT_NAME SUM(BYTES)
------------------------------ ----------
AIRLINE 196608

Compress the table with
alter table airline move compress for all operations;

But the reduction is not always visible in this view, as the segment is not shrunk below the initial storage parameter.

select segment_name,sum(bytes)
from user_extents where segment_type='TABLE'
group by segment_name order by 2 desc;

SEGMENT_NAME SUM(BYTES)
------------------------------ ----------
AIRLINE 196608

It can be observed that compression is enabled

select table_name,compression,compress_for from user_tables;

TABLE_NAME COMPRESS COMPRESS_FOR
------------------------------ -------- ------------------
AIRLINE ENABLED FOR ALL OPERATIONS

Looking at the storage parameters

... STORAGE(INITIAL 196608 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL KEEP)

But the actual compression could be observed with the dbms_space package.

Before compression

full_blocks 3
full_bytes 24576

total_blocks 24
total_bytes 196608
unused_blocks 18
unused_bytes 147456
last_used_extent_file_id 6
last_used_extent_block_id 1544

After compression

full_blocks 2
full_bytes 16384

total_blocks 24
total_bytes 196608
unused_blocks 19
unused_bytes 155648
last_used_extent_file_id 6
last_used_extent_block_id 128

There's a reduction in the number of full blocks and an increase in the number of unused blocks.

The PL/SQL code is listed below

DECLARE
unformatted_blocks NUMBER;
unformatted_bytes NUMBER;
fs1_blocks NUMBER;
fs1_bytes NUMBER;
fs2_blocks NUMBER;
fs2_bytes NUMBER;
fs3_blocks NUMBER;
fs3_bytes NUMBER;
fs4_blocks NUMBER;
fs4_bytes NUMBER;
full_blocks NUMBER;
full_bytes NUMBER;

total_blocks NUMBER;
total_bytes NUMBER;
unused_blocks NUMBER;
unused_bytes NUMBER;
last_used_extent_file_id NUMBER;
last_used_extent_block_id NUMBER;
last_used_block NUMBER;

-- cursor t is select table_name from user_tables order by 1;
-- tname user_tables.table_name%type;
BEGIN
--open t;
--loop
--fetch t into tname;
--exit when t%notfound;

DBMS_SPACE.SPACE_USAGE('TX',
'AIRLINE',
'TABLE',
unformatted_blocks,
unformatted_bytes,
fs1_blocks,
fs1_bytes,
fs2_blocks,
fs2_bytes,
fs3_blocks,
fs3_bytes,
fs4_blocks,
fs4_bytes,
full_blocks,
full_bytes,
NULL);

-- dbms_output.put_line('table name '|| tname);
-- dbms_output.put_line('fs1_blocks ' || fs1_blocks );
-- dbms_output.put_line('fs1_bytes ' || fs1_bytes );
-- dbms_output.put_line('fs2_blocks '||fs2_blocks );
-- dbms_output.put_line('fs2_bytes '||fs2_bytes );
-- dbms_output.put_line('fs3_blocks '||fs3_blocks );
-- dbms_output.put_line('fs3_bytes '|| fs3_bytes );
-- dbms_output.put_line('fs4_blocks '|| fs4_blocks );
-- dbms_output.put_line('fs4_bytes '|| fs4_bytes );
dbms_output.put_line('full_blocks '|| full_blocks);
dbms_output.put_line('full_bytes '|| full_bytes );
dbms_output.put_line('');
--end loop;
--close t;

dbms_space.unused_space('TX',
'AIRLINE',
'TABLE',
total_blocks ,
total_bytes ,
unused_blocks ,
unused_bytes ,
last_used_extent_file_id ,
last_used_extent_block_id ,
last_used_block ,
NULL);

dbms_output.put_line('total_blocks '|| total_blocks);
dbms_output.put_line('total_bytes '|| total_bytes );
dbms_output.put_line('unused_blocks '|| unused_blocks);
dbms_output.put_line('unused_bytes '|| unused_bytes );
dbms_output.put_line('last_used_extent_file_id '|| last_used_extent_file_id);
dbms_output.put_line('last_used_extent_block_id '|| last_used_extent_block_id );
dbms_output.put_line('');


end;
/


Wednesday, November 11, 2009

Updating Timezone

Updating the 10.2.0.4 timezone to timezone version 11. This is especially handy if the database is later upgraded to 11g Release 2.

A couple of metalink documents to go through, 840118.1 and 412160.1, and a few KEY points

1. Oracle 9, 10 and 11.1 can be upgraded to RDBMS DSTv11 using patch 8524113; DSTv11 is included in 11gR2 as standard.

2. Download and run utltzuv11.sql (URL included in metalink note 412160.1)

Further steps are needed depending on the result of the utltzuv11.sql script.

After running utltzuv11.sql, you need to do a
select * from sys.sys_tzuv2_temptab;

If "select * from sys.sys_tzuv2_temptab;" gives no rows then there is no action to take; you can simply apply the RDBMS DSTv11 patch without acting on stored TSTZ data.

If "select * from sys.sys_tzuv2_temptab;" gives affected TSTZ data then you need to back up the TSTZ data before applying the RDBMS DSTv11 patch.


3. Patch 7695070 needs to be applied before the RDBMS DSTv11 patch can be installed.
Unlike previous DST patches, the DSTv11 patch 8524113 needs to have 2 codefixes in place (Bug 7695070 and Bug 7395472) to work. 10.2.0.5 and 11.2.0.2 (and higher) include the needed fixes, so there is no need to apply patch 7695070 before installing the RDBMS DSTv11 patch on these versions.

Once updated, the following warning

WARNING: --> Database is using a timezone file older than version 11.
.... After the release migration, it is recommended that DBMS_DST package
.... be used to upgrade the 10.2.0.4.0 database timezone version
.... to the latest version which comes with the new release.


will no longer appear when upgrading to 11gR2.

Sunday, October 11, 2009

Memory Target on 11g

If MEMORY_TARGET is set to a non-zero value:

  1. If SGA_TARGET and PGA_AGGREGATE_TARGET are set, they will be considered the minimum values for the sizes of the SGA and the PGA respectively. MEMORY_TARGET can take values from SGA_TARGET + PGA_AGGREGATE_TARGET to MEMORY_MAX_TARGET.
  2. If SGA_TARGET is set and PGA_AGGREGATE_TARGET is not set, still both parameters will be auto-tuned. PGA_AGGREGATE_TARGET will be initialized to a value of (MEMORY_TARGET-SGA_TARGET).
  3. If PGA_AGGREGATE_TARGET is set and SGA_TARGET is not set, still both parameters will be auto-tuned. SGA_TARGET will be initialized to a value of min(MEMORY_TARGET-PGA_AGGREGATE_TARGET, SGA_MAX_SIZE (if set by the user)) and will auto-tune subcomps.
  4. If neither is set, they will be auto-tuned without any minimum or default values. The policy is to give 60% to the SGA and 40% to the PGA at startup.

If MEMORY_TARGET is not set, or is explicitly set to 0 (the default value is 0 for 11g):

  1. If SGA_TARGET is set it will only auto-tune the sizes of the sub-components of the SGA. The PGA will be auto-tuned independent of whether it is explicitly set or not. However, the whole SGA (SGA_TARGET) and the PGA (PGA_AGGREGATE_TARGET) will not be auto-tuned, i.e., will not grow or shrink automatically.
  2. If neither SGA_TARGET nor PGA_AGGREGATE_TARGET is set, it will follow the same policy as today; the PGA will be auto-tuned and the SGA will not be auto-tuned, and parameters for some of the sub-components will have to be set explicitly (instead of SGA_TARGET).
  3. If only MEMORY_MAX_TARGET is set, MEMORY_TARGET will default to 0 and it will not auto-tune the SGA and PGA. It will default to 10gR2 behavior within the SGA and PGA.
  4. If sga_max_size is not user set, it will internally set it to MEMORY_MAX_TARGET.
  5. In a text initialization parameter file, if you omit the line for MEMORY_MAX_TARGET and include a value for MEMORY_TARGET, the database automatically sets MEMORY_MAX_TARGET to the value of MEMORY_TARGET. If you omit the line for MEMORY_TARGET and include a value for MEMORY_MAX_TARGET, the MEMORY_TARGET parameter defaults to zero. After startup, you can then dynamically change MEMORY_TARGET to a non-zero value, provided that it does not exceed the value of MEMORY_MAX_TARGET.
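A minimal sketch of the usual AMM setup (the sizes are only examples); both parameters take effect after an instance restart:

alter system set memory_max_target = 2G scope=spfile;
alter system set memory_target = 1536M scope=spfile;

-- after the restart
select component, current_size from v$memory_dynamic_components;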

The following views can be useful

* V$MEMORY_DYNAMIC_COMPONENTS has the current status of all memory components
* V$MEMORY_RESIZE_OPS has a circular history buffer of the last 800 memory resize requests

metalink note 452512.1

Monday, October 5, 2009

Java Code Snippet: JAX-WS & MTOM

To enable MTOM (Message Transmission Optimization Mechanism) several things need to be done; in a nutshell:

1. Add the MTOM annotation on the service class

@javax.xml.ws.soap.MTOM
@WebService(serviceName =


2. The following table lists the relationship between MIME types and Java types

MIME type        Java type
---------------  ----------------------------
image/gif        java.awt.Image
image/jpeg       java.awt.Image
text/plain       java.lang.String
text/xml         javax.xml.transform.Source
application/xml  javax.xml.transform.Source
*/*              javax.activation.DataHandler


3. Edit the WSDL and list the correct MIME type, which will change the schema from

<xs:sequence>
<xs:element name="return" type="xs:base64Binary" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>

to

<xs:sequence>
<xs:element name="return" type="xs:base64Binary" minOccurs="0" maxOccurs="unbounded"
xmime:expectedContentTypes="image/jpeg" xmlns:xmime="http://www.w3.org/2005/05/xmlmime"/>
</xs:sequence>


4. Make the container use the edited WSDL rather than the generated one

@WebService(serviceName = ...., wsdlLocation="WEB-INF/wsdl/edited.wsdl")


5. Deploy the new service

For more info see the NetBeans tutorial page 1, page 2, page 3



To do the above without any modification to the WSDL (using annotations on the Java class)

1. Set the @MTOM as above on the service
@MTOM
@WebService
public class NewWebService {

2. Annotate the data type on the web method as follows
@WebMethod
public void setImage(@XmlMimeType("image/gif")Image img){

3. A return type can also be annotated
@WebMethod
public @XmlMimeType("image/gif")Image getImage( String name){

4. All kinds of data types can be used
@WebMethod
public @XmlMimeType("*/*")Image getFiles


Sunday, October 4, 2009

Rolling Forward Image Copy Backups

On the first day create an incremental level 0 image copy of the database

backup as copy incremental level 0 tag 'roll_update' database;

On the second day create an incremental level 1 backup of the database

backup incremental level 1 for recover of copy with tag 'roll_update' database;

On the third day use the second day's incremental level 1 backup to roll forward the first day's level 0 copy, then take a new incremental level 1 backup, and repeat this step daily.

run{
recover copy of database with tag 'roll_update';
backup incremental level 1 for recover of copy with tag 'roll_update' database;
}


The whole thing could be simplified by running the same script every day from the start:


run{
recover copy of database with tag 'roll_update';
backup incremental level 1 for recover of copy with tag 'roll_update' database;
}


Points to note.

1. The BACKUP INCREMENTAL LEVEL 1... FOR RECOVER OF COPY WITH TAG... command does not always create a level 1 incremental backup. If there is no level 0 image copy backup of a particular datafile, then executing this command creates an image copy backup of the datafile on disk with the specified tag instead of creating the level 1 backup. Thus, the first time the script runs, it creates the image copy of the datafile needed to begin the cycle of incremental updates. In the second run and all subsequent runs, it produces level 1 incremental backups of the datafile.

2. The RECOVER COPY OF DATABASE WITH TAG... command causes RMAN to apply any available incremental level 1 backups to a set of datafile copies with the specified tag. If there is no incremental backup or no datafile copy, the command generates a message but does not generate an error. The first time the script runs, this command has no effect, because there is neither a datafile copy nor a level 1 incremental backup. The second time the script runs, there is a datafile copy (created by the first BACKUP command), but no incremental level 1 backup, so again, the command has no effect. On the third run and all subsequent runs, there is a datafile copy and a level 1 incremental from the previous run, so the level 1 incremental is applied to the datafile copy, bringing the datafile copy up to the checkpoint SCN of the level 1 incremental.

3. Each time a datafile is added to the database, an image copy of the new datafile is created the next time the script runs. The time after that, the first level 1 incremental for that datafile is created, and on all subsequent runs the new datafile is processed like any other datafile.

4. Tags must be used to identify the incremental level 0 datafile copies created for use in this strategy, so that they do not interfere with other backup strategies you implement. If you have multiple incremental backup strategies in effect, RMAN cannot unambiguously create incremental level 1 backups unless you tag level 0 backups. The incremental level 1 backups to apply to those image copies are selected based upon the checkpoint SCNs of the image copy datafiles and the available incremental level 1 backups. (The tag used on the image copy being recovered is not a factor in the selection of the incremental level backups.)
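The payoff of this strategy is restore speed: in a disaster the rolled-forward image copies can be switched to directly instead of being restored first. A sketch, to be run against a mounted database:

run{
switch database to copy;
recover database;
}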

more on incremental backups

Monday, September 21, 2009

Relational Division

Relational division normally requires two NOT EXISTS subqueries over three different tables. The SQL snippet below shows how division could be employed when all the data is in a single table.


PROJ SUBJ
----- -----
1 1
1 2
1 3
2 3
2 1
3 1
3 2
3 3
4 1
5 2
6 3


The hypothetical situation is that there are several projects and each project uses a number of different subjects. The idea is to find the projects that use all the subjects.

The theory (relational algebra division, shown as an image in the original post) and its implementation:


select distinct proj from x where proj not in
(select distinct proj from (select * from
(select distinct proj from x) t1,
(select distinct subj from x) t2) t3
where (proj,subj) not in (select proj,subj from x));


Using the WITH clause in Oracle


with t1 as (select DISTINCT proj from x),
t2 as (select distinct subj from x)
select distinct proj from x where proj not in
( select proj from t1,t2 where (proj,subj) not in (select * from x));


Using NOT EXISTS subqueries

select * from (select distinct proj from x) projects
where not exists(
select * from (select distinct subj from x) subject
where not exists
(select * from x
where x.proj = projects.proj
and x.subj = subject.subj
));
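With the sample data above, all three queries return the same answer, projects 1 and 3, the only projects that use every subject:

PROJ
-----
1
3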


Monday, September 14, 2009

Upgrading 10g to 11g R2 with DBUA

Another post on upgrading from 10.2.0.4 to 11.2.0.3.

1. Install Oracle 11gR2 software only (enterprise or standard to match the source db; this post uses the standard version. More on metalink note 870814.1).
2. In order to be able to downgrade Oracle Enterprise Manager Database Control after upgrading to the new Oracle Database 11g release, save the Database Control files and data before the upgrade. The emdwgrd utility resides in the ORACLE_HOME/bin directory of the new Oracle Database 11g release.
3. Follow these steps to save your Database Control files and data:
1. Install the new Oracle Database 11g release.
This step is not required for an in-place patchset upgrade.
2. Set ORACLE_HOME to your old Oracle home.
This step is not required for an in-place patchset upgrade.
3. Set ORACLE_SID to the SID of the database being upgraded.
4. Set PATH, LD_LIBRARY_PATH, and SHLIB_PATH to point to the Oracle home
from which the database is being upgraded.
5. Go to the Oracle home of the new Oracle Database 11g release.
6. Execute one of the following:
if Single DB
emdwgrd -save -sid old_SID -path save_directory
for RAC
setenv EM_REMCP /usr/bin/scp
emdwgrd -save -cluster -sid old_SID -path save_directory
If the release 10g Oracle home is on a shared device, add -shared to the
previous command line.

Create a TNS name entry in the 10g Oracle home (network/admin) with the same name as the SID before running the above command:
[oracle@server110 std]$ emdwgrd -save -sid std1 -path /home/oracle/emdu
Enter sys password for database std1?

Mon Sep 14 19:14:24 2009 - Verify EM DB Control files ... pass
Mon Sep 14 19:14:24 2009 - Validating DB Connection to std1 ... pass
ENV var EM_REMCP not defined, check if rcp or scp is configured.
RCP = /usr/bin/rcp -rp, REMSH = /usr/bin/rsh
shared = 0
Mon Sep 14 19:14:28 2009 - Creating directory ... created
Mon Sep 14 19:14:29 2009 - Stopping DB Control ... stopped
Mon Sep 14 19:14:34 2009 - Saving DB Control files
... saved
Mon Sep 14 19:14:50 2009 - Recompiling invalid objects ... recompiled
Mon Sep 14 19:14:57 2009 - Exporting sysman schema for std1 ... exported
Mon Sep 14 19:15:56 2009 - DB Control was saved successfully.
Mon Sep 14 19:15:56 2009 - Starting DB Control ... started
Mon Sep 14 19:17:33 2009 - Dump directory was dropped successfully.


7. Copy the Pre-Upgrade Information Tool (utlu112i.sql) from the Oracle Database 11g Release 2 (11.2) ORACLE_HOME/rdbms/admin directory to a directory outside of the Oracle home.
8. Run the above script and examine the output
SQL> SPOOL upgrade_info.log
SQL> @utlu112i.sql
SQL> SPOOL OFF


9. Gather statistics on the schemas as instructed by utlu112i.sql
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('user_name');

10. Grant privileges revoked from public to resolve invalid objects
11. If the timezone warning is
Database is using a timezone file older than version 11.
then check the timezone with
select * from v$timezone_file;

if it is not 11 then to fix the problem use the DBMS_DST PL/SQL package and follow the instructions in "Steps to Upgrade Time Zone File and Timestamp with Time Zone Data" in Oracle Database Globalization Support Guide.

if the timezone warning is
Database is using a timezone file greater than version 11.
then before upgrading the database, patch the 11gR2 $ORACLE_HOME/oracore/zoneinfo/ with a timezone data file of the same version as the one used in the source release database.

12. Create a listener for the new 11g R2 Oracle home
13. Run DBUA from new 11g Home

14. Set the compatible parameter.
alter system set compatible='11.2.0.0.0' scope=spfile ;


15. To upgrade the timezone
shut down the database and start it in upgrade mode
startup upgrade;
exec DBMS_DST.BEGIN_UPGRADE(11);

After the PL/SQL executes, start the database in normal mode and truncate the timezone trigger tables (refer to 1585343.1 for automating the timezone upgrade)
truncate table sys.dst$error_table;
truncate table sys.dst$trigger_table;

VAR numfail number
BEGIN
DBMS_DST.UPGRADE_DATABASE(:numfail,
parallel                  => TRUE,
log_errors                => TRUE,
log_errors_table          => 'SYS.DST$ERROR_TABLE',
log_triggers_table        => 'SYS.DST$TRIGGER_TABLE',
error_on_overlap_time     => TRUE,
error_on_nonexisting_time => TRUE);
DBMS_OUTPUT.PUT_LINE('Failures:'|| :numfail);
END;
/

After the upgrade finishes
BEGIN
DBMS_DST.END_UPGRADE(:numfail);
END;
/
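Once END_UPGRADE completes, the new timezone version can be verified; a quick check:

select * from v$timezone_file;

select property_name, property_value
from database_properties
where property_name like 'DST_%';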

Thursday, August 27, 2009

PHP5 with Apache HTTP Server on Windows

After PHP5 and the Apache server are installed on the Windows machine...

add the following towards the end of httpd.conf


#load the php main library
Loadfile "C:/php-5.2.10-Win32/php5ts.dll"

#load the sapi so that apache can use php
LoadModule php5_module "C:/php-5.2.10-Win32/php5apache2_2.dll"

#set the php.ini location
PHPIniDir "C:/php-5.2.10-Win32"

#Hook the php file extensions
AddHandler application/x-httpd-php .php
AddHandler application/x-httpd-php-source .phps


Make a copy of php.ini-recommended and rename it to php.ini.
Restart the HTTP server and test a PHP page.

Thursday, August 13, 2009

Global Temporary Table & JDBC

A few things to keep in mind

1. If no connection caching is enabled and the data source only has

OracleDataSource dataSource = new OracleDataSource();
dataSource.setURL("jdbc:oracle:thin:@IP:1521:ent10");
dataSource.setUser("asanga");
dataSource.setPassword("asa");

and the global temporary table (GTT) is created with on commit preserve rows,
then when the connection is closed with con.close() the GTT is cleared and subsequent connections will not see any rows.

2. if connection caching is enabled and GTT is on commit preserve rows

Properties prop = new Properties();
prop.setProperty("MinLimit", "4");
prop.setProperty("MaxLimit", "10");
prop.setProperty("InitialLimit", "5");
dataSource.setConnectionCacheProperties(prop);
dataSource.setConnectionCachingEnabled(true);

even after closing the connection with con.close(), subsequent connections will see the rows inserted by the previous connections.

Everything below applies to connections created with a pool, i.e. with connection caching enabled.

3. If the GTT is on commit delete rows then it is better to setAutoCommit(false), otherwise no rows will be visible after the insert.

4. If the GTT is on commit delete then committing the connection will clear the table and other connections will not see any rows, but the commits will add IO.

5. If the GTT is on commit preserve then inserting rows within a PL/SQL function still makes the rows visible to subsequent connections.

6. If the GTT is on commit delete then rows inserted within PL/SQL will only be visible within the PL/SQL block. Neither the calling Java connection nor any subsequent connections will see any rows.
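The on commit delete behavior in points 3 and 4 is easy to reproduce in plain SQL before going through JDBC; a sketch using a hypothetical temp2 table:

create global temporary table temp2 (a varchar2(10), b number) on commit delete rows;

insert into temp2 values ('aaa', 1);
-- visible before the commit
select count(*) from temp2;
commit;
-- cleared by the commit, count is now 0
select count(*) from temp2;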


Example code

for on commit preserve

create global temporary table
temp(a varchar2(10), b number)
on commit preserve rows;


for on commit delete


create global temporary table
temp(a varchar2(10), b number)
on commit delete rows;


function

create or replace function proc return number as
rowscount number;
begin
for i in 1 .. 100
loop
insert into temp values ('aaa '||i, i);
end loop;
select count(*) into rowscount from temp;
return rowscount;
end;
/


import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.logging.Level;
import java.util.logging.Logger;
import oracle.jdbc.OracleTypes;
import oracle.jdbc.pool.OracleDataSource;


public class RowSetTest {

public static void main(String[] args) {

new RowSetTest().test();
}

void test() {

try {

java.util.Properties prop = new java.util.Properties();
prop.setProperty("MinLimit", "4"); // the cache size is 5 at least
prop.setProperty("MaxLimit", "10");
prop.setProperty("InitialLimit", "5"); // create


OracleDataSource dataSource;
dataSource = new OracleDataSource();
dataSource.setURL("jdbc:oracle:thin:@192.168.0.110:1521:ent10");
dataSource.setUser("asanga");
dataSource.setPassword("asa");
dataSource.setConnectionCacheProperties(prop);
dataSource.setConnectionCachingEnabled(true);


Connection conn = dataSource.getConnection();

String fares = " BEGIN ? := proc() ;END; ";

CallableStatement fares_call = conn.prepareCall(fares);

fares_call.registerOutParameter(1, OracleTypes.INTEGER);
fares_call.execute();


System.out.println("count from plsql " + fares_call.getInt(1));

PreparedStatement pr;

System.out.println("befroe close ++++++++++++++++++++++++");
pr = conn.prepareStatement("select count(*) from temp");
ResultSet rs = pr.executeQuery();
if (rs.next()) {
System.out.println("count " + rs.getString(1));
}
rs.close();
pr.close();
conn.close();

System.out.println("after close +++++++++++++++++++++");
conn = dataSource.getConnection();
pr = conn.prepareStatement("select count(*) from temp");
rs = pr.executeQuery();
if (rs.next()) {
System.out.println("count " + rs.getString(1));
}
rs.close();
pr.close();

conn.close();

System.out.println("after close +++++++++++++++++++++");
conn = dataSource.getConnection();
conn.setAutoCommit(false);
pr = conn.prepareStatement("select count(*) from temp");
rs = pr.executeQuery();
if (rs.next()) {
System.out.println("count " + rs.getString(1));
}
rs.close();
pr.close();
conn.close();

System.out.println("after close +++++++++++++++++++++");
conn = dataSource.getConnection();
conn.setAutoCommit(false);
pr = conn.prepareStatement("select count(*) from temp");
rs = pr.executeQuery();
if (rs.next()) {
System.out.println("count " + rs.getString(1));
}
rs.close();
pr.close();
conn.close();


} catch (Exception ex) {
Logger.getLogger(RowSetTest.class.getName()).log(Level.SEVERE, null, ex);
}
}
}


Wednesday, August 12, 2009

SQL Snippet: Parent/Child tables

SQL to find out foreign key relationships between tables

select p.table_name as parent_table,
pc.column_name as parent_table_column,
c.table_name as child_table,
cc.column_name as child_table_column,
c.r_constraint_name as parent_constraint_name,
c.constraint_name as child_constraint_name
from user_constraints p, user_constraints c,
user_cons_columns pc, user_cons_columns cc
where p.constraint_name = c.r_constraint_name
and p.constraint_name = pc.constraint_name
and pc.position = cc.position
and c.constraint_name = cc.constraint_name
and c.table_name = 'table name';


Thursday, July 30, 2009

Refresh Fast

To enable refresh fast on a materialized view with joins

1. create materialized view logs on the base tables with rowids and participating columns
CREATE MATERIALIZED VIEW LOG ON FM_ADDON_SECTOR  WITH ROWID
(addon_id,departure,destination,SECTOR_NO,RETURN_COMPULSORY,STOPOVER_ONLY,
IN_BOUND_FOR_IN_DATE,CALC_METHOD)INCLUDING NEW VALUES;


2. Add the rowid of each participating table to the select list

select ...
,rt.rowid rtid
,oas.rowid oasid
,oadepz.rowid oadepzid
,rdepz.rowid rdepzid
,dadesz.rowid dadeszid
,rdesz.rowid rdeszid
,das.rowid dasid
,daa.rowid daaid
,oaa.rowid oaaid
,oa.rowid oaid
,da.rowid daid
....


3. Then create the materialized view with

CREATE MATERIALIZED VIEW "ROUTES"
nologging
BUILD IMMEDIATE
USING INDEX
REFRESH FAST on commit
AS SELECT ...
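If the view refuses to fast refresh, dbms_mview.explain_mview reports which requirement is missing; a sketch (mv_capabilities_table is created by the stock utlxmv.sql script):

@?/rdbms/admin/utlxmv.sql
exec dbms_mview.explain_mview('ROUTES');

select capability_name, possible, msgtxt
from mv_capabilities_table
where capability_name like 'REFRESH_FAST%';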


Wednesday, July 22, 2009

Java Code Sinppet: OWL Demo with SWRL

In the classpath :
protege.jar
and every jar in the
Protege_3.4\plugins\edu.stanford.smi.protegex.owl
directory
public class OWLDemo {

public static void main(String[] args) {
try {
String uri = "file:///C:/MSC/project/old/travel.owl";
// String uri = "http://localhost:8080/owl/travel.owl";
OWLModel owlModel = ProtegeOWL.createJenaOWLModelFromURI(uri);
OWLNamedClass classs= owlModel.getOWLNamedClass("Trade_Association");

Collection c = classs.getNamedSubclasses();
Iterator itr = c.iterator();

while(itr.hasNext()){
OWLNamedClass cc = (OWLNamedClass) itr.next();
System.out.println(cc.getNestedBrowserText());
System.out.println(cc.getName());
}

SWRLFactory factory = new SWRLFactory(owlModel);
// factory.replaceImps(owlModel); // to run the same query again uncomment
SWRLImp imp = factory.createImp("Query-2", "Use_Inconjunction_With(?x, ?y) -> sqwrl:select(?x, ?y)");

SWRLRuleEngineBridge bridge = BridgeFactory.createBridge("SWRLJessBridge", owlModel);
bridge.infer();
SQWRLResult result = bridge.getSQWRLResult("Query-2");
while (result.hasNext()) {
System.out.println("X: " + result.getObjectValue("?y").getIndividualName());
System.out.println("Y: " + result.getObjectValue("?x").getIndividualName());

result.next();
}

} catch (Exception ex) {
// don't swallow the exception silently
ex.printStackTrace();
}

}

}


A Useful Guide

Friday, July 17, 2009

Enterprise Manager Error

Symptoms

emctl start dbconsole 
fails and emdctl.trc has the following
Thread-4135626432 WARN  http: snmehl_connect: connect failed to
(hostname:3938): Connection refused (error = 111)


Caused by agentTZRegion pointing to the wrong timezone in the emd.properties file (more on metalink note 751634.1).

Find a supported timezone in the $ORACLE_HOME/sysman/admin/supportedtzs.lst file

and set the TZ value with
export TZ=time_zone
$ORACLE_HOME/bin/emctl resetTZ agent

emctl start dbconsole


If emctl has never been started, then when trying to reset the TZ (more on metalink note 388280.1)


ERROR at line 1:
ORA-20233: Invalid agent name hostname:3938
ORA-06512: at "SYSMAN.MGMT_TARGET", line 3737
ORA-06512: at line 1
will occur. This could be confirmed with

select target_name, target_type from mgmt_targets;


To execute the mgmt_target.set_agent_tzrgn procedure, there should be a target called 'oracle_emd' in the table.

To fix this

$ export EMDROOT=$ORACLE_HOME
$ export EMSTATE=$ORACLE_HOME

and comment out the agentTZRegion parameter in the AGENT_HOME/sysman/config/emd.properties file. Then reset the agent timezone and start again


emctl resetTZ agent
emdctl validateTZ agent
emctl start dbconsole


Even after all of the above it may still not work; the following, described in metalink note 821137.1, finally did the trick.

1.
 grant execute on utl_smtp to public;
grant execute on utl_tcp to public;
grant execute on utl_file to public;
grant execute on dbms_sql to public;
grant execute on DBMS_OBFUSCATION_TOOLKIT to public;


2. Delete the files manually
$ORACLE_HOME/hostname_sid
$ORACLE_HOME/oc4j/j2ee/OC4J_DBConsole_hostname_sid


3. Create DB Control again
emca -config dbcontrol db -repos create


More info is available in the following metalink notes

Note 278100.1 - How to Drop, Create and Recreate DBconsole in Database R2.
Note 750768.1 - Quick Guide to Avoid Issues in Creating DB Control Using EMCA.

Monday, July 13, 2009

OLAPIHISTORYRETENTION

Caused by
BUG 3386542 - OLAPI triggers that are installed with the seed database (OLAPISTARTUPTRIGGER and OLAPISHUTDOWNTRIGGER) do not handle the absence of Oracle OLAP.


Can also occur when Oracle Standard Edition is installed, in which OLAP is missing.

The solution is detailed in metalink note 266728.1


Disable OLAPISTARTUPTRIGGER and OLAPISHUTDOWNTRIGGER to stop the error from being generated.

ALTER TRIGGER SYS.OLAPISTARTUPTRIGGER DISABLE;
ALTER TRIGGER SYS.OLAPISHUTDOWNTRIGGER DISABLE;



Friday, July 3, 2009

Changing OBIEE default port

OBIEE's default port is 9704; to change it...

Before making any changes back up the files.

1. Open the file named default-web-site.xml in OracleBI\oc4j_bi\j2ee\home\config and change the port value

2. Change the port in OracleBI\oc4j_bi\j2ee\home\applications\bioffice\bioffice\WEB-INF\bioffice.xml file

3. Change the port in OracleBI\xmlp\XMLP\Admin\Configuration\xmlp-server-config.xml

4. Change the port in OracleBIData\web\config\instanceconfig.xml

If the port is changed to 80 (the web port), note that on Linux only the root user can run an application which binds to a port less than 1024.

After the changes restart OC4J

Wednesday, June 24, 2009

Stats of My Session

for 10g
select name,value 
from v$mystat,v$statname 
where v$mystat.statistic#=v$statname.statistic# 
and v$statname.statistic# in (11,12,50,51,52,53,54,62,134,179);

for 11gR1
select name,value 
from v$mystat,v$statname 
where v$mystat.statistic#=v$statname.statistic# 
and v$statname.statistic# in (11,12,53,54,55,56,57,58,59,60,61,62,64,65,66,68,72,146);

for 11gR2 (11.2.0.1)
select name,value 
from v$mystat,v$statname 
where v$mystat.statistic#=v$statname.statistic# 
and v$statname.statistic# in (13,14,63,64,65,66,67,68,69,70,71,72,75,76,77,79,83,169);

for 11gR2 (11.2.0.2)
select name,value 
from v$mystat,v$statname 
where v$mystat.statistic#=v$statname.statistic# 
and v$statname.statistic# in (15,16,66,67,68,69,70,71,72,73,74,75,78,79,80,82,86,175);

for 11gR2 (11.2.0.3)
select name,value 
from v$mystat,v$statname 
where v$mystat.statistic#=v$statname.statistic# 
and v$statname.statistic# in (16,17,68,69,70,71,72,73,74,75,76,78,81,82,83,85,89,178);

for all versions
select name,value 
from v$mystat,v$statname 
where v$mystat.statistic#=v$statname.statistic# 
and v$statname.name in ('CPU used when call started',
'CPU used by this session',
'db block gets',
'db block gets from cache',
'db block gets from cache (fastpath)',
'db block gets direct',
'consistent gets',
'consistent gets from cache',
'consistent gets from cache (fastpath)',
'consistent gets - examination',
'consistent gets direct',
'physical reads',
'physical reads direct',
'physical read IO requests',
'physical read bytes',
'consistent changes',
'physical writes',
'redo size');


Tuesday, June 23, 2009

ADMon 2.0

ADMon 2.0 2.5 3.1 3.3 3.4 3.5 3.8 new 3.9 (md5sum b925dec809a6d7557d5492fd620348b3, chksum 1632924045 bytes 18755192)
User Guide (md5sum 23c7ccfa44d2f2f6015a52c249bca644, chksum 3726284878 bytes 356797)

What's new on ADMon 3.1

Two outlier detection methods have been added to ADMon. One is the modified Z-Score method and the other is the Box Plot method. Under the box plot method it is possible to detect either mild outliers or extreme outliers.

Once an outlier is detected it can be handled in seven different ways. These can be configured under setup -> Outlier Handling.




What's new on ADMon 3.3
One Click Save : Save the output you see on ADMon to a preset file location (Defaults to Desktop on Windows and home directory on Linux).
Table data could be saved to an Excel file (xlsx or xls format).
Graphs could be saved as either PNG or JPG files.
Both table data and graphs saved in single PDF file.
File formats and locations could be changed with setup -> One Click Save



What's new on ADMon 3.8
Remove rows from the table view. Rows will be removed only from the front end view, not from the database table.
Select the rows to be removed and right click to get the remove popup.
The connected database name (given at ADMon connection creation) or SID (if no name is given at ADMon connection creation) is shown next to the ADMon title.


What's new on ADMon 3.9
Important : ADMon 3.9 does not include any JDBC libraries (ojdbc*.jar). Please download the relevant JDBC library for your database and JDK combination and copy it to the lib folder shipped with ADMon. The JDBC jar file must be one of the following: ojdbc[5|6|7|8].jar.
ADMon 3.9 is compiled with JDK8.
Removal of rows now takes into account user sorting (the user changing the row ordering).

Wednesday, June 17, 2009

Oracle10gR2 On RHEL 5/OEL 5 (x86_64)

Changes to the packages and pre-req parameters

more on metalink 421308.1

Jan 19, 2009:

1. The required packages in this article have been changed from the Release Notes because of non-existent names and/or versions given in the Release Notes.
a) Packages versions changed
1. compat-db-4.1 to compat-db-4.2
2. glibc-2.3 (i386) to glibc-2.5 (i386)
3. libstdc++-3.4.3-9.EL4 to libstdc++-4.1.1
4. libstdc++-devel-3.4.3-22.1 to libstdc++-devel-4.1.1

b) Package names changed
1. pdksh-5.2 to ksh-20060214-1.4
2. gnome-libs to libgnome-2.16.0 and libgnomeui-2.16.0

c) Packages Added
1. libXp-1.0.0 (i386) due to Bug 7680459

2. Updated kernel parameter net.ipv4.ip_local_port_range as per a correction request from the Oracle development team.

Feb 11, 2009:

1. Updated the physical memory required as 1GB

2. Removed swap space requirements when physical memory < 1GB

3. Added information about disabling SELinux
Feb 27, 2009:

1. Removed kernel parameter fs.file-max = 65536 , because the default value 202804 is more than 65536.

2. Corrected typo:
glibc-2.5-12 (i386) to glibc-2.5-12 (i686)

Mar 2, 2009:
Removed kernel parameters net.ipv4.tcp_wmem and net.ipv4.tcp_rmem as they are not required. (Reference: unpublished Bugs 7647281 and 7647303)

Tuesday, May 26, 2009

Extracting Oracle DDL Statements


select
dbms_metadata.get_ddl('TABLE','C_FEATURE','QA')
From dual;


The parameters are object type, object name, and owner schema name.
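The generated DDL can be trimmed with session-level transform parameters; for example, to drop the storage and segment attribute clauses (a sketch):

exec dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'STORAGE', false);
exec dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SEGMENT_ATTRIBUTES', false);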


Wednesday, May 20, 2009

Cache Buffers Chains

Find latches with high contention

select addr,gets,misses,sleeps
from v$latch_children
where name='cache buffers chains'
and misses > 100 order by 3 desc;

ADDR GETS MISSES SLEEPS
---------------- ---------- ---------- ----------
000000015BF383D8 38773656 8016 6
000000015B6E0968 11215621 4215 0
000000015B5A27B8 26410038 3834 0
000000015B58B5D0 14755159 2289 6



use the address to get the file id and block number


select file#,dbablk,state,class from x$bh where hladdr='000000015BF383D8';

FILE# DBABLK STATE CLASS
---------- ---------- ---------- ----------
1 78757 1 1
11 614483 1 4
9 249625 1 1
11 714956 1 1
3 33047 1 1


using the block number find out the object and the owner


select owner,segment_name
from dba_extents
where file_id=&p1
and &p2 between
block_id and block_id + blocks -1;

Enter value for p1: 11
Enter value for p2: 614483

OWNER SEGMENT_NAME
--------- ------------------
TRNSTG STATIC_CACHE_QUEUE


Investigate the SQLs accessing the objects for hot blocks


Metalink (Doc ID: 163424.1) provides the following query for identifying hot blocks


column segment_name format a35
select /*+ RULE */
e.owner ||'.'|| e.segment_name segment_name,
e.extent_id extent#,
x.dbablk - e.block_id + 1 block#,
x.tch,
l.child#
from
sys.v$latch_children l,
sys.x$bh x,
sys.dba_extents e
where
x.hladdr = &address and
e.file_id = x.file# and
x.hladdr = l.addr and
x.dbablk between e.block_id and e.block_id + e.blocks -1
order by x.tch desc ;


Metalink recommendations

Depending on the TCH column (the number of times the block is hit by a SQL
statement), you can identify a hot block. The higher the value of the TCH column,
the more frequently the block is accessed by SQL statements.

In order to reduce contention for this object the following mechanisms can be put in place:

1) Examine the application to see if the execution of certain DML and SELECT statements
can be reorganized to eliminate contention on the object.

2) Decrease the buffer cache -although this may only help in a small amount of cases.

3) DBWR throughput may have a factor in this as well.
If using multiple DBWR's then increase the number of DBWR's.

4) Increase the PCTFREE for the table storage parameters via ALTER TABLE
or rebuild. This will result in less rows per block.

5) Consider implementing reverse key indexes
(if range scans aren't commonly used against the segment)

Wednesday, March 25, 2009

JProfiler Setup

Set both the x86 and x64 directories on the LD_LIBRARY_PATH

/home/demo/test/jprofiler4/bin/linux-x86:/home/demo/test/jprofiler4/bin/linux-x64

If not, the following error occurs

Error occurred during initialization of VM
Could not find agent library on the library path or in the local directory: jprofilerti


Friday, March 13, 2009

TM enq: Locking

If an index does not exist on a foreign key column, then while an update on that foreign key column is uncommitted, no update will be possible on the referenced key column in the parent table (the parent-key update needs a share lock on the child table). Other columns in the parent table are not affected.

create table x (a number primary key, b number);
create table y (c number,d number, foreign key (c) references x(a));

begin
for i in 1 .. 10
loop
insert into x values (i, i + 10);
end loop;
end;
/

begin
for i in 1 .. 5
loop
insert into y values (i, i + 5);
end loop;
end;
/

session 1
update y set c = 8 where d=10;

1 row updated.

session 2
update x set a = 12 where a = 7;

hangs.....

query dba_waiters from session 1
select lock_type,mode_held,mode_requested,lock_id1 from dba_waiters;


LOCK_ MODE_HELD MODE_REQUE LOCK_ID1
----- ---------- ---------- ----------
DML Row-X (SX) Share 227738



lock_id1 gives the object id of the child table which has the lock

select object_name from user_objects where object_id=227738;
object_name
-----------
Y

Querying v$lock, it is possible to find the parent table as well.

There's also an insert + update/delete combination which will leave one session hanging.

session 1
insert into x values (11,21);

session 2
update x set a = 12 where a = 7;

or

delete from x where b = 18;

hangs....

query dba_waiters from session 1
select lock_type,mode_held,mode_requested,lock_id1 from dba_waiters;


LOCK_ MODE_HELD MODE_REQUE LOCK_ID1
----- ---------- ---------- ----------
DML Row-X (SX) Share 227738



select object_name from user_objects where object_id=227738;
object_name
-----------
Y

Running the delete or update statement first and the insert statement second doesn't cause this blocking.
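The usual fix is to index the foreign key column, after which the share lock on the child table (and the hang) goes away; the index name here is only illustrative:

create index y_c_ix on y(c);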