According to the Oracle documentation, "In the first step of I/O calibration with the DBMS_RESOURCE_MANAGER.CALIBRATE_IO procedure, the procedure issues random database-block-sized reads, by default, 8 KB, to all data files from all database instances. This step provides the maximum IOPS, in the output parameter max_iops, that the database can sustain. The value max_iops is an important metric for OLTP databases. The output parameter actual_latency provides the average latency for this workload. When you need a specific target latency, you can specify the target latency with the input parameter max_latency (specifies the maximum tolerable latency in milliseconds for database-block-sized IO requests)."
The second step of calibration using the DBMS_RESOURCE_MANAGER.CALIBRATE_IO procedure issues random 1 MB reads to all data files from all database instances and yields the output parameter max_mbps, which specifies the maximum MBPS of I/O that the database can sustain. This is an important metric for data warehouses.
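For context, a calibration run can be started with an anonymous PL/SQL block along the following lines. This is only a minimal sketch; the num_physical_disks and max_latency values are assumed placeholders and should be adjusted to the storage being tested.

SET SERVEROUTPUT ON

DECLARE
  l_max_iops       PLS_INTEGER;
  l_max_mbps       PLS_INTEGER;
  l_actual_latency PLS_INTEGER;
BEGIN
  -- num_physical_disks and max_latency are placeholder values (assumptions);
  -- adjust them to match the storage being calibrated.
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 4,
    max_latency        => 20,
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_actual_latency);

  DBMS_OUTPUT.PUT_LINE('max_iops       = ' || l_max_iops);
  DBMS_OUTPUT.PUT_LINE('max_mbps       = ' || l_max_mbps);
  DBMS_OUTPUT.PUT_LINE('actual_latency = ' || l_actual_latency);
END;
/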
Though the documentation says that, to determine the IOPS, the CALIBRATE_IO procedure issues database-block-sized reads, this is not the case when the database has non-default block size tablespaces. When non-default block size tablespaces exist, the block size used in the first step is the maximum block size present in the database. For example, if there are only 8K block size tablespaces and 2K block size tablespaces, the first step (the IOPS phase) will use 8K reads, which may make it look as if the default block size is being used. However, if there are 8K block size tablespaces and 32K block size tablespaces, the first step will use 32K as the block size for the read operations. Even with a mixture of block sizes, for example 8K, 16K and 32K, the first step still chooses 32K as the block size for the reads.
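A mixed block size setup of this kind can be reproduced on a test database by adding a tablespace with a non-default block size; a matching db_nK_cache_size must be configured first. The cache size, tablespace name and data file path below are hypothetical values chosen only for illustration.

-- Configure a 32K buffer cache, then create a 32K block size tablespace.
-- The size, name and path are placeholders; adjust for your environment.
alter system set db_32k_cache_size = 64M;

create tablespace tbs32k
  datafile '/u01/oradata/ORCL/tbs32k01.dbf' size 1G
  blocksize 32K;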
This behavior can be observed by tracing the I/O calibration operations. To enable tracing, set the following events before running the I/O calibration:
alter system set events '56724 trace name context forever, level 2';
alter system set events '10298 trace name context forever, level 1';

To turn tracing off, set the following after the I/O calibration has finished:

alter system set events '56724 trace name context off';
alter system set events '10298 trace name context off';

Tracing does have some overhead, and the final values seen while tracing is enabled may be lower than the values seen from a calibration run without tracing. When tracing is enabled, several trace files are created in the trace file directory.
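As a side note, the location of the trace file directory can be confirmed with a query against v$diag_info, for example:

select value
from   v$diag_info
where  name = 'Diag Trace';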
The calibration trace files are of the form

SID_ora_pid.trc (eg: std11g21_ora_2307.trc)
SID_cs##_pid.trc (eg: std11g21_cs00_2314.trc, std11g21_cs01_2316.trc)

The files named SID_cs##_pid.trc contain the block sizes used by the random reads. These can be identified by the nbyt field in each line.
ksfd_osdrqfil:fob=0x8e4ffe38 bufp=0x793f7e00 blkno=4750400 nbyt=32768 flags=0x4
ksfd_osdrqfil:fob=0x8e4fff70 bufp=0x76452000 blkno=1144832 nbyt=1048576 flags=0x4
ksfd_osdrqfil:fob=0x8e4fec08 bufp=0x878b7600 blkno=4967936 nbyt=16384 flags=0x4

The total number of read requests issued with each block size can be counted with the following shell commands. The database in this test has only 8K and 32K block size tablespaces.
SQL> select block_size,count(*) from v$datafile group by block_size;

BLOCK_SIZE   COUNT(*)
---------- ----------
     32768          1
      8192          9

SQL> select name,value from v$parameter where name='db_block_size';

NAME            VALUE
--------------- -----
db_block_size   8192

$ grep nbyt=8192 *cs*trc | wc -l
0
$ grep nbyt=32768 *cs*trc | wc -l
1394478
$ grep nbyt=1048576 *cs*trc | wc -l
348904

The 1048576 reads are the 1 MB requests issued during the second step. From the above output it can be seen that even though the database had 9 tablespaces with the default 8K block size and only one tablespace with a non-default block size of 32K, all the read requests issued during the first step were 32K reads and not a single 8K read request was issued. Because of this, the IOPS values observed differ depending on whether the database has tablespaces with non-default block sizes.
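The results of the most recent calibration run can be compared across such tests by querying DBA_RSRC_IO_CALIBRATE, which holds the values from the latest CALIBRATE_IO execution; a simple query such as the following can be used:

select max_iops, max_mbps, max_pmbps, latency, num_physical_disks
from   dba_rsrc_io_calibrate;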
Tested on 11.2.0.3, 11.2.0.4 and 12.1.0.1. It is not clear at this point whether this is a bug or expected behavior, even though the documentation says IOPS is determined using database-block-sized (by default 8K) reads.
Update 18 April 2017
The SR has been closed. The issue can be tracked with bug 17434257 : DISCREPANCIES IN THE VALUES OBSERVED WHEN IO CALIBRATION IS RUN.
Related Post
I/O Elevator Comparison Using CALIBRATE_IO