Thursday, November 24, 2011

APConsole

I was tired of not having something similar to emconsole for standard edition databases.

So I developed APConsole.

APConsole Features
  • Operational even when CONTROL_MANAGEMENT_PACK_ACCESS is set to NONE and the pack-dependent performance views don't get populated.
  • Does not use any of the views belonging to the diagnostic pack or the tuning pack.
  • One APConsole installation monitors Oracle Database 10gR2 through 11gR2.
  • Works with both single instance and Real Application Cluster Databases.
  • Get performance visualisation of standby databases, even physical standby databases in mount mode (i.e. not open read-only with the 11g Active Data Guard feature).
  • Monitor multiple wait classes, and wait events not related to them, in a single performance view.
  • Drill down IO Functions by file types.
  • View multiple IO Functions in a single performance view with respective file types.
  • APConsole is a client-side tool that performs only read operations on the database, adding no write overhead.
  • Easy access to historical data and extract historical performance views as PDFs or image files.
Notification
  • Email performance views as a scheduled notification or on request with a few clicks.
  • Send email notifications using a standard email server or free email services (Hotmail, Gmail). Other email servers can be integrated into APConsole on request.
Save
  • APConsole allows performance views to be saved quickly and easily.
  • Save multiple performance views into single PDF or as PNG images.
Configure
  • Set the desired degree of granularity for wait event reporting, ranging from microseconds to seconds.
  • Set threshold values on wait class level or on event level to filter out low value events.
  • Set number of hours visible on the performance views.
  • Change the look and feel of APConsole to fit the client OS, and APConsole's behaviour to fit the DBA's needs.

Full Page Demo



Comparison of EM Console vs APConsole
APConsole has a longer x-axis than EM Console, therefore the graph's shape may seem wider, but the time values and y-axis values are the same.

Waits (APConsole vs EM Console)



IO Throughput (APConsole vs EM Console)


IO Rate (APConsole vs EM Console)


IO Type : Throughput (APConsole vs EM Console)


IO Type : Rate (APConsole vs EM Console)


Database load was generated with DBMS_RESOURCE_MANAGER.CALIBRATE_IO, which gave the output max_iops = 119, latency = 7, max_mbps = 19.

Tuesday, November 22, 2011

ORA-00600 [kqlindp-bad-user]

The ORA-00600 [kqlindp-bad-user] error started to happen in a two node RAC running 11.1.0.7 on a 64-bit RHEL system.

The effect on the system was that both expdp and impdp gave the following errors on the client side, with ORA-00600 [kqlindp-bad-user] being logged in the alert.log. For import
UDI-31623: operation generated ORACLE error 31623
ORA-31623: a job is not attached to this session via the specified handle
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2862
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4052
ORA-06512: at line 1
For export
UDE-31623: operation generated ORACLE error 31623
ORA-31623: a job is not attached to this session via the specified handle
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2862
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4052
ORA-06512: at line 1
The only metalink note (at the time of writing) related to this ORA-600 was ORA-600 [kqlindp-bad-user] [ID 753007.1], which wasn't much help.

Another issue was that execution of PL/SQL packages (even the user's own, not just PL/SQL packages in a different schema) threw the ORA-00600 [kqlindp-bad-user] error. The error came up on the client side (JDBC connections) as well as being logged on the server side.

There were plenty of metalink notes for errors UDI-31623 and ORA-31623 but none of those solutions helped.

After raising an SR, the following steps were recommended, which resolved the issue.

1. Execute the following query and examine the output
select owner#, name
from sys.obj$ where owner# not in (select user# from sys.user$) and type# != 10;
This returned a set of PL/SQL objects with their owner IDs; in this case all the objects appeared to belong to one owner.

2. After submitting the above query output, Oracle came back with: "The output of the previous query shows non-existent users referenced by table sys.obj$. If you run hcheck.full it may show some data dictionary inconsistency. The ORA-600 error is not directly a consequence of the orphan obj$ rows; it is caused by one or more of the bad obj# being present in the objerror$ table."

The offered solution was to take a full backup and then, to fix the issue, remove the rows from objerror$:
startup restrict
delete from sys.objerror$ where obj# in (select obj# from sys.obj$ where owner# = <owner_id from the previous query> and owner# not in (select user# from sys.user$));
commit;
shutdown abort;
The shutdown abort is required to save changes to the data dictionary. Once the database is shutdown, you must then perform a startup and a clean shutdown before it can be used again.
startup restrict;
shutdown immediate;
startup;
3. This resolved the ORA-00600 [kqlindp-bad-user] coming up when executing PL/SQL packages, but expdp and impdp still gave errors, this time different ones.
ORA-31626: job does not exist
ORA-31637: cannot create job SYS_EXPORT_FULL_01 for user SYSTEM
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 601
ORA-39080: failed to create queues "" and "" for Data Pump job
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPC$QUE_INT", line 1550
ORA-01403: no data found
4. After referring to the metalink notes
Impdp or Expdp Fails With ORA-31626 and ORA-31637 [ID 345198.1]
How To Reload Datapump Utility EXPDP/IMPDP [ID 430221.1] and
Invalid Objects After Installing a 10.2 Patchset [ID 361025.1]

Executing @?/rdbms/admin/catproc.sql plus the steps in 361025.1 and @?/rdbms/admin/utlrp resolved the issue.

Friday, November 18, 2011

CLSC_RECEIVE: (*) ERROR 2

crsd.log would have lines similar to
2011-11-17 19:12:19.860: [ COMMCRS][1455892800]clsc_receive: (0x1582a560) error 2
According to metalink note 733659.1 "This is basically a warning/informational message. The "error 2" is a generic failure code for a single message transmit."

More on Crsd.Log Showing The Following Message: CLSC_RECEIVE: (*) ERROR 2 [ID 733659.1]

Wednesday, November 16, 2011

Multiple Shared Memory Segments Created by Default on 11.2.0.3

There appears to be a change in the way shared memory segments are allocated in 11.2.0.3 when using automatic shared memory management (ASMM, not AMM). Metalink note 15566.1 describes the three possible ways shared memory segments are allocated, depending on the SHMMAX value and the size of the SGA: "one-segment, contiguous multi-segment, and non-contiguous multi-segment."

When the SHMMAX value is larger than the SGA, only one shared memory segment is created. This was the behavior seen from 10gR2 up to 11.2.0.2. But as of 11.2.0.3, no matter how big SHMMAX is, three segments get created.

Below is a comparison of segment creation for two databases, one on 11.2.0.2 and the other on 11.2.0.3. Both databases reside on the same physical server.

The SHM* values on the server.
ipcs -lm

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 67108864
max total shared memory (kbytes) = 17179869184
min seg size (bytes) = 1
The maximum shared segment size possible is 64GB and the system has 12GB of RAM. The server is running RHEL 5 (2.6.18-194.el5). There are no other databases or applications running besides these two databases. Only one of these databases will be up and running at any one time. The server was rebooted after shutting down one database and before starting the other.
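As a quick sanity check, the ipcs -lm limits above can be converted to human-readable units with plain shell arithmetic (the kbyte value is copied from the output above):

```shell
# Convert the ipcs -lm "max seg size (kbytes)" value shown above
# into gigabytes: 67108864 KB / 1024 / 1024 = 64 GB.
max_seg_kb=67108864
max_seg_gb=$((max_seg_kb / 1024 / 1024))
echo "max seg size: ${max_seg_gb} GB"
```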

First the 11.2.0.2 database information
memory_max_target                    big integer 0
memory_target                        big integer 0
pga_aggregate_target                 big integer 1200M
sga_max_size                         big integer 3600M
sga_target                           big integer 3600M
Find the shared memory segments used by this database.
$ORACLE_HOME/bin/sysresv

IPC Resources for ORACLE_SID "ent11g2" :
Shared Memory:
ID              KEY
60751874        0x3b6207ac
Semaphores:
ID              KEY
10551318        0x74afa7cc
Oracle Instance alive for sid "ent11g2"
The shared memory ID and key will be used to match against the key shown by the ipcs command. Since this is the only database running at this time, and shmmax is larger than the SGA size, only one shared segment is expected.
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x3b6207ac 60751874   oracle    660        3776970752  29
The key and shmid match confirming that this is the shared segment created for this database.
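The matching can also be scripted; the awk sketch below picks out the ipcs -m line whose shmid equals the ID reported by sysresv. The sample line from the output above stands in for a live ipcs -m run:

```shell
# Find the ipcs -m line whose shmid matches the ID from sysresv.
# On a live system the heredoc would be replaced by: ipcs -m | awk ...
shmid=60751874
match_line=$(awk -v id="$shmid" '$2 == id { print $1, $2, $5 }' <<'EOF'
0x3b6207ac 60751874   oracle    660        3776970752  29
EOF
)
echo "$match_line"    # key, shmid and segment size in bytes
```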

Now the 11.2.0.3 database information (similar size sga as 11.2.0.2)
memory_max_target                    big integer 0
memory_target                        big integer 0
pga_aggregate_target                 big integer 1200M
sga_max_size                         big integer 3600M
sga_target                           big integer 3600M
Finding the shared memory segment for the database as above
$ORACLE_HOME/bin/sysresv

IPC Resources for ORACLE_SID "ent11g3" :
Shared Memory:
ID              KEY
61079554        0x00000000
61112324        0x00000000
61145093        0xf9058128
Semaphores:
ID              KEY
10977302        0xf1843220
Oracle Instance alive for sid "ent11g3"
Three segments have been created. Looking at the ipcs command output
ipcs -m

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00000000 61079554   oracle    640        33554432     28
0x00000000 61112324   oracle    640        3741319168   28
0xf9058128 61145093   oracle    640        2097152      28
Instead of one as before, three shared segments are created. If the sizes of these segments are added up, the total (33554432 + 3741319168 + 2097152 = 3776970752) equals the size of the single segment created for the 11.2.0.2 database.
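The arithmetic is easy to verify in the shell; the three sizes are taken from the ipcs -m output above:

```shell
# Sum the three 11.2.0.3 segment sizes from ipcs -m above; the
# total should equal the single 11.2.0.2 segment (3776970752 bytes).
total=$((33554432 + 3741319168 + 2097152))
echo "total: $total bytes"
```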

This is the behavior on all 11.2.0.3 databases, whether upgraded from 11.2.0.2 or created new as in this case. Another observation: in all cases the last segment is 2M in size and the second segment is the largest, close to the actual SGA size, while the size of the first segment changes depending on the SGA size.

Unless this is a bug introduced in 11.2.0.3, this signals a change in behavior for allocating shared memory segments when using ASMM.

Interestingly, even with AMM the number of shared memory segments created goes from 1 on 11.2.0.2 to 3 on 11.2.0.3 (this may or may not be related to the above observations, and ipcs is not the correct way to find out about shared memory segments when using AMM; more in metalink note 731658.1).

With AMM on 11.2.0.2
$ORACLE_HOME/bin/sysresv
IPC Resources for ORACLE_SID "ent11g2" :
Shared Memory:
ID              KEY
3244034         0x3b6207ac
Semaphores:
ID              KEY
3801110         0x74afa7cc
Oracle Instance alive for sid "ent11g2"
With ipcs -m
ipcs -m

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x3b6207ac 3244034    oracle    660        4096        0
With AMM on 11.2.0.3
$ORACLE_HOME/bin/sysresv
IPC Resources for ORACLE_SID "ent11g3" :
Shared Memory:
ID              KEY
3047426         0x00000000
3080196         0x00000000
3112965         0xf9058128
Semaphores:
ID              KEY
3506198         0xf1843220
Oracle Instance alive for sid "ent11g3"
ipcs -m output shows three segments
ipcs -m

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00000000 3047426    oracle    640        4096       0
0x00000000 3080196    oracle    640        4096       0
0xf9058128 3112965    oracle    640        4096       0

Update 29 November 2011
Had raised an SR about this, and Oracle seems to be missing the point that 11.2.0.3 is creating more segments than 11.2.0.2, instead focusing on the SHMMAX value, which was set to 64GB, saying it's too much. This has no effect on the number of segments; the system only creates segments of the requested sizes. The only good thing to come out of this so far is a request for a strace output. Start the database with
strace -f -o start.out sqlplus "/ as sysdba"
Once started, exit the SQL prompt. This may get stuck and might require killing the process manually.
ipcs -m output
ipcs -m
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00000000 8880130    oracle    640        33554432    25
0x00000000 8912900    oracle    640        3741319168  25
0xf9058128 8945669    oracle    640        2097152     25
sysresv output
$ORACLE_HOME/bin/sysresv

IPC Resources for ORACLE_SID "ent11g3" :
Shared Memory:
ID              KEY
8880130         0x00000000
8912900         0x00000000
8945669         0xf9058128
Semaphores:
ID              KEY
9502742         0xf1843220
Oracle Instance alive for sid "ent11g3"
From the strace output the segment creation system calls can be picked out; the size and the returned segment ID are visible in each shmget call below.
21293 shmget(IPC_PRIVATE, 33554432, IPC_CREAT|IPC_EXCL|0640) = 8880130
21293 mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b8cd1281000
21293 getrlimit(RLIMIT_STACK, {rlim_cur=32768*1024, rlim_max=RLIM_INFINITY}) = 0
21293 open("/proc/self/maps", O_RDONLY) = 11


21293 shmget(IPC_PRIVATE, 3741319168, IPC_CREAT|IPC_EXCL|0640) = 8912900
21293 getrlimit(RLIMIT_STACK, {rlim_cur=32768*1024, rlim_max=RLIM_INFINITY}) = 0
21293 open("/proc/self/maps", O_RDONLY) = 11

21293 shmget(0xf9058128, 2097152, IPC_CREAT|IPC_EXCL|0640) = 8945669
21293 shmget(0xf9058129, 4096, 0)       = -1 ENOENT (No such file or directory)
21293 shmget(0xf9058129, 4096, 0)       = -1 ENOENT (No such file or directory)
21293 shmget(0xf9058129, 0, 0)          = -1 ENOENT (No such file or directory)
21293 shmget(0xf905812a, 4096, 0)       = -1 ENOENT (No such file or directory)
21293 shmget(0xf905812a, 4096, 0)       = -1 ENOENT (No such file or directory)
21293 shmget(0xf905812a, 0, 0)          = -1 ENOENT (No such file or directory)
21293 shmget(0xf905812b, 4096, 0)       = -1 ENOENT (No such file or directory)
21293 shmget(0xf905812b, 4096, 0)       = -1 ENOENT (No such file or directory)
21293 shmget(0xf905812b, 0, 0)          = -1 ENOENT (No such file or directory)
21293 getrlimit(RLIMIT_STACK, {rlim_cur=32768*1024, rlim_max=RLIM_INFINITY}) = 0
21293 open("/proc/self/maps", O_RDONLY) = 11
However, on the 11.2.0.2 database only one segment creation system call appears in the strace output, matching the entire SGA size.
ipcs -m

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x3b6207ac 9142278    oracle    660        3776970752   30


21778 munmap(0x2af10d7c3000, 4096)      = 0
21778 shmget(0x3b6207ac, 3776970752, IPC_CREAT|IPC_EXCL|0660) = 9142278
21778 shmget(0x3b6207ad, 4096, 0)       = -1 ENOENT (No such file or directory)
21778 shmget(0x3b6207ad, 4096, 0)       = -1 ENOENT (No such file or directory)
21778 shmget(0x3b6207ad, 0, 0)          = -1 ENOENT (No such file or directory)
21778 shmget(0x3b6207ae, 4096, 0)       = -1 ENOENT (No such file or directory)
21778 shmget(0x3b6207ae, 4096, 0)       = -1 ENOENT (No such file or directory)
21778 shmget(0x3b6207ae, 0, 0)          = -1 ENOENT (No such file or directory)
21778 shmget(0x3b6207af, 4096, 0)       = -1 ENOENT (No such file or directory)
21778 shmget(0x3b6207af, 4096, 0)       = -1 ENOENT (No such file or directory)
21778 shmget(0x3b6207af, 0, 0)          = -1 ENOENT (No such file or directory)
21778 mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2af10d7c3000
21778 getrlimit(RLIMIT_STACK, {rlim_cur=32768*1024, rlim_max=RLIM_INFINITY}) = 0
21778 open("/proc/self/maps", O_RDONLY) = 5
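On either version, the segment-creation calls can be pulled out of the strace capture with a simple grep for IPC_CREAT. In the sketch below a few lines from the 11.2.0.3 trace above stand in for the real start.out file:

```shell
# Filter segment-creation shmget calls from an strace capture.
# On a live system this would simply be: grep IPC_CREAT start.out
sample='21293 shmget(IPC_PRIVATE, 33554432, IPC_CREAT|IPC_EXCL|0640) = 8880130
21293 shmget(IPC_PRIVATE, 3741319168, IPC_CREAT|IPC_EXCL|0640) = 8912900
21293 shmget(0xf9058128, 2097152, IPC_CREAT|IPC_EXCL|0640) = 8945669
21293 shmget(0xf9058129, 4096, 0)       = -1 ENOENT (No such file or directory)'
echo "$sample" | grep 'IPC_CREAT'
```

Only the three successful creation calls match; the failed probe shmget calls are filtered out.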
Update 21 December 2011
The latest update is that this issue could be related to the following bug: Bug 13360043 - ORA-27123: UNABLE TO ATTACH TO SHARED MEMORY SEGMENT STARTING 11.2.0.3 DATABASE.
Oracle is still investigating.

Update 28 December 2011
Oracle asked to compare the _db_block_numa value on both 11.2.0.2 and 11.2.0.3. On both databases it was the same value:
select x.ksppinm, y.ksppstvl from x$ksppi x , x$ksppcv y where x.indx = y.indx and x.ksppinm like '\_db_block_numa%' escape '\' order by x.ksppinm;

KSPPINM KSPPSTVL
-----------------
_db_block_numa 1
Then Oracle asked to change _enable_NUMA_optimization to TRUE only on 11.2.0.3 and see if the same behavior persisted. The logic was that with _enable_NUMA_optimization=false and _db_block_numa=1, Oracle would allocate only one shared segment for the SGA (couldn't understand why then change it to true). By default _enable_NUMA_optimization is false, which can be verified with
select x.ksppinm, y.ksppstvl from x$ksppi x , x$ksppcv y where x.indx = y.indx and x.ksppinm like '\_enable_NUMA_optimization%' escape '\' order by x.ksppinm;
Changed it to true as instructed, but no change.

The following two metalink notes were also listed:
11G on HP Creates 6 Shared Memory Segments [Doc ID 601552.1]
Shared Memory Segment Is Split Up In Different Pieces With NUMA Setting [Doc ID 429872.1]

Update 06 January 2012
Oracle has finally reproduced the behavior in-house. This will be logged as a bug. The blog will be updated with the findings.

Update 03 February 2012
Oracle has created a metalink note explaining this behavior: Why Multiple Shared Memory Segments are Created From 11.2.0.3 [ID 1399908.1].
This segment creation behavior change is a side effect of a fix for the unpublished bug 12654172. It is not yet known whether this side effect is localised to 11.2.0.3 and will be "fixed" with a PSU, or in 11.2.0.4 or 12.1.
It seems this is the expected behavior going forward from 11.2.0.3, which means the one-segment model described in 15566.1 won't be there anymore.

Configuring NTP Service on Linux

Servers' clocks fall out of sync with one another after some time. This is problematic if the time gap between servers grows wide enough to affect the execution of business logic. In a RAC configuration it could even lead to node eviction.

The solution is to sync all servers with an NTP server that keeps accurate time. For this to work the servers must be able to connect to the internet to access these NTP servers, but for security reasons this may not always be possible. However, it is possible to have a setup as shown below, where only one server connects to the NTP servers and all the others get their time sync from this "internal" server without having to connect to the internet.


It is assumed ntp is installed on the Linux servers; if not, install it first.

1. Backup the current /etc/ntp.conf file on all servers.
mv /etc/ntp.conf /etc/ntp.conf.orig
2. On the server that is going to act as the internal NTP server, create a new ntp.conf file (make sure it's owned by root with 644 permissions) in /etc and add the following entries. The important parts are the server and restrict lines.
server  ntpserver1.org           
server  ntpserver2.org           

restrict ntpserver1.org  mask 255.255.255.255 nomodify notrap noquery
restrict ntpserver2.org  mask 255.255.255.255 nomodify notrap noquery

restrict 192.168.0.0 mask 255.255.255.0 nomodify notrap

restrict 127.0.0.1
restrict -6 ::1

# Undisciplined Local Clock. This is a fake driver intended for backup
# and when no outside source of synchronized time is available.
server  127.127.1.0     # local clock
fudge   127.127.1.0 stratum 10

driftfile /var/lib/ntp/drift

# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography.
keys /etc/ntp/keys

# Specify the key identifiers which are trusted.
#trustedkey 4 8 42

# Specify the key identifier to use with the ntpdc utility.
#requestkey 8

# Specify the key identifier to use with the ntpq utility.
#controlkey 8
ntpserver1/2.org are the NTP servers being used for synchronization (ntpserver1/2.org used here is just an example).

The restrict lines tell what the NTP servers are allowed to do; in this case the NTP servers are not allowed to modify or query the local server. The mask 255.255.255.255 limits access to the single IP address of each remote NTP server.

The third restrict line is the key to allowing other servers to get the time from this server. This is done by removing noquery from the restrict options and specifying which network segment's computers are allowed to query. In this case all the servers in the local network would be able to get their time sync from this server.

3. Leave the other options as they are.

4. If the server time is lagging far behind the standard time, use the ntpdate command to update it, allowing the ntp service to start with only a small time lag. To do this, first stop the ntp service
/etc/init.d/ntpd stop
and then run
ntpdate -u ntpserver1.org
16 Nov 12:58:49 ntpdate[13104]: adjust time server 129.67.1.160 offset -0.070418 sec
Run the above command 2-3 times.

5. Start the ntp service and monitor the ntp synchronization in progress
/etc/init.d/ntpd start

watch -n 5 ntpq -p

Every 5.0s: ntpq -p             Wed Nov 16 13:08:44 2011

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*ntpserver1.org 192.6.2.82     2 u    7   64  377    3.732  -67.808   0.520
+ntpserver2.org 192.6.2.74     2 u   61   64  377    3.755  -67.672  17.760
 LOCAL(0)        .LOCL.          10 l   53   64  377    0.000    0.000   0.001
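In ntpq -p output the peer prefixed with '*' is the one ntpd is currently synchronised to, and '+' marks acceptable candidates. The current sync peer can be extracted with awk; in this sketch the sample heredoc stands in for a live ntpq -p run:

```shell
# Print the peer ntpd is currently synchronised to (the '*' line).
# On a live system the heredoc would be replaced by: ntpq -p | awk ...
sync_peer=$(awk '/^\*/ { sub(/^\*/, ""); print $1 }' <<'EOF'
*ntpserver1.org 192.6.2.82     2 u    7   64  377    3.732  -67.808   0.520
+ntpserver2.org 192.6.2.74     2 u   61   64  377    3.755  -67.672  17.760
 LOCAL(0)        .LOCL.          10 l   53   64  377    0.000    0.000   0.001
EOF
)
echo "current sync peer: $sync_peer"
```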
6. On the other servers (those getting their time sync from the internal NTP server), e.g. the App and DB servers in the above diagram, create a new /etc/ntp.conf file and add the following entries
server  internal_ntp.server.net

restrict internal_ntp.server.net   mask 255.255.255.255 nomodify notrap noquery

restrict 127.0.0.1
restrict -6 ::1

# Undisciplined Local Clock. This is a fake driver intended for backup
# and when no outside source of synchronized time is available.
server  127.127.1.0     # local clock
fudge   127.127.1.0 stratum 10

driftfile /var/lib/ntp/drift

# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography.
keys /etc/ntp/keys

# Specify the key identifiers which are trusted.
#trustedkey 4 8 42

# Specify the key identifier to use with the ntpdc utility.
#requestkey 8

# Specify the key identifier to use with the ntpq utility.
#controlkey 8
In this configuration the ntp server would be the hostname or IP of the internal NTP server.

7. If this is a RAC server then, before starting the ntp service, add -x to the /etc/sysconfig/ntpd file (metalink note 551704.1)
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
8. Start the ntp service and monitor the synchronization progress
Every 5.0s: ntpq -p Wed Nov 16 13:19:40 2011

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 internal_ntp_IP    163.1.2.160      3 u    1   64    1    0.279    0.074   0.001
 LOCAL(0)        .LOCL.          10 l    -   64    0    0.000    0.000   0.001

Useful metalink notes

NTP Information and Setup [ID 1010136.1]
Ntpd Does not Use Defined NTP Server [ID 1178614.1]
An Example NTP Client Configuration to use with Oracle Clusterware 11gR2 [ID 1104473.1]
Linux OS Service ntpd [ID 551704.1]
How to Set Up a Network Time Protocol (NTP) Client in Solaris [ID 1005887.1]
CTSSD Runs in Observer Mode Even Though No Time Sync Software is Running [ID 1054006.1]
NTP leap second event causing Oracle Clusterware node reboot [ID 759143.1]

Friday, November 11, 2011

ASM for Standalone Server in 11gR2 with Role Separation

One of the major differences in 11gR2 compared to previous versions is the move of ASM configuration into Grid Infrastructure. Therefore, to set up a standalone database with ASM, grid infrastructure is needed.
In this blog the grid user will be used for installing and managing grid infrastructure, and the oracle user for installing and managing the database software. This is not a comprehensive step-by-step guide; some general database installation steps are not shown, the focus being mainly on the steps specific to the topic at hand. The software used here is 11gR2 (11.2.0.3).

1. Create the following groups in the OS
groupadd oinstall
groupadd dba
groupadd oper
groupadd asmadmin
groupadd asmdba
groupadd asmoper
and create the grid user and oracle user as follows
useradd -g oinstall -G asmadmin,asmdba,asmoper grid
useradd -g oinstall -G dba,oper,asmdba oracle
If the oracle user does not have the asmdba group, ASM disk groups created by the grid user won't be visible to the oracle user.
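Group membership is worth checking with id -Gn before going any further. Since the oracle user from the text won't exist on an arbitrary machine, a literal string matching the useradd command above stands in for the real id output in this sketch:

```shell
# Check a user's secondary groups for asmdba membership.
# On a live system: oracle_groups=$(id -Gn oracle)
oracle_groups="oinstall dba oper asmdba"
case " $oracle_groups " in
  *" asmdba "*) echo "oracle is in asmdba: ASM disks will be visible" ;;
  *)            echo "oracle lacks asmdba: ASM disks won't be visible" ;;
esac
```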

2. Create the grid infrastructure location and set its ownership to grid:oinstall
cd /opt/app/11.2.0/
ls -l
drwxr-xr-x 2 grid oinstall 4096 Nov  3 17:41 grid
3. Install and configure the oracleasm libs. Use the grid user as the oracleasm interface user and asmdba as the group
/etc/init.d/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmdba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]
/etc/init.d/oracleasm createdisk DATA /dev/sdb1
Marking disk "DATA" as an ASM disk:                        [  OK  ]
 /etc/init.d/oracleasm createdisk FLASH /dev/sdc1
Marking disk "FLASH" as an ASM disk:                       [  OK  ]
4. Login as the grid user, start the grid infrastructure installation, and select grid infrastructure for a standalone server.

5. Create an ASM diskgroup to store data files. In this step only one diskgroup can be created. Once the grid infrastructure is installed, use ASMCA to create any additional diskgroups needed.

6. Select the OS groups for ASM-related activities. If the OS groups mentioned in step 1 were created, this step comes pre-selected as shown below.





7. When grid infrastructure is installed for RAC it is advised not to install it as a sub-directory of ORACLE_BASE. But in this case trying to install outside the Oracle base gives a warning as shown below.



The grid infrastructure path is selected as below.


8. Installation summary


9. Execute the root scripts when prompted
/opt/app/oracle/product/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /opt/app/oracle/product/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/oracle/product/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node rhel5 successfully pinned.
Adding Clusterware entries to inittab

rhel5     2011/11/03 18:15:01     /opt/app/oracle/product/11.2.0/grid/cdata/rhel5/backup_20111103_181501.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
10. Once the root scripts have finished, the listener and ASM instance will be created.


11. The following commands can be used to stop and start the grid infrastructure related services
crsctl stop has
crsctl start has
12. ASM's spfile resides inside ASM itself. This can be found out in a number of ways, e.g. by looking at ASM's alert log when ASM starts
Machine:        x86_64
Using parameter settings in server-side spfile +DATA/asm/asmparameterfile/registry.253.766260991
System parameters with non-default values:
  large_pool_size          = 12M
  instance_type            = "asm"
  remote_login_passwordfile= "EXCLUSIVE"
  asm_diskgroups           = "FLASH"
  asm_diskgroups           = "DATA"
  asm_power_limit          = 1
  diagnostic_dest          = "/opt/app/oracle"
Or by using asmcmd's spget command, which shows the spfile location registered in the GPnP profile
ASMCMD> spget
+DATA/asm/asmparameterfile/registry.253.766260991
To copy the ASM spfile to a file system location use spcopy (use spmove to move it). Before executing spcopy, stop the diskgroups
srvctl stop diskgroup -g data[flash] 
ASMCMD> spcopy +DATA/asm/asmparameterfile/registry.253.766260991 /opt/app/oracle/product/11.2.0/grid/dbs/spfile+ASM.ora
and update the GPnP profile with the new location using spset
ASMCMD> spset /opt/app/oracle/product/11.2.0/grid/dbs/spfile+ASM.ora
ASMCMD> spget
/opt/app/oracle/product/11.2.0/grid/dbs/spfile+ASM.ora
When ASM next starts, the new location is listed as below
Machine:        x86_64
Using parameter settings in server-side spfile /opt/app/oracle/product/11.2.0/grid/dbs/spfile+ASM.ora
System parameters with non-default values:
  large_pool_size          = 12M
  instance_type            = "asm"
  remote_login_passwordfile= "EXCLUSIVE"
  asm_diskgroups           = "FLASH"
  asm_diskgroups           = "DATA"
  asm_power_limit          = 1
  diagnostic_dest          = "/opt/app/oracle"
This concludes the installation of grid infrastructure and the creation of ASM.


13. Login as Oracle user and start the database software installation.


14. Some locations ($ORACLE_BASE/admin, $ORACLE_BASE/cfgtoollogs) may not have write permission for the oracle user; permissions might be set as 755 grid:oinstall. Change the permissions to 775 to allow the oracle user access to these locations.



15. There is a pre-req check to see whether the grid user is also a member of the dba group. As per metalink note 1084186.1: "In case of stand alone set up, if the 11.2 Database be managed by Oracle Restart ( srvctl stop/start ), then 'grid' user should also be a part of 'dba' group."



16. Complete the installation.



17. Run dbca as the oracle user from the database software home. Creating the database is straightforward. If anything goes wrong, it will most likely be that ASM disks are not shown when ASM is selected for the data file location.


This will be because the oracle user is not in the asmdba group. Once added, the disks will appear for selection. This concludes the creation of a standalone database with ASM in 11gR2.

The database comes with Oracle Restart configured, as well as ASM as a target in the EM Console.


Some useful metalink notes

DBCA Does Not Display ASM Disk Groups In 11.2 [ID 1177483.1] (issue seems to be fixed in 11.2.0.3)
Patch 11gR2 Grid Infrastructure Standalone (Oracle Restart) [ID 1089476.1]
11gR2 (11.2.0.1) Unable To Create ASM Spfile Standalone [ID 1082332.1]
Database Creation on 11.2 Grid Infracture with Role Separation ( ORA-15025, KFSG-00312, ORA-15081 ) [ID 1084186.1]
How to change ownership and group memberships of Grid home in 11.2 Standalone [ID 1171674.1]
Ora-01115, Ora-01110, Ora-15081 When Connecting DB with non-dba Group Users [ID 1372571.1]
ASM 11.2 Configuration KIT (ASM 11gR2 Installation & Configuration, Deinstallation, Upgrade, ASM Job Role Separation (Standalone Only). [ID 1092213.1]
How To Upgrade Oracle Restart i.e. Single Node Grid Infrastructure/ASM from 11.2.0.2 To 11.2.0.3. [ID 1400185.1]
Listener Log Showing * Service_died * Lsnragt * 12537 Died Message Every Minute. [ID 1349844.1]
WARNING: Subscription for node down event still pending' in Listener Log [ID 372959.1]

Related Post
Installing 11gR2 Standalone Server with ASM and Role Separation in RHEL 6