Friday, February 11, 2011

Deleting a Node From 11gR2 RAC

The order of steps for deleting a node on 11gR2 RAC is the same as in 11gR1: first remove the database instance, then the Oracle home, and finally the grid infrastructure home.
The grid infrastructure and Oracle home version used here is 11.2.0.2; this is a patch set release, but one that can also be used for a fresh installation, and the cluster here was freshly installed from this patch set. The RAC database has two instances and is admin-managed. The two nodes are called rac4 (DB instance rac11g21) and rac5 (DB instance rac11g22), and rac5 will be removed.

1. Backup the OCR using ocrconfig -manualbackup.
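The backup is taken as root from the grid home, and can be verified with ocrconfig -showbackup; a minimal example, using the $GI_HOME variable as elsewhere in this post:
# $GI_HOME/bin/ocrconfig -manualbackup
# $GI_HOME/bin/ocrconfig -showbackup manual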

2. Run DBCA from a node that will remain in the cluster, select instance management and then delete instance. In the subsequent steps select the instance on the node that will be removed from the cluster and proceed to the finish step. The instance being removed must be up and running.
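As an alternative to the GUI, the same deletion can be done with DBCA in silent mode. A sketch using this cluster's names (the sys password is a placeholder):
dbca -silent -deleteInstance -nodeList rac5 -gdbName rac11g2 -instanceName rac11g22 -sysDBAUserName sys -sysDBAPassword password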

3. Check whether the redo log thread for the deleted instance has been removed by querying v$log. If not, remove it with
alter database disable thread thread#
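For example, assuming the deleted instance's redo thread was thread 2 (verify the actual thread number from the query before disabling anything):
select distinct thread# from v$log;
alter database disable thread 2;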
4. Check if the instance is removed from the cluster
srvctl config database -d rac11g2
Database unique name: rac11g2
Database name: rac11g2
Oracle home: /opt/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA/rac11g2/spfilerac11g2.ora
Domain: domain.net
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: rac11g2
Database instances: rac11g21
Disk Groups: DATA,FLASH
Mount point paths:
Services:
Type: RAC
Database is administrator managed
As seen from above, only one instance is present. This concludes the first phase of the node removal. Next is to remove the Oracle home.

5. Check if any listeners are running from the Oracle home to be deleted. In 11.2 the default listener runs from the grid home, but if any listeners were explicitly created in the Oracle home they must be disabled and stopped.

Check which home the listener is running from:
srvctl config listener -a
Name: LISTENER
Network: 1, Owner: oracle
Home: 
/opt/app/11.2.0/grid on node(s) rac5,rac4
End points: TCP:1521
Since it's running from the grid home, this step can be skipped here. If it was running from the Oracle home, use the following commands to disable and stop it:
$ srvctl disable listener -l listener_name -n name_of_node_to_delete
$ srvctl stop listener -l listener_name -n name_of_node_to_delete
6. Run the following on the node to be deleted to update the inventory.
cd $ORACLE_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac5}" -local -silent
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4094 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /opt/app/oraInventory
'UpdateNodeList' was successful.
After executing the above command on the node to be deleted, its inventory.xml will show only that node under the Oracle home name:
<HOME NAME="OraDb11g_home1" LOC="/opt/app/oracle/product/11.2.0/db_1" TYPE="O" IDX="2">
<NODE_LIST>
<NODE NAME="rac5"/>
</NODE_LIST>
</HOME>
On the other nodes, inventory.xml will still show all the nodes in the cluster:
<HOME NAME="OraDb11g_home1" LOC="/opt/app/oracle/product/11.2.0/db_1" TYPE="O" IDX="2">
<NODE_LIST>
<NODE NAME="rac4"/>
<NODE NAME="rac5"/>
</NODE_LIST>
</HOME>
7. For a shared home, detach the Oracle home from the inventory using
./runInstaller -detachHome ORACLE_HOME=Oracle_home_location
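For example, with the Oracle home path used in this cluster, the detach would be:
cd $ORACLE_HOME/oui/bin
./runInstaller -detachHome ORACLE_HOME=/opt/app/oracle/product/11.2.0/db_1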
For a non-shared home, run deinstall from the Oracle home on the node being removed with the -local option.
If -local is not specified, this will apply to the entire cluster.
cd $ORACLE_HOME/deinstall
./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /opt/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
Install check configuration START


Checking for existence of the Oracle home location /opt/app/oracle/product/11.2.0/db_1
Oracle Home type selected for de-install is: RACDB
Oracle Base selected for de-install is: /opt/app/oracle
Checking for existence of central inventory location /opt/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/app/11.2.0/grid
The following nodes are part of this cluster: rac5

Install check configuration END
Skipping Windows and .NET products configuration check
Checking Windows and .NET products configuration END

Network Configuration check config START
Network de-configuration trace file location: /opt/app/oraInventory/logs/netdc_check2011-02-10_01-11-55-PM.log
Network Configuration check config END

Database Check Configuration START
Database de-configuration trace file location: /opt/app/oraInventory/logs/databasedc_check2011-02-10_01-11-59-PM.log
Database Check Configuration END

Enterprise Manager Configuration Assistant START
EMCA de-configuration trace file location: /opt/app/oraInventory/logs/emcadc_check2011-02-10_01-12-03-PM.log

Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /opt/app/oraInventory/logs//ocm_check7566.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /opt/app/11.2.0/grid
The cluster node(s) on which the Oracle home de-installation will be performed are:rac5
Since -local option has been specified, the Oracle home will be de-installed only on the local node, 'rac5', and the global configuration will be removed.
Oracle Home selected for de-install is: /opt/app/oracle/product/11.2.0/db_1
Inventory Location where the Oracle home registered is: /opt/app/oraInventory
Skipping Windows and .NET products configuration check
The option -local will not modify any database configuration for this Oracle home.

No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/opt/app/oraInventory/logs/deinstall_deconfig2011-02-10_01-11-51-PM.out'
Any error messages from this session will be written to: '/opt/app/oraInventory/logs/deinstall_deconfig2011-02-10_01-11-51-PM.err'

######################## CLEAN OPERATION START ########################

Enterprise Manager Configuration Assistant START
EMCA de-configuration trace file location: /opt/app/oraInventory/logs/emcadc_clean2011-02-10_01-12-03-PM.log

Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /opt/app/oraInventory/logs/databasedc_clean2011-02-10_01-12-10-PM.log

Network Configuration clean config START
Network de-configuration trace file location: /opt/app/oraInventory/logs/netdc_clean2011-02-10_01-12-10-PM.log
De-configuring Local Net Service Names configuration file...
Local Net Service Names configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location : /opt/app/oraInventory/logs//ocm_clean7566.log
Oracle Configuration Manager clean END
Removing Windows and .NET products configuration END
Oracle Universal Installer clean START

Detach Oracle home '/opt/app/oracle/product/11.2.0/db_1' from the central inventory on the local node : Done


Failed to delete the directory '/opt/app/oracle/product/11.2.0/db_1'. The directory is in use.
Delete directory '/opt/app/oracle/product/11.2.0/db_1' on the local node : Failed <<<<
The Oracle Base directory '/opt/app/oracle' will not be removed on local node. The directory is in use by Oracle Home '/opt/app/11.2.0/grid'.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
Oracle install clean START
Clean install operation removing temporary directory '/tmp/deinstall2011-02-10_01-11-28PM' on node 'rac5'
Oracle install clean END


######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Skipping Windows and .NET products configuration clean
Successfully detached Oracle home '/opt/app/oracle/product/11.2.0/db_1' from the central inventory on the local node.
Failed to delete directory '/opt/app/oracle/product/11.2.0/db_1' on the local node.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############
At the end of this, the Oracle home will have been removed from inventory.xml on the local node.

8. On the remaining nodes run the following to update the inventory with the remaining nodes of the cluster.
cd $ORACLE_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac4}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4094 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /opt/app/oraInventory
'UpdateNodeList' was successful.
Before running this, the inventory on the remaining nodes would still list the deleted node under the Oracle home:
<HOME NAME="OraDb11g_home1" LOC="/opt/app/oracle/product/11.2.0/db_1" TYPE="O" IDX="2">
<NODE_LIST>
<NODE NAME="rac4"/>
<NODE NAME="rac5"/>
</NODE_LIST>
</HOME>
After running it, the entry will list only the remaining nodes:
<HOME NAME="OraDb11g_home1" LOC="/opt/app/oracle/product/11.2.0/db_1" TYPE="O" IDX="2">
<NODE_LIST>
<NODE NAME="rac4"/>
</NODE_LIST>
</HOME>
This concludes the second phase of removing the node. The final phase is to remove the grid home.


9. If the node to be removed is pinned, unpin it with crsctl unpin css -n nodename. Use olsnodes -s -t to find out whether nodes are pinned or unpinned:
olsnodes -s -t
rac4    Active  Unpinned
rac5    Active  Unpinned
Since they are already unpinned here, there is no need to run the unpin command.
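Had rac5 been pinned, it would be unpinned as root with:
# crsctl unpin css -n rac5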

10. Stop the EM agent and run the deconfig script rootcrs.pl -deconfig -force from $GI_HOME/crs/install as the root user on the node being removed. If the node being deleted is the last node in the cluster, include the -lastnode option.
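If Database Control (or an EM agent) is configured on the node, stop it first; assuming Database Control is in use:
$ emctl stop dbconsole
Then run the deconfig script as root: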
# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.0.0/255.255.255.0/eth0, type static
VIP exists: /rac4-vip/192.168.0.90/192.168.0.0/255.255.255.0/eth0, hosting node rac4
VIP exists: /rac5-vip/192.168.0.89/192.168.0.0/255.255.255.0/eth0, hosting node rac5
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
ACFS-9200: Supported
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac5'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac5' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac5'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac5'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac5'
CRS-2673: Attempting to stop 'ora.CLUSTERDG.dg' on 'rac5'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac5'
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'rac5'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac5'
CRS-2677: Stop of 'ora.FLASH.dg' on 'rac5' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac5' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rac5' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'rac4'
CRS-2676: Start of 'ora.oc4j' on 'rac4' succeeded
CRS-2677: Stop of 'ora.CLUSTERDG.dg' on 'rac5' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac5'
CRS-2677: Stop of 'ora.asm' on 'rac5' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac5' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac5' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac5'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac5'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac5'
CRS-2673: Attempting to stop 'ora.asm' on 'rac5'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac5'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac5'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac5' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac5' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac5'
CRS-2677: Stop of 'ora.crf' on 'rac5' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac5' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac5' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac5' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac5' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac5'
CRS-2677: Stop of 'ora.cssd' on 'rac5' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac5'
CRS-2673: Attempting to stop 'ora.diskmon' on 'rac5'
CRS-2677: Stop of 'ora.diskmon' on 'rac5' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac5' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac5'
CRS-2677: Stop of 'ora.gpnpd' on 'rac5' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac5' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
11. If the VIP is still active even after running the above command, stop and remove it manually. Using the -force option in the above command should prevent this from happening.
# srvctl stop vip -i vip_name -f
# srvctl remove vip -i vip_name -f
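The VIP name can be looked up with srvctl config vip. Using this cluster's VIP (rac5-vip, as seen in the rootcrs.pl output above), the commands would be, for example:
$ srvctl config vip -n rac5
# srvctl stop vip -i rac5-vip -f
# srvctl remove vip -i rac5-vip -f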
12. From a node that's not being deleted, run the following command as root, specifying the node being deleted:
# crsctl delete node -n rac5
CRS-4661: Node rac5 successfully deleted.
13. On the node being deleted, update the node list. This step is missing from the Oracle documentation (Oracle Clusterware Administration and Deployment Guide 11g Release 2 (11.2) E10717-11 April 2010).
cd $GI_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=$GI_HOME "CLUSTER_NODES={rac5}" -silent -local CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4094 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /opt/app/oraInventory
'UpdateNodeList' was successful.
inventory.xml content on the node to be deleted, before running the update node list command:
<HOME NAME="Ora11g_gridinfrahome1" LOC="/opt/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
<NODE_LIST>
<NODE NAME="rac4"/>
<NODE NAME="rac5"/>
</NODE_LIST>
</HOME>
After updating the node list:
<HOME NAME="Ora11g_gridinfrahome1" LOC="/opt/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
<NODE_LIST>
<NODE NAME="rac5"/>
</NODE_LIST>
</HOME>
14. If the grid home is shared, detach it from the inventory (see the example below). If it's a non-shared grid home, deinstall with
$GI_HOME/deinstall/deinstall -local
Without -local, this will apply to the entire cluster.
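For a shared grid home the detach would look like this (a sketch using this cluster's grid home path):
cd $GI_HOME/oui/bin
./runInstaller -detachHome ORACLE_HOME=/opt/app/11.2.0/grid
The non-shared deinstall on rac5 is shown below.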
./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2011-02-10_01-29-40PM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
Install check configuration START

Checking for existence of the Oracle home location /opt/app/11.2.0/grid
Oracle Home type selected for de-install is: CRS
Oracle Base selected for de-install is: /opt/app/oracle
Checking for existence of central inventory location /opt/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/app/11.2.0/grid
The following nodes are part of this cluster: rac5

Install check configuration END

Skipping Windows and .NET products configuration check
Checking Windows and .NET products configuration END

Traces log file: /tmp/deinstall2011-02-10_01-29-40PM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "rac5"[rac5-vip]
>

The following information can be collected by running "/sbin/ifconfig -a" on node "rac5"
Enter the IP netmask of Virtual IP "192.168.0.89" on node "rac5"[255.255.255.0]
>

Enter the network interface name on which the virtual IP address "192.168.0.89" is active
>

Enter an address or the name of the virtual IP[]
>

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2011-02-10_01-29-40PM/logs/netdc_check2011-02-10_01-30-26-PM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]:LISTENER

At least one listener from the discovered listener list [LISTENER,LISTENER_SCAN1] is missing in the specified listener list [LISTENER]. The Oracle home will be cleaned up, so all the listeners will not be available after deinstall. If you want to remove a specific listener, please use Oracle Net Configuration Assistant instead. Do you want to continue? (y|n) [n]: y

Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall2011-02-10_01-29-40PM/logs/asmcadc_check2011-02-10_01-31-29-PM.log

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /opt/app/11.2.0/grid
The cluster node(s) on which the Oracle home de-installation will be performed are:rac5
Since -local option has been specified, the Oracle home will be de-installed only on the local node, 'rac5', and the global configuration will be removed.
Oracle Home selected for de-install is: /opt/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /opt/app/oraInventory
Skipping Windows and .NET products configuration check
Following RAC listener(s) will be de-configured: LISTENER
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2011-02-10_01-29-40PM/logs/deinstall_deconfig2011-02-10_01-29-56-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2011-02-10_01-29-40PM/logs/deinstall_deconfig2011-02-10_01-29-56-PM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2011-02-10_01-29-40PM/logs/asmcadc_clean2011-02-10_01-31-34-PM.log
ASM Clean Configuration END

Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2011-02-10_01-29-40PM/logs/netdc_clean2011-02-10_01-31-34-PM.log
De-configuring RAC listener(s): LISTENER
De-configuring listener: LISTENER
Stopping listener on node "rac5": LISTENER
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END

---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "rac5".

/tmp/deinstall2011-02-10_01-29-40PM/perl/bin/perl -I/tmp/deinstall2011-02-10_01-29-40PM/perl/lib -I/tmp/deinstall2011-02-10_01-29-40PM/crs/install /tmp/deinstall2011-02-10_01-29-40PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2011-02-10_01-29-40PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------
Run the above command as the root user in a different shell:
/tmp/deinstall2011-02-10_01-29-40PM/perl/bin/perl -I/tmp/deinstall2011-02-10_01-29-40PM/perl/lib -I/tmp/deinstall2011-02-10_01-29-40PM/crs/install /tmp/deinstall2011-02-10_01-29-40PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2011-02-10_01-29-40PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2011-02-10_01-29-40PM/response/deinstall_Ora11g_gridinfrahome1.rsp
Usage: srvctl <command> <object> [<options>]
commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config
objects: database|service|asm|diskgroup|listener|home|ons
For detailed help on each command and object and its options use:
srvctl <command> -h or
srvctl <command> <object> -h
PRKO-2012 : nodeapps object is not supported in Oracle Restart
ACFS-9200: Supported
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
You must kill crs processes or reboot the system to properly
cleanup the processes started by Oracle clusterware
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command 1 /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
Once completed, press enter in the first shell session.
Remove the directory: /tmp/deinstall2011-02-10_01-29-40PM on node:
Removing Windows and .NET products configuration END
Oracle Universal Installer clean START

Detach Oracle home '/opt/app/11.2.0/grid' from the central inventory on the local node : Done
Delete directory '/opt/app/11.2.0/grid' on the local node : Done
Delete directory '/opt/app/oraInventory' on the local node : Done
The Oracle Base directory '/opt/app/oracle' will not be removed on local node. The directory is not empty.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
Oracle install clean START
Clean install operation removing temporary directory '/tmp/deinstall2011-02-10_01-29-40PM' on node 'rac5'
Oracle install clean END

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER
Oracle Clusterware is stopped and successfully de-configured on node "rac5"
Oracle Clusterware is stopped and de-configured successfully.
Skipping Windows and .NET products configuration clean
Successfully detached Oracle home '/opt/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/opt/app/11.2.0/grid' on the local node.
Successfully deleted directory '/opt/app/oraInventory' on the local node.
Oracle Universal Installer cleanup was successful.

Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac5' at the end of the session.
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac5 ' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############
15. Run the following on all remaining nodes to update the node list in the inventory:
cd $GI_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=$GI_HOME "CLUSTER_NODES={rac4}" CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3744 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /opt/app/oraInventory
'UpdateNodeList' was successful.
16. Use cluvfy to check that the node was removed successfully.
cluvfy stage -post  nodedel -n rac5

Performing post-checks for node removal
Checking CRS integrity...
CRS integrity check passed
Node removal check passed
Post-check for node removal was successful.
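As a further check, olsnodes run from a remaining node should now list only the surviving nodes; on this cluster the expected output (not captured here) would be:
olsnodes -s -t
rac4    Active  Unpinned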
This concludes the removal of the node from the RAC. However, if an OCR dump is taken, it will still contain information about the deleted node (even though the node is no longer part of the cluster). Below are a few bits and pieces from an OCRDUMPFILE.
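The dump is generated as root with the ocrdump utility (the output file name here is arbitrary):
# ocrdump /tmp/ocr.dmp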
[SYSTEM.version.hostnames.rac5]
ORATEXT : 11.2.0.2.0
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : root, GROUP_NAME : root}

[SYSTEM.crs.e2eport.rac5]
ORATEXT : (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.87)(PORT=32387))
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_NONE, OTHER_PERMISSION : PROCR_NONE, USER_NAME : oracle, GROUP_NAME : oinstall}

[DATABASE.ASM.rac5.+asm2.VERSION]
ORATEXT : 11.2.0.2.0
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : oracle, GROUP_NAME : oinstall}
This doesn't mean the node wasn't removed properly. It is possible to add the node again (with the same hostname etc.) as shown in adding a node.

Related Post
Deleting a 11gR1 RAC Node