The RAC setup in this case is a two-node RAC, and the node named rhel12c2 will be removed from the cluster. The database is a CDB with a single PDB.
SQL> select instance_number,instance_name,host_name from gv$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME
--------------- ---------------- --------------------
              1 cdb12c1          rhel12c1.domain.net
              2 cdb12c2          rhel12c2.domain.net  # node and instance to be removed

SQL> select con_id,dbid,name from gv$pdbs;

    CON_ID       DBID NAME
---------- ---------- ---------
         2 4066687628 PDB$SEED
         3  476277969 PDB12C
         2 4066687628 PDB$SEED
         3  476277969 PDB12C

1. The first phase is to remove the database instance from the node being deleted. Run DBCA on any node except the one hosting the instance to be removed; in this case DBCA is run from node rhel12c1. Follow the instance management option to remove the instance.
The following message is shown, as the service pdbsvc has been created to connect to the PDB.
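The instance removal could also be scripted instead of using the interactive DBCA session; a minimal sketch using DBCA silent mode, assuming the password placeholder is replaced with the real SYS password:

[oracle@rhel12c1 ~]$ dbca -silent -deleteInstance -nodeList rhel12c2 \
  -gdbName cdb12c -instanceName cdb12c2 \
  -sysDBAUserName sys -sysDBAPassword <sys_password>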
2. At the end of the DBCA run, the database instance has been removed from the node to be deleted.
SQL> select instance_number,instance_name,host_name from gv$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME
--------------- ---------------- -------------------
              1 cdb12c1          rhel12c1.domain.net

SQL> select con_id,dbid,name from gv$pdbs;

    CON_ID       DBID NAME
---------- ---------- ---------
         2 4066687628 PDB$SEED
         3  476277969 PDB12C

[oracle@rhel12c1 ~]$ srvctl config database -d cdb12c
Database unique name: cdb12c
Database name: cdb12c
Oracle home: /opt/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/cdb12c/spfilecdb12c.ora
Password file: +DATA/cdb12c/orapwcdb12c
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: cdb12c
Database instances: cdb12c1 # only shows the remaining instance
Disk Groups: DATA,FLASH
Mount point paths:
Services: pdbsvc
Type: RAC
Start concurrency:
Stop concurrency:
Database is administrator managed

3. Check that the redo log threads of the deleted instance have been removed from the database.
SQL> select inst_id,group#,thread# from gv$log;

   INST_ID     GROUP#    THREAD#
---------- ---------- ----------
         1          1          1
         1          2          1

As this only shows redo log groups for thread 1, no further action is needed. If DBCA has not removed the redo log threads of the deleted instance, disable them manually with alter database disable thread thread#. With this step the first phase is complete.
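Had the deleted instance's thread still appeared in the output above, it could be disabled and its leftovers dropped manually; a sketch, assuming thread 2 owned redo groups 3 and 4 and used an undo tablespace named UNDOTBS2 (the group numbers and tablespace name are illustrative, not taken from this setup):

SQL> alter database disable thread 2;
SQL> alter database drop logfile group 3;
SQL> alter database drop logfile group 4;
SQL> drop tablespace undotbs2 including contents and datafiles;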
4. The second phase is to remove the Oracle database software. In 12c the listener runs out of the grid home by default; however, it is possible to set up a listener to run out of the Oracle home (RAC home) as well. If that is the case, stop and disable any listeners running out of the RAC home. In this configuration there are no listeners running out of the RAC home.
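Had such a listener existed and been registered as a clusterware resource, it could be stopped and disabled with srvctl before the software removal; a sketch, assuming a hypothetical listener named listener_rac running out of the RAC home on rhel12c2:

[oracle@rhel12c1 ~]$ srvctl stop listener -l listener_rac -n rhel12c2
[oracle@rhel12c1 ~]$ srvctl disable listener -l listener_rac -n rhel12c2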
5. On the node to be deleted, update the node list so that it includes only the node being deleted. Before the node update command is run, the inventory.xml lists all the nodes under the Oracle home; after the command is run, it is reduced to just the node to be deleted. The inventory.xml on the other nodes will still list all the nodes in the cluster under the Oracle home.
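The inventory.xml in question lives under the central inventory pointed to by /etc/oraInst.loc; a quick way to view the node list for the home before and after the update (the central inventory location is taken from this setup):

[oracle@rhel12c2 ~]$ grep -A 5 'NAME="OraDB12Home1"' /opt/app/oraInventory/ContentsXML/inventory.xml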
<HOME NAME="OraDB12Home1" LOC="/opt/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="2"> <NODE_LIST> <NODE NAME="rhel12c1"/> <NODE NAME="rhel12c2"/> </NODE_LIST> </HOME> [oracle@rhel12c2 ~]$ /opt/app/oracle/product/12.1.0/dbhome_1/oui/bin/runInstaller.sh -updateNodeList ORACLE_HOME=/opt/app/oracle/product/12.1.0/dbhome_1 "CLUSTER_NODES={rhel12c2}" -local Starting Oracle Universal Installer... <HOME NAME="OraDB12Home1" LOC="/opt/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="2"> <NODE_LIST> <NODE NAME="rhel12c2"/> </NODE_LIST> </HOME>6. Run the deinstall command with local option. Without the -local option this will remove the oracle home of all the nodes!
[oracle@rhel12c2 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /opt/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /opt/app/oracle/product/12.1.0/dbhome_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /opt/app/oracle
Checking for existence of central inventory location /opt/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/app/12.1.0/grid
The following nodes are part of this cluster: rhel12c2,rhel12c1
Checking for sufficient temp space availability on node(s) : 'rhel12c2,rhel12c1'

## [END] Install check configuration ##

Network Configuration check config START
Network de-configuration trace file location: /opt/app/oraInventory/logs/netdc_check2014-02-27_11-08-10-AM.log
Network Configuration check config END

Database Check Configuration START
Database de-configuration trace file location: /opt/app/oraInventory/logs/databasedc_check2014-02-27_11-08-16-AM.log
Use comma as separator when specifying list of values as input

Specify the list of database names that are configured locally on this node for this Oracle home. Local configurations of the discovered databases will be removed []:
Database Check Configuration END

Oracle Configuration Manager check START
OCM check log file location : /opt/app/oraInventory/logs//ocm_check9786.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /opt/app/12.1.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:rhel12c2,rhel12c1
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rhel12c2'.
Oracle Home selected for deinstall is: /opt/app/oracle/product/12.1.0/dbhome_1
Inventory Location where the Oracle home registered is: /opt/app/oraInventory
Checking the config status for CCR
rhel12c2 : Oracle Home exists with CCR directory, but CCR is not configured
rhel12c1 : Oracle Home exists and CCR is configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/opt/app/oraInventory/logs/deinstall_deconfig2014-02-27_11-07-43-AM.out'
Any error messages from this session will be written to: '/opt/app/oraInventory/logs/deinstall_deconfig2014-02-27_11-07-43-AM.err'

######################## CLEAN OPERATION START ########################
Database de-configuration trace file location: /opt/app/oraInventory/logs/databasedc_clean2014-02-27_11-08-52-AM.log

Network Configuration clean config START
Network de-configuration trace file location: /opt/app/oraInventory/logs/netdc_clean2014-02-27_11-08-52-AM.log
Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location : /opt/app/oraInventory/logs//ocm_clean9786.log
Oracle Configuration Manager clean END

Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/opt/app/oracle/product/12.1.0/dbhome_1' from the central inventory on the local node : Done
Delete directory '/opt/app/oracle/product/12.1.0/dbhome_1' on the local node : Done
The Oracle Base directory '/opt/app/oracle' will not be removed on local node. The directory is in use by Oracle Home '/opt/app/12.1.0/grid'.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END

## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2014-02-27_11-06-59AM' on node 'rhel12c2'
Clean install operation removing temporary directory '/tmp/deinstall2014-02-27_11-06-59AM' on node 'rhel12c1'
## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
Cleaning the CCR configuration by executing its binaries
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/opt/app/oracle/product/12.1.0/dbhome_1' from the central inventory on the local node.
Successfully deleted directory '/opt/app/oracle/product/12.1.0/dbhome_1' on the local node.
Oracle Universal Installer cleanup was successful.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############

7. After the deinstall has completed, run the node update command on any remaining node. This updates the node list by removing the deleted node from under the Oracle home. The inventory.xml output before and after the command has executed is shown below. The command shown is for non-shared Oracle homes; for shared homes follow the Oracle documentation.
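For reference, when the Oracle home is shared, the home on the node being deleted is detached from the inventory rather than deinstalled as in step 6; a sketch of that detach using standard OUI syntax, assuming the same home path as this setup:

[oracle@rhel12c2 ~]$ /opt/app/oracle/product/12.1.0/dbhome_1/oui/bin/runInstaller -detachHome ORACLE_HOME=/opt/app/oracle/product/12.1.0/dbhome_1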
<HOME NAME="OraDB12Home1" LOC="/opt/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="2"> <NODE_LIST> <NODE NAME="rhel12c1"/> <NODE NAME="rhel12c2"/> </NODE_LIST> </HOME> [oracle@rhel12c1 ~]$ /opt/app/oracle/product/12.1.0/dbhome_1/oui/bin/runInstaller.sh -updateNodeList ORACLE_HOME=/opt/app/oracle/product/12.1.0/dbhome_1 "CLUSTER_NODES={rhel12c1}" LOCAL_NODE=rhel12c1 Starting Oracle Universal Installer... Checking swap space: must be greater than 500 MB. Actual 5119 MB Passed The inventory pointer is located at /etc/oraInst.loc 'UpdateNodeList' was successful. <HOME NAME="OraDB12Home1" LOC="/opt/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="2"> <NODE_LIST> <NODE NAME="rhel12c1"/> </NODE_LIST> </HOME>This conclude the second phase. Final phase is to remove the clusterware.
8. Check that the node to be deleted is active and unpinned. If the node is pinned, unpin it with the crsctl unpin command. The following can be run as either the grid user or root.
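If either node had been reported as Pinned, it could have been unpinned as root before continuing:

[root@rhel12c1 ~]# crsctl unpin css -n rhel12c2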
[grid@rhel12c2 ~]$ olsnodes -t -s
rhel12c1        Active  Unpinned
rhel12c2        Active  Unpinned

9. On the node to be deleted run the node update command to update the node list for the grid home such that it will include only the node being deleted. The inventory.xml output before and after the command has been executed is shown below.
<HOME NAME="OraGI12Home1" LOC="/opt/app/12.1.0/grid" TYPE="O" IDX="1" CRS="true"> <NODE_LIST> <NODE NAME="rhel12c1"/> <NODE NAME="rhel12c2"/> </NODE_LIST> </HOME> [grid@rhel12c2 ~]$ /opt/app/12.1.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/app/12.1.0/grid "CLUSTER_NODES={rhel12c2}" CRS=TRUE -silent -local Starting Oracle Universal Installer... Checking swap space: must be greater than 500 MB. Actual 5119 MB Passed The inventory pointer is located at /etc/oraInst.loc 'UpdateNodeList' was successful. <HOME NAME="OraGI12Home1" LOC="/opt/app/12.1.0/grid" TYPE="O" IDX="1" CRS="true"> <NODE_LIST> <NODE NAME="rhel12c2"/> </NODE_LIST> </HOME>10. If the GI home is non shared run the deinstall with -local option. If -local option is omitted this will remove GI from all nodes.
[grid@rhel12c2 ~]$ /opt/app/12.1.0/grid/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2014-02-27_04-47-10PM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /opt/app/12.1.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /opt/app/oracle
Checking for existence of central inventory location /opt/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/app/12.1.0/grid
The following nodes are part of this cluster: rhel12c2,rhel12c1
Checking for sufficient temp space availability on node(s) : 'rhel12c2,rhel12c1'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2014-02-27_04-47-10PM/logs//crsdc_2014-02-27_04-48-27PM.log

Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall2014-02-27_04-47-10PM/logs/netdc_check2014-02-27_04-48-29-PM.log
Network Configuration check config END

Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall2014-02-27_04-47-10PM/logs/asmcadc_check2014-02-27_04-48-29-PM.log

Database Check Configuration START
Database de-configuration trace file location: /tmp/deinstall2014-02-27_04-47-10PM/logs/databasedc_check2014-02-27_04-48-29-PM.log
Database Check Configuration END

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /opt/app/12.1.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:rhel12c2,rhel12c1
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rhel12c2'.
Oracle Home selected for deinstall is: /opt/app/12.1.0/grid
Inventory Location where the Oracle home registered is: /opt/app/oraInventory
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2014-02-27_04-47-10PM/logs/deinstall_deconfig2014-02-27_04-48-01-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2014-02-27_04-47-10PM/logs/deinstall_deconfig2014-02-27_04-48-01-PM.err'

######################## CLEAN OPERATION START ########################
Database de-configuration trace file location: /tmp/deinstall2014-02-27_04-47-10PM/logs/databasedc_clean2014-02-27_04-48-33-PM.log
ASM de-configuration trace file location: /tmp/deinstall2014-02-27_04-47-10PM/logs/asmcadc_clean2014-02-27_04-48-33-PM.log
ASM Clean Configuration END

Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2014-02-27_04-47-10PM/logs/netdc_clean2014-02-27_04-48-34-PM.log
Network Configuration clean config END

---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "rhel12c2".

/tmp/deinstall2014-02-27_04-47-10PM/perl/bin/perl -I/tmp/deinstall2014-02-27_04-47-10PM/perl/lib -I/tmp/deinstall2014-02-27_04-47-10PM/crs/install /tmp/deinstall2014-02-27_04-47-10PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2014-02-27_04-47-10PM/response/deinstall_OraGI12Home1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------

[root@rhel12c2 ~]# /tmp/deinstall2014-02-27_04-47-10PM/perl/bin/perl -I/tmp/deinstall2014-02-27_04-47-10PM/perl/lib -I/tmp/deinstall2014-02-27_04-47-10PM/crs/install /tmp/deinstall2014-02-27_04-47-10PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2014-02-27_04-47-10PM/response/deinstall_OraGI12Home1.rsp"
Using configuration parameter file: /tmp/deinstall2014-02-27_04-47-10PM/response/deinstall_OraGI12Home1.rsp
Network 1 exists
Subnet IPv4: 192.168.0.0/255.255.255.0/eth0, static
Subnet IPv6:
VIP exists: network number 1, hosting node rhel12c1
VIP Name: rhel12c1-vip
VIP IPv4 Address: 192.168.0.89
VIP IPv6 Address:
VIP exists: network number 1, hosting node rhel12c2
VIP Name: rhel12c2-vip
VIP IPv4 Address: 192.168.0.90
VIP IPv6 Address:
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rhel12c2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.FLASH.VOLUME1.advm' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.CLUSTERDG.dg' on 'rhel12c2'
CRS-2677: Stop of 'ora.FLASH.VOLUME1.advm' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'rhel12c2'
CRS-2677: Stop of 'ora.FLASH.dg' on 'rhel12c2' succeeded
CRS-2677: Stop of 'ora.CLUSTERDG.dg' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rhel12c2'
CRS-2677: Stop of 'ora.DATA.dg' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rhel12c2'
CRS-2677: Stop of 'ora.asm' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rhel12c2'
CRS-2677: Stop of 'ora.net1.network' on 'rhel12c2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rhel12c2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.storage' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rhel12c2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rhel12c2' succeeded
CRS-2677: Stop of 'ora.storage' on 'rhel12c2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rhel12c2' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rhel12c2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.asm' on 'rhel12c2'
CRS-2677: Stop of 'ora.ctssd' on 'rhel12c2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rhel12c2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rhel12c2'
CRS-2677: Stop of 'ora.cssd' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rhel12c2'
CRS-2677: Stop of 'ora.gipcd' on 'rhel12c2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel12c2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2014/02/27 16:53:43 CLSRSC-336: Successfully deconfigured Oracle clusterware stack on this node

Failed to delete the directory '/opt/app/oracle/product/12.1.0'. The directory is in use.
Failed to delete the directory '/opt/app/oracle/diag/rdbms/cdb12c/cdb12c2/log/test'. The directory is in use.

Removal of some of the directories failed, but this had no impact on the removal of the node from the cluster. These directories can be cleaned up manually afterwards.
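Once nothing is holding these directories open (for example after closing any shells whose working directory is inside them), the leftovers can be removed manually as root on the deleted node; the paths are taken from the failures above:

[root@rhel12c2 ~]# rm -rf /opt/app/oracle/product/12.1.0
[root@rhel12c2 ~]# rm -rf /opt/app/oracle/diag/rdbms/cdb12c/cdb12c2/log/test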
11. From any remaining node, run the following command with the remaining nodes as the node list. The inventory.xml output before and after the command is run is given below.
<HOME NAME="OraGI12Home1" LOC="/opt/app/12.1.0/grid" TYPE="O" IDX="1" CRS="true"> <NODE_LIST> <NODE NAME="rhel12c1"/> <NODE NAME="rhel12c2"/> </NODE_LIST> </HOME> [grid@rhel12c1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/opt/app/12.1.0/grid "CLUSTER_NODES={rhel12c1}" CRS=TRUE -silent Starting Oracle Universal Installer... Checking swap space: must be greater than 500 MB. Actual 5119 MB Passed The inventory pointer is located at /etc/oraInst.loc 'UpdateNodeList' was successful. <HOME NAME="OraGI12Home1" LOC="/opt/app/12.1.0/grid" TYPE="O" IDX="1" CRS="true"> <NODE_LIST> <NODE NAME="rhel12c1"/> </NODE_LIST> </HOME>12. From the node remaining in the cluster run the node deletion command as root
[root@rhel12c1 bin]# crsctl delete node -n rhel12c2
CRS-4661: Node rhel12c2 successfully deleted.

13. Finally, use the cluster verification utility to check that the node deletion has completed successfully.
[grid@rhel12c1 bin]$ cluvfy stage -post nodedel -n rhel12c2

Performing post-checks for node removal

Checking CRS integrity...
CRS integrity check passed
Clusterware version consistency passed.
Node removal check passed

Post-check for node removal was successful.

This concludes the deletion of a node from 12cR1 RAC.
Related Posts
Deleting a Node From 11gR2 RAC
Deleting a 11gR1 RAC Node