Changing Hub Node to a Leaf Node
Node rhel12c1 will be changed from a hub node to a leaf node. The current role can be listed with:
[root@rhel12c1 oracle]# crsctl get node role config
Node 'rhel12c1' configured role is 'hub'

To change the role of the node, run the following command as root. It must be run on the node that is undergoing the role change.
[root@rhel12c1 grid]# crsctl set node role leaf
CRS-4408: Node 'rhel12c1' configured role successfully changed; restart Oracle High Availability Services for new role to take effect.

Restart the clusterware stack on the node once the role change command has been executed.
# crsctl stop crs
# crsctl start crs -wait

Verify that the node started as a leaf node.
[root@rhel12c1 grid]# crsctl get node role config
Node 'rhel12c1' configured role is 'leaf'
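The configured role takes effect only after the stack restart. To confirm the role the node is actively running with, crsctl can also report the status form of the role; a quick check, assuming the standard 12c syntax:

[grid@rhel12c1 ~]$ crsctl get node role status
[grid@rhel12c1 ~]$ crsctl get node role status -all

The first command reports the active role of the local node; the -all option reports it for every node in the cluster.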
The cluster now consists of one hub node and one leaf node.

[root@rhel12c2 grid]# crsctl get node role config -all
Node 'rhel12c1' configured role is 'leaf'
Node 'rhel12c2' configured role is 'hub'

The database instance that was running while rhel12c1 was a hub node is no longer active and is in a shutdown state.
ora.rac12c1.db
      1        ONLINE  OFFLINE                               Instance Shutdown,STABLE
      2        ONLINE  ONLINE       rhel12c2                 Open,STABLE
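The instance state can also be confirmed with srvctl; a minimal check, assuming the resource ora.rac12c1.db corresponds to a database named rac12c1:

[oracle@rhel12c2 ~]$ srvctl status database -db rac12c1
[grid@rhel12c2 ~]$ crsctl stat res ora.rac12c1.db -t

Instance 1, which was hosted on rhel12c1, should be reported as not running, while instance 2 remains open on rhel12c2.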
There are no resources running on the leaf node.

[root@rhel12c1 grid]# crsctl stat res -t -c rhel12c1

Finally, update the inventory as follows. Run the updateNodeList command on the hub nodes, listing all remaining hub nodes in the CLUSTER_NODES option. In this case rhel12c2 is the only hub node.
[grid@rhel12c2 ~]$ $GI_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/app/12.1.0/grid2 "CLUSTER_NODES={rhel12c2}" -silent -local CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4506 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

On the leaf node, run updateNodeList with only the leaf node in the CLUSTER_NODES option.
[grid@rhel12c1 ~]$ $GI_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/app/12.1.0/grid2 "CLUSTER_NODES={rhel12c1}" -silent -local CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5112 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

This concludes converting the hub node to a leaf node.
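To double-check what the central inventory now records for the Grid home, the node list in inventory.xml can be inspected; a quick sketch, where <inventory_loc> stands for the inventory_loc value found in /etc/oraInst.loc:

[grid@rhel12c1 ~]$ cat /etc/oraInst.loc
[grid@rhel12c1 ~]$ grep -i -A 2 "grid2" <inventory_loc>/ContentsXML/inventory.xml

The HOME entry for the Grid home should now list only the local node on the leaf node, and only the remaining hub nodes on the hub nodes.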
Changing Leaf Node to a Hub Node
The leaf node created in the previous step will now be changed back to a hub node. The current node role is:
[grid@rhel12c1 ~]$ crsctl get node role config
Node 'rhel12c1' configured role is 'leaf'

As root, change the node role to hub with:
[root@rhel12c1 grid]# crsctl set node role hub
CRS-4408: Node 'rhel12c1' configured role successfully changed; restart Oracle High Availability Services for new role to take effect.

Stop the cluster stack on the node undergoing the role change.
# crsctl stop crs

Configure the Oracle ASM Filter Driver by running the following as root.
[root@rhel12c1 grid]# /opt/app/12.1.0/grid2/bin/asmcmd afd_configure
Connected to an idle instance.
AFD-627: AFD distribution files found.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.

If the node was originally created as a leaf node, it may also require a VIP to be configured before changing it to a hub node; a sketch of that is shown below. Because rhel12c1 started off as a hub node, the VIP already exists and this step is not needed here.
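In case a VIP did need to be added, a minimal sketch is shown below; the network number, address and netmask are placeholder example values, not values taken from this cluster:

# run as root from the Grid home on the node being converted (example values)
[root@rhel12c1 grid]# /opt/app/12.1.0/grid2/bin/srvctl add vip -node rhel12c1 -netnum 1 -address 192.168.1.150/255.255.255.0

Once the cluster stack is up, the VIP could then be started with srvctl start vip -node rhel12c1.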
Once the ASM filter driver is configured, start the cluster stack on the node and verify the node role.

[root@rhel12c1 grid]# crsctl start crs -wait

[grid@rhel12c1 ~]$ crsctl get node role config
Node 'rhel12c1' configured role is 'hub'

Resources that were down while the node was in the leaf role are up and running again.
[root@rhel12c1 grid]# crsctl stat res -t -c rhel12c1
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rhel12c1                 STABLE
ora.DATA.dg
               ONLINE  ONLINE       rhel12c1                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       rhel12c1                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rhel12c1                 STABLE
ora.NEWCLUSTERDG.dg
               ONLINE  ONLINE       rhel12c1                 STABLE
ora.net1.network
               ONLINE  ONLINE       rhel12c1                 STABLE
ora.ons
               ONLINE  ONLINE       rhel12c1                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rhel12c1                 STABLE
ora.asm
      2        ONLINE  ONLINE       rhel12c1                 Started,STABLE
ora.rac12c1.db
      1        ONLINE  ONLINE       rhel12c1                 Open,STABLE
ora.rhel12c1.vip
      1        ONLINE  ONLINE       rhel12c1                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rhel12c1                 STABLE
--------------------------------------------------------------------------------

Run the inventory update specifying all hub nodes in the CLUSTER_NODES option.
$GI_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/app/12.1.0/grid2 "CLUSTER_NODES={rhel12c1,rhel12c2}" -silent -local CRS=TRUE

This concludes changing the leaf node back to a hub node.
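As a final check, the node roles across the cluster can be verified; a quick sketch, assuming the 12c crsctl and olsnodes utilities (olsnodes -a reports the active role of each node):

[grid@rhel12c1 ~]$ crsctl get node role config -all
[grid@rhel12c1 ~]$ olsnodes -a

Both nodes should now report the hub role.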