Friday, May 17, 2019

Adding a Node to 18c RAC

This post lists the steps for adding a node to an 18c RAC, which are similar to those for adding a node to a 12cR1 RAC. Node addition is done in three phases: phase one adds the clusterware to the new node, phase two adds the database software, and the final phase extends the database to the new node by creating a new instance on it.
The current cluster setup consists of a single node named rhel71, and a new node named rhel72 will be added to the cluster.
1. It is assumed that the physical connections (shared storage, network) to the new node are already in place. The cluvfy tool can be used to check that the prerequisites for node addition are met. In this case the failures relating to memory, swap, free space and the /dev/shm mount point can be ignored as this is a test setup.
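The current cluster membership can be confirmed before the addition by running olsnodes as the grid user from the existing node, for example:
olsnodes -s -t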
cluvfy stage -pre nodeadd -n rhel72

Verifying Physical Memory ...FAILED (PRVF-7530)
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...FAILED (PRVF-7573)
Verifying Free Space: rhel72:/usr,rhel72:/var,rhel72:/etc,rhel72:/opt/app/18.x.0/grid,rhel72:/sbin,rhel72:/tmp ...PASSED
Verifying Free Space: rhel71:/usr,rhel71:/var,rhel71:/etc,rhel71:/opt/app/18.x.0/grid,rhel71:/sbin,rhel71:/tmp ...FAILED (PRVF-7501)
Verifying User Existence: oracle ...
  Verifying Users With Same UID: 501 ...PASSED
Verifying User Existence: oracle ...PASSED
Verifying User Existence: grid ...
  Verifying Users With Same UID: 502 ...PASSED
Verifying User Existence: grid ...PASSED
Verifying User Existence: root ...
  Verifying Users With Same UID: 0 ...PASSED
Verifying User Existence: root ...PASSED
Verifying Group Existence: asmadmin ...PASSED
Verifying Group Existence: asmoper ...PASSED
Verifying Group Existence: asmdba ...PASSED
Verifying Group Existence: oinstall ...PASSED
Verifying Group Membership: oinstall ...PASSED
Verifying Group Membership: asmdba ...PASSED
Verifying Group Membership: asmadmin ...PASSED
Verifying Group Membership: asmoper ...PASSED
Verifying Run Level ...PASSED
Verifying Hard Limit: maximum open file descriptors ...PASSED
Verifying Soft Limit: maximum open file descriptors ...PASSED
Verifying Hard Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum stack size ...PASSED
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: ip_local_port_range ...PASSED
Verifying OS Kernel Parameter: rmem_default ...PASSED
Verifying OS Kernel Parameter: rmem_max ...PASSED
Verifying OS Kernel Parameter: wmem_default ...PASSED
Verifying OS Kernel Parameter: wmem_max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying Package: binutils-2.23.52.0.1 ...PASSED
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
Verifying Package: sysstat-10.1.5 ...PASSED
Verifying Package: gcc-c++-4.8.2 ...PASSED
Verifying Package: ksh ...PASSED
Verifying Package: make-3.82 ...PASSED
Verifying Package: glibc-2.17 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-6.2-4 ...PASSED
Verifying Package: net-tools-2.0-0.17 ...PASSED
Verifying Package: compat-libstdc++-33-3.2.3 (x86_64) ...PASSED
Verifying Package: libxcb-1.11 (x86_64) ...PASSED
Verifying Package: libX11-1.6.3 (x86_64) ...PASSED
Verifying Package: libXau-1.0.8 (x86_64) ...PASSED
Verifying Package: libXi-1.7.4 (x86_64) ...PASSED
Verifying Package: libXtst-1.2.2 (x86_64) ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...PASSED
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying Node Addition ...
  Verifying CRS Integrity ...PASSED
  Verifying Clusterware Version Consistency ...PASSED
  Verifying '/opt/app/18.x.0/grid' ...PASSED
Verifying Node Addition ...PASSED
Verifying Host name ...PASSED
Verifying Node Connectivity ...
  Verifying Hosts File ...PASSED
  Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
  Verifying subnet mask consistency for subnet "192.168.1.0" ...PASSED
  Verifying subnet mask consistency for subnet "192.168.0.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast or broadcast check ...PASSED
Verifying ASM Integrity ...
  Verifying Node Connectivity ...
    Verifying Hosts File ...PASSED
    Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
    Verifying subnet mask consistency for subnet "192.168.1.0" ...PASSED
    Verifying subnet mask consistency for subnet "192.168.0.0" ...PASSED
  Verifying Node Connectivity ...PASSED
Verifying ASM Integrity ...PASSED
Verifying Device Checks for ASM ...
  Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
  Verifying ASM device sharedness check ...
    Verifying Shared Storage Accessibility:/dev/oracleasm/data1,/dev/oracleasm/gimr,/dev/oracleasm/fra1,/dev/oracleasm/ocr1,/dev/oracleasm/ocr2,/dev/oracleasm/ocr3 ...PASSED
  Verifying ASM device sharedness check ...PASSED
  Verifying Access Control List check ...PASSED
Verifying Device Checks for ASM ...PASSED
Verifying Database home availability ...PASSED
Verifying OCR Integrity ...PASSED
Verifying Time zone consistency ...PASSED
Verifying Network Time Protocol (NTP) ...
  Verifying '/etc/ntp.conf' ...PASSED
  Verifying '/etc/chrony.conf' ...PASSED
  Verifying '/var/run/ntpd.pid' ...PASSED
  Verifying '/var/run/chronyd.pid' ...PASSED
Verifying Network Time Protocol (NTP) ...PASSED
Verifying User Not In Group "root": grid ...PASSED
Verifying resolv.conf Integrity ...PASSED
Verifying DNS/NIS name service ...PASSED
Verifying User Equivalence ...PASSED
Verifying /dev/shm mounted as temporary file system ...FAILED (PRVE-0421)
Verifying /boot mount ...PASSED
Verifying zeroconf check ...PASSED

Pre-check for node addition was unsuccessful on all the nodes.


Failures were encountered during execution of CVU verification request "stage -pre nodeadd".

Verifying Physical Memory ...FAILED
rhel72: PRVF-7530 : Sufficient physical memory is not available on node
        "rhel72" [Required physical memory = 8GB (8388608.0KB)]

rhel71: PRVF-7530 : Sufficient physical memory is not available on node
        "rhel71" [Required physical memory = 8GB (8388608.0KB)]

Verifying Swap Size ...FAILED
rhel72: PRVF-7573 : Sufficient swap size is not available on node "rhel72"
        [Required = 4.686GB (4913676.0KB) ; Found = 3.7246GB (3905532.0KB)]

rhel71: PRVF-7573 : Sufficient swap size is not available on node "rhel71"
        [Required = 4.686GB (4913676.0KB) ; Found = 3.7246GB (3905532.0KB)]

Verifying Free Space: rhel71:/usr,rhel71:/var,rhel71:/etc,rhel71:/opt/app/18.x.0/grid,rhel71:/sbin,rhel71:/tmp ...FAILED
PRVF-7501 : Sufficient space is not available at location
"/opt/app/18.x.0/grid" on node "rhel71" [Required space = 6.9GB ; available
space = 6.0439GB ]
PRVF-7501 : Sufficient space is not available at location "/" on node "rhel71"
[Required space = [25MB (/usr) + 5MB (/var) + 25MB (/etc) + 6.9GB
(/opt/app/18.x.0/grid) + 10MB (/sbin) + 1GB (/tmp) = 7.9635GB ]; available
space = 6.0439GB ]

Verifying /dev/shm mounted as temporary file system ...FAILED
rhel72: PRVE-0421 : No entry exists in /etc/fstab for mounting /dev/shm


CVU operation performed:      stage -pre nodeadd
Date:                         04-Apr-2019 16:15:51
CVU home:                     /opt/app/18.x.0/grid/
User:                         grid

2. To extend the cluster by installing the clusterware on the new node, run gridSetup.sh from an existing node and select the add more nodes to the cluster option.
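For reference, a minimal sketch of launching the installer for the node addition from the existing node rhel71 is shown below; the interactive wizard option described above is assumed, and the paths are those of this setup.
# Run as the grid user on the existing node rhel71.
export ORACLE_HOME=/opt/app/18.x.0/grid
cd $ORACLE_HOME
./gridSetup.sh
# In the wizard, choose the add more nodes option and specify rhel72 and its virtual hostname.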

3. Execute the root script when prompted.
# /opt/app/18.x.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /opt/app/18.x.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /opt/app/18.x.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/app/oracle/crsdata/rhel72/crsconfig/rootcrs_rhel72_2019-04-08_11-08-58AM.log
2019/04/08 11:09:11 CLSRSC-594: Executing installation step 1 of 20: 'SetupTFA'.
2019/04/08 11:09:11 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2019/04/08 11:09:57 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2019/04/08 11:09:57 CLSRSC-594: Executing installation step 2 of 20: 'ValidateEnv'.
2019/04/08 11:10:10 CLSRSC-363: User ignored prerequisites during installation
2019/04/08 11:10:10 CLSRSC-594: Executing installation step 3 of 20: 'CheckFirstNode'.
2019/04/08 11:10:11 CLSRSC-594: Executing installation step 4 of 20: 'GenSiteGUIDs'.
2019/04/08 11:10:17 CLSRSC-594: Executing installation step 5 of 20: 'SaveParamFile'.
2019/04/08 11:10:23 CLSRSC-594: Executing installation step 6 of 20: 'SetupOSD'.
2019/04/08 11:10:23 CLSRSC-594: Executing installation step 7 of 20: 'CheckCRSConfig'.
2019/04/08 11:10:32 CLSRSC-594: Executing installation step 8 of 20: 'SetupLocalGPNP'.
2019/04/08 11:10:35 CLSRSC-594: Executing installation step 9 of 20: 'CreateRootCert'.
2019/04/08 11:10:35 CLSRSC-594: Executing installation step 10 of 20: 'ConfigOLR'.
2019/04/08 11:10:45 CLSRSC-594: Executing installation step 11 of 20: 'ConfigCHMOS'.
2019/04/08 11:10:46 CLSRSC-594: Executing installation step 12 of 20: 'CreateOHASD'.
2019/04/08 11:10:48 CLSRSC-594: Executing installation step 13 of 20: 'ConfigOHASD'.
2019/04/08 11:10:49 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2019/04/08 11:12:25 CLSRSC-594: Executing installation step 14 of 20: 'InstallAFD'.
2019/04/08 11:12:29 CLSRSC-594: Executing installation step 15 of 20: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel72'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel72' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2019/04/08 11:13:52 CLSRSC-594: Executing installation step 16 of 20: 'InstallKA'.
2019/04/08 11:13:55 CLSRSC-594: Executing installation step 17 of 20: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel72'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel72' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel72'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rhel72'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rhel72' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel72' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2019/04/08 11:14:10 CLSRSC-594: Executing installation step 18 of 20: 'StartCluster'.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rhel72'
CRS-2672: Attempting to start 'ora.evmd' on 'rhel72'
CRS-2676: Start of 'ora.mdnsd' on 'rhel72' succeeded
CRS-2676: Start of 'ora.evmd' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rhel72'
CRS-2676: Start of 'ora.gpnpd' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rhel72'
CRS-2676: Start of 'ora.gipcd' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rhel72'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rhel72'
CRS-2676: Start of 'ora.cssdmonitor' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rhel72'
CRS-2672: Attempting to start 'ora.diskmon' on 'rhel72'
CRS-2676: Start of 'ora.diskmon' on 'rhel72' succeeded
CRS-2676: Start of 'ora.crf' on 'rhel72' succeeded
CRS-2676: Start of 'ora.cssd' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rhel72'
CRS-2672: Attempting to start 'ora.ctssd' on 'rhel72'
CRS-2676: Start of 'ora.ctssd' on 'rhel72' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rhel72'
CRS-2676: Start of 'ora.asm' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rhel72'
CRS-2676: Start of 'ora.storage' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rhel72'
CRS-2676: Start of 'ora.crsd' on 'rhel72' succeeded
CRS-6017: Processing resource auto-start for servers: rhel72
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rhel71'
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rhel72'
CRS-2672: Attempting to start 'ora.ons' on 'rhel72'
CRS-2672: Attempting to start 'ora.chad' on 'rhel72'
CRS-2672: Attempting to start 'ora.qosmserver' on 'rhel72'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rhel71' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rhel71'
CRS-2677: Stop of 'ora.scan1.vip' on 'rhel71' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rhel72'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rhel72'
CRS-2676: Start of 'ora.ons' on 'rhel72' succeeded
CRS-2676: Start of 'ora.chad' on 'rhel72' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rhel72'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rhel72' succeeded
CRS-2676: Start of 'ora.qosmserver' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.chad' on 'rhel71'
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rhel71'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rhel71' succeeded
CRS-2676: Start of 'ora.asm' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rhel72'
CRS-2676: Start of 'ora.DATA.dg' on 'rhel72' succeeded
CRS-6016: Resource auto-start has completed for server rhel72
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2019/04/08 11:16:43 CLSRSC-343: Successfully started Oracle Clusterware stack
2019/04/08 11:16:43 CLSRSC-594: Executing installation step 19 of 20: 'ConfigNode'.
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2019/04/08 11:17:43 CLSRSC-594: Executing installation step 20 of 20: 'PostConfig'.
2019/04/08 11:17:57 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded



4. Use cluvfy to verify the node addition.
cluvfy stage -post nodeadd -n rhel72

Verifying Node Connectivity ...
  Verifying Hosts File ...PASSED
  Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
  Verifying subnet mask consistency for subnet "192.168.1.0" ...PASSED
  Verifying subnet mask consistency for subnet "192.168.0.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Cluster Integrity ...PASSED
Verifying Node Addition ...
  Verifying CRS Integrity ...PASSED
  Verifying Clusterware Version Consistency ...PASSED
  Verifying '/opt/app/18.x.0/grid' ...PASSED
Verifying Node Addition ...PASSED
Verifying Multicast or broadcast check ...PASSED
Verifying Node Application Existence ...PASSED
Verifying Single Client Access Name (SCAN) ...
  Verifying DNS/NIS name service 'rac-scan.domain.net' ...
    Verifying Name Service Switch Configuration File Integrity ...PASSED
  Verifying DNS/NIS name service 'rac-scan.domain.net' ...PASSED
Verifying Single Client Access Name (SCAN) ...PASSED
Verifying User Not In Group "root": grid ...PASSED
Verifying Clock Synchronization ...PASSED

Post-check for node addition was successful.

CVU operation performed:      stage -post nodeadd
Date:                         08-Apr-2019 11:21:17
CVU home:                     /opt/app/18.x.0/grid/
User:                         grid
5. The next step is to extend the database software to the new node. To add the database software, run addnode.sh from the $ORACLE_HOME/addnode directory as the oracle user on an existing node.
cd $ORACLE_HOME/addnode
./addnode.sh "CLUSTER_NEW_NODES={rhel72}"
When the OUI starts, the new node is selected by default. When prompted, run root.sh on the new node to complete the database software installation.
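For reference, the root script for the database home is run on the new node as root; the path below is the database home used in this setup.
# On rhel72, as root:
/opt/app/oracle/product/18.x.0/dbhome_1/root.sh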

6. Before extending the database by adding a new instance, change the permissions on the admin directory on the new node.
cd $ORACLE_BASE
chmod 775 admin
7. Run DBCA and select the RAC database instance management and add an instance options.
Select the database to be extended to the new node.
Specify the new instance's details.
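As an alternative to the DBCA wizard, the instance can be added in silent mode. The sketch below is an assumption based on the generic dbca -addInstance syntax; flag names can vary by release, so verify them with dbca -addInstance -help. The database, instance and node names are those of this setup.
# Run as the oracle user from an existing node (verify flags with: dbca -addInstance -help).
dbca -silent -addInstance -nodeName rhel72 -gdbName ent18c -instanceName ent18c2 -sysDBAUserName sys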
8. Verify the database has been extended to include the new instance.
srvctl config database -d ent18c
Database unique name: ent18c
Database name: ent18c
Oracle home: /opt/app/oracle/product/18.x.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/ENT18C/PARAMETERFILE/spfile.272.1003495623
Password file: +DATA/ENT18C/PASSWORD/pwdent18c.256.1003490291
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA,FRA
Mount point paths:
Services: pdbsrv
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: oper
Database instances: ent18c1,ent18c2
Configured nodes: rhel71,rhel72
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed

SQL> select inst_id,instance_name,host_name from gv$instance;

   INST_ID INSTANCE_NAME    HOST_NAME
---------- ---------------- ----------------------
         1 ent18c1          rhel71.domain.net
         2 ent18c2          rhel72.domain.net
   
SQL>  select inst_id,con_id,name from gv$pdbs order by 1;

   INST_ID     CON_ID NAME
---------- ---------- --------------
         1          2 PDB$SEED
         1          3 PDB18C
         2          2 PDB$SEED
         2          3 PDB18C
9. Modify the service to include the new database instance.
srvctl modify service -db ent18c -pdb pdb18c -s pdbsrv -modifyconfig -preferred "ent18c1,ent18c2"
srvctl config service -db ent18c -service pdbsrv
Service name: pdbsrv
Server pool:
Cardinality: 2
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Failover type:
Failover method:
Failover retries:
Failover delay:
Failover restore: NONE
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Pluggable database name: pdb18c
Hub service:
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Replay Initiation Time: 300 seconds
Drain timeout:
Stop option:
Session State Consistency: DYNAMIC
GSM Flags: 0
Service is enabled
Preferred instances: ent18c1,ent18c2
Available instances:
CSS critical: no
This concludes the addition of a new node to 18c RAC.

Related Posts
Adding a Node to 12cR1 RAC
Adding a Node to 11gR2 RAC
Adding a Node to 11gR1 RAC

Friday, May 3, 2019

Upgrading Oracle Restart from 18c (18.6) to 19c (19.3)

This post lists the steps for upgrading an Oracle Restart environment (single instance non-CDB on ASM) from 18.6 to 19.3. The 18c setup is on RHEL 7. The OS and kernel versions are as follows.
cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.6 (Maipo)

uname -r
3.10.0-957.el7.x86_64
The current GI release and software versions are as follows.
crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
crsctl query has softwareversion
Oracle High Availability Services version on the local node is [18.0.0.0.0]
The Oracle Restart environment used here is not a role-separated setup; both the Oracle and GI homes are installed as the oracle user.
Download and run orachk -u -o pre (MOS note 1268927.2) on the 18c installation to identify any patches required before the upgrade. With the 18.6 RU applied, no other patches were required prior to the upgrade.
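For example, from the directory where the orachk download (per MOS note 1268927.2) was unzipped:
./orachk -u -o pre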
The 19c grid software installation is similar to 18c and is based on an image file. Before unzipping the GI image file, create the 19c GI home directory and then unzip the image file into it.
mkdir -p /opt/app/oracle/product/19.x.0/grid
unzip LINUX.X64_193000_grid_home.zip -d /opt/app/oracle/product/19.x.0/grid
Use cluvfy with the hacfg stage to verify the prerequisites. The swap size failure shown below can be ignored on a test system.
cd /opt/app/oracle/product/19.x.0/grid/

./runcluvfy.sh stage -pre hacfg

Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...FAILED (PRVF-7573)
Verifying Free Space: ip-172-31-2-77:/usr,ip-172-31-2-77:/var,ip-172-31-2-77:/etc,ip-172-31-2-77:/sbin,ip-172-31-2-77:/tmp ...PASSED
Verifying User Existence: oracle ...
  Verifying Users With Same UID: 501 ...PASSED
Verifying User Existence: oracle ...PASSED
Verifying Group Existence: dba ...PASSED
Verifying Group Existence: oinstall ...PASSED
Verifying Group Membership: oinstall(Primary) ...PASSED
Verifying Group Membership: dba ...PASSED
Verifying Run Level ...PASSED
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: ip_local_port_range ...PASSED
Verifying OS Kernel Parameter: rmem_default ...PASSED
Verifying OS Kernel Parameter: rmem_max ...PASSED
Verifying OS Kernel Parameter: wmem_default ...PASSED
Verifying OS Kernel Parameter: wmem_max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying Package: kmod-20-21 (x86_64) ...PASSED
Verifying Package: kmod-libs-20-21 (x86_64) ...PASSED
Verifying Package: binutils-2.23.52.0.1 ...PASSED
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
Verifying Package: sysstat-10.1.5 ...PASSED
Verifying Package: gcc-c++-4.8.2 ...PASSED
Verifying Package: ksh ...PASSED
Verifying Package: make-3.82 ...PASSED
Verifying Package: glibc-2.17 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-6.2-4 ...PASSED
Verifying Package: net-tools-2.0-0.17 ...PASSED
Verifying Package: compat-libstdc++-33-3.2.3 (x86_64) ...PASSED
Verifying Package: libxcb-1.11 (x86_64) ...PASSED
Verifying Package: libX11-1.6.3 (x86_64) ...PASSED
Verifying Package: libXau-1.0.8 (x86_64) ...PASSED
Verifying Package: libXi-1.7.4 (x86_64) ...PASSED
Verifying Package: libXtst-1.2.2 (x86_64) ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...PASSED

Pre-check for Oracle Restart configuration was unsuccessful.


Failures were encountered during execution of CVU verification request "stage -pre hacfg".

Verifying Swap Size ...FAILED
ip-172-31-2-77: PRVF-7573 : Sufficient swap size is not available on node
                "ip-172-31-2-77" [Required = 16GB (1.6777216E7KB) ; Found = 0.0
                bytes]
ASM will be upgraded as part of the upgrade process; therefore, stop the database before starting the GI upgrade.
srvctl stop database -d gold
Start the GI upgrade by running gridSetup.sh from the 19c grid home.
cd /opt/app/oracle/product/19.x.0/grid
./gridSetup.sh
Select upgrade GI option.
Location of the GI home cannot be changed.
Upgrade summary page.
Run root upgrade script when prompted.
The following shows the output from running the rootupgrade.sh script.
# /opt/app/oracle/product/19.x.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/app/oracle/product/19.x.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/oracle/product/19.x.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/app/oracle/crsdata/ip-172-31-2-77/crsconfig/roothas_2019-05-02_11-06-31AM.log
2019/05/02 11:06:34 CLSRSC-595: Executing upgrade step 1 of 12: 'UpgPrechecks'.
2019/05/02 11:06:37 CLSRSC-363: User ignored prerequisites during installation
2019/05/02 11:06:37 CLSRSC-595: Executing upgrade step 2 of 12: 'GetOldConfig'.
2019/05/02 11:06:41 CLSRSC-595: Executing upgrade step 3 of 12: 'GenSiteGUIDs'.
2019/05/02 11:06:41 CLSRSC-595: Executing upgrade step 4 of 12: 'SetupOSD'.
2019/05/02 11:06:41 CLSRSC-595: Executing upgrade step 5 of 12: 'PreUpgrade'.

ASM has been upgraded and started successfully.

2019/05/02 11:07:21 CLSRSC-595: Executing upgrade step 6 of 12: 'UpgradeAFD'.
2019/05/02 11:07:22 CLSRSC-595: Executing upgrade step 7 of 12: 'UpgradeOLR'.
clscfg: EXISTING configuration version 0 detected.
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
2019/05/02 11:07:26 CLSRSC-595: Executing upgrade step 8 of 12: 'UpgradeOCR'.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node ip-172-31-2-77 successfully pinned.
2019/05/02 11:07:29 CLSRSC-595: Executing upgrade step 9 of 12: 'CreateOHASD'.
2019/05/02 11:07:30 CLSRSC-595: Executing upgrade step 10 of 12: 'ConfigOHASD'.
2019/05/02 11:07:30 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2019/05/02 11:07:50 CLSRSC-595: Executing upgrade step 11 of 12: 'UpgradeSIHA'.

ip-172-31-2-77     2019/05/02 07:08:28     /opt/app/oracle/crsdata/ip-172-31-2-77/olr/backup_20190502_070828.olr     724960844

ip-172-31-2-77     2019/05/02 05:23:49     /opt/app/oracle/product/18.x.0/grid/cdata/ip-172-31-2-77/backup_20190502_052349.olr     2532936542
2019/05/02 11:08:28 CLSRSC-595: Executing upgrade step 12 of 12: 'InstallACFS'.
2019/05/02 11:10:08 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
Click OK on the execute configuration scripts prompt to proceed with the rest of the upgrade steps. The following shows the end of upgrade page.
The HAS software is now upgraded to 19c.
crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [19.0.0.0.0]

crsctl query has softwareversion
Oracle High Availability Services version on the local node is [19.0.0.0.0]
If the ASM spfile was moved to $GI_HOME/dbs (described in step 12 of an earlier post), then it needs to be moved to the 19c GI_HOME.
ASMCMD> spget
/opt/app/oracle/product/18.x.0/grid/dbs/spfile+ASM.ora
ASMCMD> spcopy /opt/app/oracle/product/18.x.0/grid/dbs/spfile+ASM.ora  /opt/app/oracle/product/19.x.0/grid/dbs/spfile+ASM.ora
ASMCMD> spset /opt/app/oracle/product/19.x.0/grid/dbs/spfile+ASM.ora
ASMCMD> spget
/opt/app/oracle/product/19.x.0/grid/dbs/spfile+ASM.ora
Use the cluvfy post check to validate the Oracle Restart configuration as follows.
cluvfy stage -post hacfg

Verifying Oracle Restart Integrity ...PASSED
Verifying OLR Integrity ...PASSED

Post-check for Oracle Restart configuration was successful.

CVU operation performed:      stage -post hacfg
Date:                         May 2, 2019 11:43:30 AM
CVU home:                     /opt/app/oracle/product/19.x.0/grid/
User:                         oracle


The next step is to upgrade the database software. Similar to the GI home installation, this too is based on an image file. Create the 19c Oracle home location and unzip the DB software image file into it.
mkdir -p /opt/app/oracle/product/19.x.0/dbhome_1
unzip LINUX.X64_193000_db_home.zip -d /opt/app/oracle/product/19.x.0/dbhome_1
To begin the installation, execute runInstaller from the Oracle home location.
cd /opt/app/oracle/product/19.x.0/dbhome_1
./runInstaller
Select the software only setup for the installation option.
Select single instance database installation as this is an Oracle Restart setup.
Select the appropriate edition based on the licensing.
What is interesting in the above is that SE2 no longer mentions RAC usage, whereas in 18c it did. With 19c, RAC is no longer an available option under SE2.
The Oracle home location is fixed to the location where the DB software was unzipped.
There are no new OS groups introduced in 19c, so all the OS groups used in 18c are used here as well.
DB software installation summary.
When prompted, run the root script. Unlike 18c, there's no prompt for installing TFA; a standalone TFA is set up as part of 19c.
# /opt/app/oracle/product/19.x.0/dbhome_1/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/app/oracle/product/19.x.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Oracle Trace File Analyzer (TFA - Standalone Mode) is available at :
    /opt/app/oracle/product/19.x.0/dbhome_1/bin/tfactl

Note :
1. tfactl will use TFA Service if that service is running and user has been granted access
2. tfactl will configure TFA Standalone Mode only if user has no access to TFA Service or TFA is not installed


After the database software is installed, the next step is to upgrade the database. 19c has the AutoUpgrade option (also backported to 12.2 and 18c), which automates the upgrade process. However, in this post the DB will be upgraded using DBUA. Run preupgrade.jar, available in the 19c home (19c_home/rdbms/admin), to check the upgrade readiness of the database (for more details refer to MOS note 2421552.1).
 $ORACLE_HOME/jdk/bin/java -jar preupgrade.jar TERMINAL
Report generated by Oracle Database Pre-Upgrade Information Tool Version
19.0.0.0.0 Build: 1 on 2019-05-02T12:16:42

Upgrade-To version: 19.0.0.0.0

=======================================
Status of the database prior to upgrade
=======================================
      Database Name:  GOLD
     Container Name:  gold
       Container ID:  0
            Version:  18.0.0.0.0
     DB Patch Level:  Database Release Update : 18.6.0.0.190416 (29301631)
         Compatible:  18.0.0
          Blocksize:  8192
           Platform:  Linux x86 64-bit
      Timezone File:  31
  Database log mode:  NOARCHIVELOG
           Readonly:  FALSE
            Edition:  EE

  Oracle Component                       Upgrade Action    Current Status
  ----------------                       --------------    --------------
  Oracle Server                          [to be upgraded]  VALID
  Real Application Clusters              [to be upgraded]  OPTION OFF
  Oracle Workspace Manager               [to be upgraded]  VALID
  Oracle Text                            [to be upgraded]  VALID
  Oracle XML Database                    [to be upgraded]  VALID

==============
BEFORE UPGRADE
==============

  REQUIRED ACTIONS
  ================
  None

  RECOMMENDED ACTIONS
  ===================
  1.  (AUTOFIXUP) Gather stale data dictionary statistics prior to database
      upgrade in off-peak time using:

        EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

      Dictionary statistics do not exist or are stale (not up-to-date).

      Dictionary statistics help the Oracle optimizer find efficient SQL
      execution plans and are essential for proper upgrade timing. Oracle
      recommends gathering dictionary statistics in the last 24 hours before
      database upgrade.

      For information on managing optimizer statistics, refer to the 18.0.0.0
      Oracle Database Upgrade Guide.

  2.  (AUTOFIXUP) Gather statistics on fixed objects prior the upgrade.

      None of the fixed object tables have had stats collected.

      Gathering statistics on fixed objects, if none have been gathered yet, is
      recommended prior to upgrading.

      For information on managing optimizer statistics, refer to the 18.0.0.0
      Oracle Database Upgrade Guide.

  INFORMATION ONLY
  ================
  3.  To help you keep track of your tablespace allocations, the following
      AUTOEXTEND tablespaces are expected to successfully EXTEND during the
      upgrade process.

                                                 Min Size
      Tablespace                        Size     For Upgrade
      ----------                     ----------  -----------
      TEMP                                20 MB       150 MB
      UNDOTBS1                           365 MB       412 MB

      Minimum tablespace sizes for upgrade are estimates.

  4.  Check the Oracle Backup and Recovery User's Guide for information on how
      to manage an RMAN recovery catalog schema.

      If you are using a version of the recovery catalog schema that is older
      than that required by the RMAN client version, then you must upgrade the
      catalog schema.

      It is good practice to have the catalog schema the same or higher version
      than the RMAN client version you are using.

  ORACLE GENERATED FIXUP SCRIPT
  =============================
  All of the issues in database GOLD
  which are identified above as BEFORE UPGRADE "(AUTOFIXUP)" can be resolved by
  executing the following

    SQL>@/opt/app/oracle/cfgtoollogs/gold/preupgrade/preupgrade_fixups.sql

=============
AFTER UPGRADE
=============

  REQUIRED ACTIONS
  ================
  None

  RECOMMENDED ACTIONS
  ===================
  5.  Upgrade the database time zone file using the DBMS_DST package.

      The database is using time zone file version 31 and the target 19 release
      ships with time zone file version 32.

      Oracle recommends upgrading to the desired (latest) version of the time
      zone file.  For more information, refer to "Upgrading the Time Zone File
      and Timestamp with Time Zone Data" in the 19 Oracle Database
      Globalization Support Guide.

  6.  (AUTOFIXUP) Gather dictionary statistics after the upgrade using the
      command:

        EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

      Oracle recommends gathering dictionary statistics after upgrade.

      Dictionary statistics provide essential information to the Oracle
      optimizer to help it find efficient SQL execution plans. After a database
      upgrade, statistics need to be re-gathered as there can now be tables
      that have significantly changed during the upgrade or new tables that do
      not have statistics gathered yet.

  7.  Gather statistics on fixed objects after the upgrade and when there is a
      representative workload on the system using the command:

        EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

      This recommendation is given for all preupgrade runs.

      Fixed object statistics provide essential information to the Oracle
      optimizer to help it find efficient SQL execution plans.  Those
      statistics are specific to the Oracle Database release that generates
      them, and can be stale upon database upgrade.

      For information on managing optimizer statistics, refer to the 18.0.0.0
      Oracle Database Upgrade Guide.

  ORACLE GENERATED FIXUP SCRIPT
  =============================
  All of the issues in database GOLD
  which are identified above as AFTER UPGRADE "(AUTOFIXUP)" can be resolved by
  executing the following

    SQL>@/opt/app/oracle/cfgtoollogs/gold/preupgrade/postupgrade_fixups.sql


==================
PREUPGRADE SUMMARY
==================
  /opt/app/oracle/cfgtoollogs/gold/preupgrade/preupgrade.log
  /opt/app/oracle/cfgtoollogs/gold/preupgrade/preupgrade_fixups.sql
  /opt/app/oracle/cfgtoollogs/gold/preupgrade/postupgrade_fixups.sql

Execute fixup scripts as indicated below:

Before upgrade:

Log into the database and execute the preupgrade fixups
@/opt/app/oracle/cfgtoollogs/gold/preupgrade/preupgrade_fixups.sql

After the upgrade:

Log into the database and execute the postupgrade fixups
@/opt/app/oracle/cfgtoollogs/gold/preupgrade/postupgrade_fixups.sql

Preupgrade complete: 2019-05-02T12:16:42
Run the pre-upgrade fixup script and then launch DBUA from the 19c home. Select the database to upgrade.
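A minimal sketch of these two steps is shown below; the fixup script path is taken from the preupgrade output above, and DBUA is launched from the 19c home installed earlier.
sqlplus / as sysdba @/opt/app/oracle/cfgtoollogs/gold/preupgrade/preupgrade_fixups.sql
/opt/app/oracle/product/19.x.0/dbhome_1/bin/dbua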
Check and resolve any validation concerns.
Parallel upgrade will speed up the upgrade process. This step also allows specifying whether the time zone file is upgraded at the same time as the database.
The upgrade summary page.
Upgrade progress.
Upgrade result page.
Once the upgrade has finished, execute the post-upgrade fixup script (mentioned in the output when preupgrade.jar was run). The DB component versions and status after the upgrade are as follows.
SQL> select comp_name,status,version,version_full from cdb_registry order by 1,2;

COMP_NAME                                STATUS          VERSION         VERSION_FULL
---------------------------------------- --------------- --------------- ---------------
Oracle Database Catalog Views            VALID           19.0.0.0.0      19.3.0.0.0
Oracle Database Packages and Types       VALID           19.0.0.0.0      19.3.0.0.0
Oracle Real Application Clusters         OPTION OFF      19.0.0.0.0      19.3.0.0.0
Oracle Text                              VALID           19.0.0.0.0      19.3.0.0.0
Oracle Workspace Manager                 VALID           19.0.0.0.0      19.3.0.0.0
Oracle XML Database                      VALID           19.0.0.0.0      19.3.0.0.0

6 rows selected.
The time zone file has been upgraded to version 32.
SQL> select * from v$timezone_file;

FILENAME                     VERSION     CON_ID
-------------------- --------------- ----------
timezlrg_32.dat                   32          0
If satisfied with the upgrade and application testing, change the compatible parameter on the DB and the compatibility attributes on the ASM disk groups.
SQL> alter system set compatible='19.0.0' scope=spfile;
shutdown immediate;
Then connect to the ASM instance as sysasm to change the ASM disk group compatibility attributes.
SQL> alter diskgroup FRA SET attribute 'compatible.asm'='19.0.0.0.0';
SQL> alter diskgroup DATA  SET attribute 'compatible.asm'='19.0.0.0.0';
SQL> alter diskgroup fra set attribute 'compatible.rdbms'='19.0.0.0.0';
SQL> alter diskgroup data set attribute 'compatible.rdbms'='19.0.0.0.0';

SQL> select g.name,a.name,a.value from v$asm_diskgroup g, v$asm_attribute a where g.group_number=a.group_number and a.name like '%compat%';

NAME                 NAME                 VALUE
-------------------- -------------------- --------------------
DATA                 compatible.asm       19.0.0.0.0
DATA                 compatible.rdbms     19.0.0.0.0
FRA                  compatible.asm       19.0.0.0.0
FRA                  compatible.rdbms     19.0.0.0.0
After the ASM compatibility attributes are updated, start the database.
Finally, run orachk -u -o post to check the post-upgrade state of the Oracle Restart setup.
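A minimal sketch of these final steps, using the database name and orachk invocation used earlier in this post:
srvctl start database -d gold
./orachk -u -o post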

Related Posts
Upgrading Oracle Restart from 12.2.0.1 to 18c
Upgrading Oracle Single Instance with ASM (Oracle Restart) from 11.2.0.4 to 12.2.0.1
Upgrading Oracle Single Instance with ASM (Oracle Restart) from 12.1.0.2 to 12.2.0.1
Upgrading Single Instance on ASM from 11.2.0.3 to 11.2.0.4
Upgrading Grid Infrastructure Used for Single Instance from 11.2.0.4 to 12.1.0.2

Wednesday, May 1, 2019

Deleting a Node From 18c RAC

Deleting a node from an 18c RAC is similar to deleting a node from a 12cR1 RAC. Deletion has three distinct phases: removing the database instance, removing the Oracle database software, and finally removing the clusterware.
The RAC setup in this case is a two-node RAC, and the node named rhel72 will be removed from the cluster. The database is a CDB with a single PDB.
SQL> select instance_number,instance_name,host_name from gv$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME
--------------- ---------------- --------------------
              1 ent18c1          rhel71.domain.net
              2 ent18c2          rhel72.domain.net #<-- node to be removed

SQL>  select inst_id,con_id,name from gv$pdbs order by 1;

   INST_ID     CON_ID NAME
---------- ---------- --------------
         1          2 PDB$SEED
         1          3 PDB18C
         2          2 PDB$SEED
         2          3 PDB18C
1. The first phase is removing the database instance from the node to be deleted. For this, run DBCA on any node except the node that hosts the instance being deleted; in this case DBCA is run from node rhel71. Follow the instance management option to remove the instance.
Select the RAC database instance management and delete instance options.
The following message is shown regarding the service associated with the PDB running on the instance being deleted. DBCA will modify the service to include only the remaining instances.
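As an alternative to the DBCA wizard, the instance can be removed in silent mode. The sketch below is an assumption based on the generic dbca -deleteInstance syntax; verify the flag names with dbca -deleteInstance -help. The database and instance names are those of this setup.
# Run as the oracle user from rhel71 (verify flags with: dbca -deleteInstance -help).
dbca -silent -deleteInstance -gdbName ent18c -instanceName ent18c2 -sysDBAUserName sys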

2. At the end of the DBCA run, the database instance has been removed from the node to be deleted.
SQL>  select instance_number,instance_name,host_name from gv$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME
--------------- ---------------- -------------------
              1 ent18c1          rhel71.domain.net

SQL>  select inst_id,con_id,name from gv$pdbs order by 1;

   INST_ID     CON_ID NAME
---------- ---------- --------------
         1          2 PDB$SEED
         1          3 PDB18C
The Oracle RAC configuration is updated to reflect the changed instance list along with the database service.
srvctl config database -db ent18c
Database unique name: ent18c
Database name: ent18c
Oracle home: /opt/app/oracle/product/18.x.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/ENT18C/PARAMETERFILE/spfile.272.1003495623
Password file: +DATA/ENT18C/PASSWORD/pwdent18c.256.1003490291
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA,FRA
Mount point paths:
Services: pdbsrv
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: oper
Database instances: ent18c1
Configured nodes: rhel71
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed

srvctl config service -db ent18c -service pdbsrv
Service name: pdbsrv
Server pool:
Cardinality: 1
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Failover type:
Failover method:
Failover retries:
Failover delay:
Failover restore: NONE
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Pluggable database name: pdb18c
Hub service:
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Replay Initiation Time: 300 seconds
Drain timeout:
Stop option:
Session State Consistency: DYNAMIC
GSM Flags: 0
Service is enabled
Preferred instances: ent18c1
Available instances:
CSS critical: no
3. Check whether the redo log thread of the deleted instance has been removed from the database. If not, remove the redo thread (and its log groups) associated with the deleted instance.
SQL> select inst_id,group#,thread# from gv$log;

   INST_ID     GROUP#    THREAD#
---------- ---------- ----------
         1          1          1
         1          2          1
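In this case only thread 1 remains, so nothing needs to be done. If the thread of the deleted instance were still present, it could be removed as sketched below; the group numbers are illustrative and must match the leftover thread's log groups.
SQL> alter database disable thread 2;
SQL> alter database drop logfile group 3;
SQL> alter database drop logfile group 4;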
4. Once the instance removal is complete, the next step is to remove the database software. On 18c there is a difference compared to 12cR1: updating the inventory is no longer needed before removing the database software. Simply execute deinstall with the -local option on the node to be deleted. It is important to include the -local option; without it, the Oracle database software on all nodes will be uninstalled.
cd /opt/app/oracle/product/18.x.0/dbhome_1/deinstall/

./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /opt/app/oraInventory/logs/

############ ORACLE DECONFIG TOOL START ############


######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /opt/app/oracle/product/18.x.0/dbhome_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /opt/app/oracle
Checking for existence of central inventory location /opt/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/app/18.x.0/grid
The following nodes are part of this cluster: rhel72,rhel71
Checking for sufficient temp space availability on node(s) : 'rhel72'

## [END] Install check configuration ##


Network Configuration check config START

Network de-configuration trace file location: /opt/app/oraInventory/logs/netdc_check2019-04-04_02-52-54PM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location: /opt/app/oraInventory/logs/databasedc_check2019-04-04_02-52-54PM.log

Use comma as separator when specifying list of values as input

Specify the list of database names that are configured locally on this node for this Oracle home. Local configurations of the discovered databases will be removed [ent18c2]:
Database Check Configuration END

######################### DECONFIG CHECK OPERATION END #########################


####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /opt/app/18.x.0/grid
The following nodes are part of this cluster: rhel72,rhel71
The cluster node(s) on which the Oracle home deinstallation will be performed are:rhel72
Oracle Home selected for deinstall is: /opt/app/oracle/product/18.x.0/dbhome_1
Inventory Location where the Oracle home registered is: /opt/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/opt/app/oraInventory/logs/deinstall_deconfig2019-04-04_02-51-47-PM.out'
Any error messages from this session will be written to: '/opt/app/oraInventory/logs/deinstall_deconfig2019-04-04_02-51-47-PM.err'

######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /opt/app/oraInventory/logs/databasedc_clean2019-04-04_02-52-54PM.log

Network Configuration clean config START

Network de-configuration trace file location: /opt/app/oraInventory/logs/netdc_clean2019-04-04_02-52-54PM.log

Network Configuration clean config END


######################### DECONFIG CLEAN OPERATION END #########################


####################### DECONFIG CLEAN OPERATION SUMMARY #######################
#######################################################################


############# ORACLE DECONFIG TOOL END #############

Using properties file /tmp/deinstall2019-04-04_02-50-37PM/response/deinstall_2019-04-04_02-51-47-PM.rsp
Location of logs /opt/app/oraInventory/logs/

############ ORACLE DEINSTALL TOOL START ############





####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/opt/app/oraInventory/logs/deinstall_deconfig2019-04-04_02-51-47-PM.out'
Any error messages from this session will be written to: '/opt/app/oraInventory/logs/deinstall_deconfig2019-04-04_02-51-47-PM.err'

######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to rhel72
Setting CLUSTER_NODES to rhel72
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2019-04-04_02-50-37PM/oraInst.loc
Setting oracle.installer.local to true

## [END] Preparing for Deinstall ##

Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/opt/app/oracle/product/18.x.0/dbhome_1' from the central inventory on the local node : Done

Delete directory '/opt/app/oracle/product/18.x.0/dbhome_1' on the local node : Done

The Oracle Base directory '/opt/app/oracle' will not be removed on local node. The directory is in use by Oracle Home '/opt/app/18.x.0/grid'.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


## [START] Oracle install clean ##


## [END] Oracle install clean ##


######################### DEINSTALL CLEAN OPERATION END #########################


####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/opt/app/oracle/product/18.x.0/dbhome_1' from the central inventory on the local node.
Successfully deleted directory '/opt/app/oracle/product/18.x.0/dbhome_1' on the local node.
Oracle Universal Installer cleanup was successful.

Review the permissions and contents of '/opt/app/oracle' on nodes(s) 'rhel72'.
If there are no Oracle home(s) associated with '/opt/app/oracle', manually delete '/opt/app/oracle' and its contents.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL TOOL END #############



5. The last step is to remove the clusterware from the node. Check that the node to be deleted is active and unpinned. If the node is pinned, unpin it with the crsctl unpin css command. The following can be run as either the grid user or root.
olsnodes -s -t
rhel71  Active  Unpinned
rhel72  Active  Unpinned
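If olsnodes had reported the node as Pinned, it could be unpinned as root, for example:
# crsctl unpin css -n rhel72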
6. Similar to the database software removal, no inventory update is needed to remove the clusterware. Run deinstall with the -local option on the node to be deleted.
cd $GI_HOME/deinstall
./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2019-04-04_03-00-47PM/logs/

############ ORACLE DECONFIG TOOL START ############


######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /opt/app/18.x.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /opt/app/oracle
Checking for existence of central inventory location /opt/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/app/18.x.0/grid
The following nodes are part of this cluster: rhel72,rhel71
Checking for sufficient temp space availability on node(s) : 'rhel72'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2019-04-04_03-00-47PM/logs//crsdc_2019-04-04_03-02-05-PM.log

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2019-04-04_03-00-47PM/logs/netdc_check2019-04-04_03-02-11PM.log

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2019-04-04_03-00-47PM/logs/asmcadc_check2019-04-04_03-02-11PM.log

Database Check Configuration START

Database de-configuration trace file location: /tmp/deinstall2019-04-04_03-00-47PM/logs/databasedc_check2019-04-04_03-02-11PM.log

Oracle Grid Management database was found in this Grid Infrastructure home

Database Check Configuration END

######################### DECONFIG CHECK OPERATION END #########################


####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /opt/app/18.x.0/grid
The following nodes are part of this cluster: rhel72,rhel71
The cluster node(s) on which the Oracle home deinstallation will be performed are:rhel72
Oracle Home selected for deinstall is: /opt/app/18.x.0/grid
Inventory Location where the Oracle home registered is: /opt/app/oraInventory
Option -local will not modify any ASM configuration.
Oracle Grid Management database was found in this Grid Infrastructure home
Oracle Grid Management database will be relocated to another node during deconfiguration of local node
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2019-04-04_03-00-47PM/logs/deinstall_deconfig2019-04-04_03-01-58-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2019-04-04_03-00-47PM/logs/deinstall_deconfig2019-04-04_03-01-58-PM.err'

######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /tmp/deinstall2019-04-04_03-00-47PM/logs/databasedc_clean2019-04-04_03-02-11PM.log
ASM de-configuration trace file location: /tmp/deinstall2019-04-04_03-00-47PM/logs/asmcadc_clean2019-04-04_03-02-11PM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2019-04-04_03-00-47PM/logs/netdc_clean2019-04-04_03-02-11PM.log

Network Configuration clean config END


Run the following command as the root user or the administrator on node "rhel72".

/opt/app/18.x.0/grid/crs/install/rootcrs.sh -force  -deconfig -paramfile "/tmp/deinstall2019-04-04_03-00-47PM/response/deinstall_OraGI18Home1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------
The following is the output from running the above command as root on rhel72.
# /opt/app/18.x.0/grid/crs/install/rootcrs.sh -force  -deconfig -paramfile "/tmp/deinstall2019-04-04_03-00-47PM/response/deinstall_OraGI18Home1.rsp"
Using configuration parameter file: /tmp/deinstall2019-04-04_03-00-47PM/response/deinstall_OraGI18Home1.rsp
The log of current session can be found at:
  /tmp/deinstall2019-04-04_03-00-47PM/logs/crsdeconfig_rhel72_2019-04-04_03-04-57PM.log
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel72'
CRS-2673: Attempting to stop 'ora.crsd' on 'rhel72'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'rhel72'
CRS-2673: Attempting to stop 'ora.chad' on 'rhel72'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rhel72'
CRS-2673: Attempting to stop 'ora.OCRDG.dg' on 'rhel72'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'rhel72'
CRS-2677: Stop of 'ora.DATA.dg' on 'rhel72' succeeded
CRS-2677: Stop of 'ora.OCRDG.dg' on 'rhel72' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'rhel72' succeeded
CRS-2673: Attempting to stop 'ora.GIMRDG.dg' on 'rhel72'
CRS-2677: Stop of 'ora.GIMRDG.dg' on 'rhel72' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rhel72'
CRS-2677: Stop of 'ora.asm' on 'rhel72' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'rhel72'
CRS-2677: Stop of 'ora.chad' on 'rhel72' succeeded
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rhel72' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rhel72' has completed
CRS-2677: Stop of 'ora.crsd' on 'rhel72' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rhel72'
CRS-2673: Attempting to stop 'ora.crf' on 'rhel72'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rhel72'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rhel72'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rhel72' succeeded
CRS-2677: Stop of 'ora.crf' on 'rhel72' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rhel72' succeeded
CRS-2677: Stop of 'ora.asm' on 'rhel72' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rhel72'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rhel72' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rhel72'
CRS-2673: Attempting to stop 'ora.evmd' on 'rhel72'
CRS-2677: Stop of 'ora.evmd' on 'rhel72' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rhel72' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rhel72'
CRS-2677: Stop of 'ora.cssd' on 'rhel72' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rhel72'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rhel72'
CRS-2677: Stop of 'ora.gpnpd' on 'rhel72' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rhel72' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel72' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2019/04/04 15:08:05 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2019/04/04 15:08:43 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
2019/04/04 15:09:11 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node

7. From a node remaining in the cluster, run the node deletion command as root.
# crsctl delete node -n rhel72
CRS-4661: Node rhel72 successfully deleted.
8. Use cluvfy to verify node deletion.
cluvfy stage -post nodedel -n rhel72

Verifying Node Removal ...
  Verifying CRS Integrity ...PASSED
  Verifying Clusterware Version Consistency ...PASSED
Verifying Node Removal ...PASSED

Post-check for node removal was successful.

CVU operation performed:      stage -post nodedel
Date:                         04-Apr-2019 15:12:44
CVU home:                     /opt/app/18.x.0/grid/
User:                         grid
This concludes the node deletion from the 18c RAC.

Related Posts
Deleting a Node From 12cR1 RAC
Deleting a Node From 11gR2 RAC
Deleting a 11gR1 RAC Node