Friday, June 28, 2019

Root.sh Fails with CLSRSC-119: Start of the exclusive mode cluster failed

On a two node cluster setup on 19c, running root.sh on the first node fails as below.
Using configuration parameter file: /opt/app/19.x.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
/opt/app/oracle/crsdata/rhel71/crsconfig/rootcrs_rhel71_2019-06-03_05-55-22AM.log
2019/06/03 05:55:34 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2019/06/03 05:55:34 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2019/06/03 05:55:34 CLSRSC-363: User ignored prerequisites during installation
2019/06/03 05:55:34 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2019/06/03 05:55:34 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2019/06/03 05:55:38 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2019/06/03 05:55:39 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2019/06/03 05:55:39 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2019/06/03 05:55:40 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2019/06/03 05:55:42 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2019/06/03 05:55:48 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2019/06/03 05:55:49 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2019/06/03 05:56:22 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2019/06/03 05:56:24 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2019/06/03 05:56:24 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2019/06/03 05:56:47 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2019/06/03 05:56:49 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2019/06/03 05:56:52 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2019/06/03 05:56:57 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2672: Attempting to start 'ora.evmd' on 'rhel71'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rhel71'
CRS-2676: Start of 'ora.mdnsd' on 'rhel71' succeeded
CRS-2676: Start of 'ora.evmd' on 'rhel71' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rhel71'
CRS-2676: Start of 'ora.gpnpd' on 'rhel71' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rhel71'
CRS-2672: Attempting to start 'ora.gipcd' on 'rhel71'
CRS-2676: Start of 'ora.cssdmonitor' on 'rhel71' succeeded
CRS-2674: Start of 'ora.gipcd' on 'rhel71' failed
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rhel71'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rhel71' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rhel71'
CRS-2677: Stop of 'ora.gpnpd' on 'rhel71' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rhel71'
CRS-2677: Stop of 'ora.mdnsd' on 'rhel71' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'rhel71'
CRS-2677: Stop of 'ora.evmd' on 'rhel71' succeeded
CRS-4000: Command Start failed, or completed with errors.
2019/06/03 05:58:40 CLSRSC-119: Start of the exclusive mode cluster failed
Died at /opt/app/19.x.0/grid/crs/install/crsinstall.pm line 2439. 
There are multiple MOS notes on root.sh failures, as there are many different reasons for them. But in this case, neither MOS nor any of the logs generated during the installation revealed the real reason, hence this post.



The reason for this failure was the length of the cluster name. The cluster name should be between 1 and 15 characters in length, but it seems this is never checked during response file parsing (field oracle.install.crs.config.clusterName) or when installing via OUI. Therefore it is possible to begin an installation with a cluster name longer than 15 characters, as shown below (where the cluster name is 17 characters long).

However, as shown earlier, the installation will fail at root.sh execution time. Therefore, if root.sh fails on the first node, check that the length of the cluster name conforms to the limit set by Oracle.
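Since the installer does not enforce the limit, a quick pre-check of the intended cluster name can save a failed root.sh run. A minimal sketch (the cluster name below is a made-up example):

```shell
# Pre-check a proposed cluster name against the 1-15 character limit.
# CLUSTER_NAME is a hypothetical example value, 18 characters long.
CLUSTER_NAME="prod-cluster-19c01"
LEN=${#CLUSTER_NAME}
if [ "$LEN" -ge 1 ] && [ "$LEN" -le 15 ]; then
  echo "OK: '$CLUSTER_NAME' is $LEN characters"
else
  echo "FAIL: '$CLUSTER_NAME' is $LEN characters; must be 1-15"
fi
```

The same check could be run against the oracle.install.crs.config.clusterName value in a response file before a silent install.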

Related Post
Changing The Cluster Name

Friday, June 21, 2019

Adding a Node to 18c RAC Using Cloning

This post shows the steps for adding a node by way of cloning on 18c. On 19c, extending RAC by way of cloning is deprecated; however, the steps are still listed in the 19c Clusterware documentation.
The post assumes all other prerequisites related to adding a node have been completed. The new node is called rhel72.
1. As the first step, create an image from an existing node. The steps for creating a gold image could be used for this.
2. Unzip the gold image on the new node using the same directory structure as the existing nodes.
3. Remove the following files from the new node.
rm -rf $GI_HOME/network/admin/*.ora
rm -rf $GI_HOME/root.sh*
4. Run gridSetup and select the software-only option.
The following prompt appears about the missing root.sh files. Ignore it and click yes to continue.
Summary of the software-only setup.
When prompted, execute the root scripts.
The root script output is shown below.
# /opt/app/18.x.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /opt/app/18.x.0/grid

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

To configure Grid Infrastructure for a Cluster or Grid Infrastructure for a Stand-Alone Server execute the following command as grid user:
/opt/app/18.x.0/grid/gridSetup.sh
This command launches the Grid Infrastructure Setup Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.
Complete the GI software only install.

5. From an existing node, run gridSetup with the -noCopy option and select add node.
On the existing node run
gridSetup.sh -noCopy
The -noCopy option prevents the GI binaries from being copied to the remote node being added, since it already has the software installed. The rest of the steps for adding the node are the same as in the previous add-node post. Run root.sh on the new node when prompted. Output from root.sh on the new node is shown below.
# /opt/app/18.x.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /opt/app/18.x.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /opt/app/18.x.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/app/oracle/crsdata/rhel72/crsconfig/rootcrs_rhel72_2019-04-09_11-10-41AM.log
2019/04/09 11:10:51 CLSRSC-594: Executing installation step 1 of 20: 'SetupTFA'.
2019/04/09 11:10:51 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2019/04/09 11:11:32 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2019/04/09 11:11:32 CLSRSC-594: Executing installation step 2 of 20: 'ValidateEnv'.
2019/04/09 11:11:43 CLSRSC-363: User ignored prerequisites during installation
2019/04/09 11:11:44 CLSRSC-594: Executing installation step 3 of 20: 'CheckFirstNode'.
2019/04/09 11:11:45 CLSRSC-594: Executing installation step 4 of 20: 'GenSiteGUIDs'.
2019/04/09 11:11:50 CLSRSC-594: Executing installation step 5 of 20: 'SaveParamFile'.
2019/04/09 11:11:57 CLSRSC-594: Executing installation step 6 of 20: 'SetupOSD'.
2019/04/09 11:11:57 CLSRSC-594: Executing installation step 7 of 20: 'CheckCRSConfig'.
2019/04/09 11:12:34 CLSRSC-17: Invalid GPnP setup
2019/04/09 11:12:43 CLSRSC-594: Executing installation step 8 of 20: 'SetupLocalGPNP'.
2019/04/09 11:12:45 CLSRSC-594: Executing installation step 9 of 20: 'CreateRootCert'.
2019/04/09 11:12:45 CLSRSC-594: Executing installation step 10 of 20: 'ConfigOLR'.
2019/04/09 11:12:56 CLSRSC-594: Executing installation step 11 of 20: 'ConfigCHMOS'.
2019/04/09 11:12:56 CLSRSC-594: Executing installation step 12 of 20: 'CreateOHASD'.
2019/04/09 11:12:59 CLSRSC-594: Executing installation step 13 of 20: 'ConfigOHASD'.
2019/04/09 11:12:59 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2019/04/09 11:14:46 CLSRSC-594: Executing installation step 14 of 20: 'InstallAFD'.
2019/04/09 11:14:50 CLSRSC-594: Executing installation step 15 of 20: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel72'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel72' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2019/04/09 11:16:15 CLSRSC-594: Executing installation step 16 of 20: 'InstallKA'.
2019/04/09 11:16:17 CLSRSC-594: Executing installation step 17 of 20: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel72'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel72' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel72'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rhel72'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rhel72' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel72' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2019/04/09 11:16:31 CLSRSC-594: Executing installation step 18 of 20: 'StartCluster'.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rhel72'
CRS-2672: Attempting to start 'ora.evmd' on 'rhel72'
CRS-2676: Start of 'ora.mdnsd' on 'rhel72' succeeded
CRS-2676: Start of 'ora.evmd' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rhel72'
CRS-2676: Start of 'ora.gpnpd' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rhel72'
CRS-2676: Start of 'ora.gipcd' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rhel72'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rhel72'
CRS-2676: Start of 'ora.cssdmonitor' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rhel72'
CRS-2672: Attempting to start 'ora.diskmon' on 'rhel72'
CRS-2676: Start of 'ora.diskmon' on 'rhel72' succeeded
CRS-2676: Start of 'ora.crf' on 'rhel72' succeeded
CRS-2676: Start of 'ora.cssd' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rhel72'
CRS-2672: Attempting to start 'ora.ctssd' on 'rhel72'
CRS-2676: Start of 'ora.ctssd' on 'rhel72' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rhel72'
CRS-2676: Start of 'ora.asm' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rhel72'
CRS-2676: Start of 'ora.storage' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rhel72'
CRS-2676: Start of 'ora.crsd' on 'rhel72' succeeded
CRS-6017: Processing resource auto-start for servers: rhel72
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rhel71'
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rhel72'
CRS-2672: Attempting to start 'ora.ons' on 'rhel72'
CRS-2672: Attempting to start 'ora.chad' on 'rhel72'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rhel72'
CRS-2676: Start of 'ora.chad' on 'rhel72' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rhel71' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rhel71'
CRS-2677: Stop of 'ora.scan1.vip' on 'rhel71' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rhel72'
CRS-2676: Start of 'ora.ons' on 'rhel72' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rhel72'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rhel72' succeeded
CRS-2676: Start of 'ora.asm' on 'rhel72' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rhel72'
CRS-2676: Start of 'ora.DATA.dg' on 'rhel72' succeeded
CRS-6016: Resource auto-start has completed for server rhel72
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2019/04/09 11:18:29 CLSRSC-343: Successfully started Oracle Clusterware stack
2019/04/09 11:18:29 CLSRSC-594: Executing installation step 19 of 20: 'ConfigNode'.
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2019/04/09 11:19:15 CLSRSC-594: Executing installation step 20 of 20: 'PostConfig'.
2019/04/09 11:19:28 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
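Once root.sh completes, cluster membership can be confirmed with olsnodes from the GI home. A minimal sketch; the node list would normally be captured live from olsnodes, and a sample of the expected output is used here for illustration:

```shell
# Confirm the new node joined the cluster. Normally this would be captured
# live with: NODES=$($GI_HOME/bin/olsnodes -n). A sample of the expected
# output for this cluster is used here for illustration.
NODES="rhel71 1
rhel72 2"
if echo "$NODES" | grep -qw rhel72; then
  echo "rhel72 is a cluster member"
else
  echo "rhel72 is not in the cluster"
fi
```

For a fuller post-add verification, cluvfy stage -post nodeadd -n rhel72 could also be run.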




6. Once the GI is cloned and the node added, the next step is to clone the Oracle database software. For this too, a gold image of the OH could be used.
7. Unzip the OH gold image on the new node following the same directory structure as the existing nodes.
8. Run the clone script to configure the OH. The Oracle documentation lists -noConfig as an option of the clone command, but on 18c this was not supported.
$ORACLE_HOME/perl/bin/perl clone.pl -silent -O 'CLUSTER_NODES={rhel71,rhel72}' -O LOCAL_NODE=rhel72 ORACLE_BASE=$ORACLE_BASE ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=OraDB18Home1 -O -noConfig
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 2595 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 3006 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2019-04-09_02-42-13PM. Please wait ...
[INS-04009] The argument [-noconfig] passed is not supported for the current context Clone
Run the clone command without the -noConfig option and the cloning of the OH completes without issue.
perl $ORACLE_HOME/clone/bin/clone.pl -silent ORACLE_HOME="/opt/app/oracle/product/18.x.0/dbhome_1"  ORACLE_HOME_NAME="OraDB18Home1" ORACLE_BASE="/opt/app/oracle" "CLUSTER_NODES={rhel71,rhel72}" LOCAL_NODE=rhel72
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 3918 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 2974 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2019-04-09_03-26-55PM. Please wait ...
You can find the log of this install session at:
 /opt/app/oraInventory/logs/cloneActions2019-04-09_03-26-55PM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........
Copy files in progress.

Copy files successful.

Link binaries in progress.
..........
Link binaries successful.

Setup files in progress.
..........
Setup files successful.

Setup Inventory in progress.

Setup Inventory successful.
..........
Finish Setup successful.
The cloning of OraDB18Home1 was successful.
Please check '/opt/app/oraInventory/logs/cloneActions2019-04-09_03-26-55PM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   95% Done.

As a root user, execute the following script(s):
        1. /opt/app/oracle/product/18.x.0/dbhome_1/root.sh

Execute /opt/app/oracle/product/18.x.0/dbhome_1/root.sh on the following nodes:
[rhel72]


..................................................   100% Done.

9. Run DBCA from an existing node and select the add instance option.
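After DBCA finishes, a quick check that the new instance is up on the new node can be done with srvctl. A minimal sketch; the database name and the srvctl output below are hypothetical samples:

```shell
# Verify the new instance runs on the new node. Normally captured live with:
#   STATUS=$(srvctl status database -d <db_name>)
# A hypothetical sample of the output is used here for illustration.
STATUS="Instance racdb1 is running on node rhel71
Instance racdb2 is running on node rhel72"
if echo "$STATUS" | grep -q "running on node rhel72"; then
  echo "new instance is running on rhel72"
else
  echo "new instance is not running on rhel72"
fi
```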

Friday, June 14, 2019

ORA-03186: Cannot start Oracle ADG recovery on a non-Oracle Cloud database on a server that is not a primary server

ORA-03186 occurs in a data guard configuration if the standby (or one of the standbys in a multiple standby configuration) was open read only when the primary is restarted (or a switchover happens between the primary and another standby in a multiple standby configuration).
The setup is the same as the one used in the post on upgrading from 12.2 to 18c. The exact version of 18c is 18.6. All the servers are hosted on AWS. The current setup is as follows.
DGMGRL> show configuration

Configuration - dg12c2

  Protection Mode: MaxAvailability
  Members:
  colombo - Primary database
    london  - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 51 seconds ago)
When the primary is restarted
[oracle@ip-172-31-20-117 trace]$ srvctl stop database -d colombo
[oracle@ip-172-31-20-117 trace]$ srvctl start database -d colombo
redo shipping to the standby stops.
The following message could be seen in the alert log of the standby instance (the instance open in read only mode, london).
2019-05-29T05:38:20.414031-04:00
 rfs (PID:9340): Client is running on host ip-172-31-20-117.eu-west-1.compute.internal, not the current host ip-172-31-15-199.eu-west-1.compute.internal
2019-05-29T05:38:21.974469-04:00
 rfs (PID:9348): Client is running on host ip-172-31-20-117.eu-west-1.compute.internal, not the current host ip-172-31-15-199.eu-west-1.compute.internal
In the alert log of the primary (colombo), the following could be seen.
 2019-05-29T05:38:22.283282-04:00
Errors in file /opt/app/oracle/diag/rdbms/colombo/colombo/trace/colombo_tt00_10673.trc:
ORA-03186: Cannot start Oracle ADG recovery on a non-Oracle Cloud database on a server that is not a primary server.
If the error message shown is
ORA-03816: Message 3816 not found;  product=RDBMS; facility=ORA
then apply patch 27539475 (on 18c) to get the above error message.
Inside the trace file, the following lines show that attempts were made to ship redo to the instance open in read only mode.
*** 2019-05-29T05:38:22.282064-04:00
krsu_upi_status: Error 3186 attaching RFS server to standby instance at host 'londontns'
krsi_verify_network: Error 3186 attaching to LOG_ARCHIVE_DEST_2 standby host londontns
 at 0x7fffa4dc23c8 placed krsg.c@4998
ORA-03186: Cannot start Oracle ADG recovery on a non-Oracle Cloud database on a server that is not a primary server.
The data guard configuration shows an error status.
DGMGRL> show configuration

Configuration - dg12c2

  Protection Mode: MaxAvailability
  Members:
  colombo - Primary database
    Error: ORA-16810: multiple errors or warnings detected for the member

    london  - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
ERROR   (status updated 10 seconds ago)
The primary database configuration and status are shown below.
DGMGRL> show database verbose colombo

Database - colombo

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON
  Instance(s):
    colombo
      Error: ORA-16737: the redo transport service for member "london" has an error

  Database Warning(s):
    ORA-16629: database reports a different protection level from the protection mode

  Properties:
    DGConnectIdentifier             = 'colombotns'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'SYNC'
    RedoRoutes                      = ''
    DelayMins                       = '0'
    Binding                         = 'optional'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyLagThreshold               = '30'
    TransportLagThreshold           = '30'
    TransportDisconnectedThreshold  = '30'
    ApplyParallel                   = 'AUTO'
    ApplyInstances                  = '0'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '10'
    LogArchiveMinSucceedDest        = '1'
    DataGuardSyncLatency            = '0'
    DbFileNameConvert               = '/london/, /colombo/'
    LogFileNameConvert              = '/london/, /colombo/'
    FastStartFailoverTarget         = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    PreferredObserverHosts          = ''
    HostName                        = 'ip-172-31-20-117.eu-west-1.compute.internal'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ip-172-31-20-117.eu-west-1.compute.internal)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=colombo_DGMGRL.domain.net)(INSTANCE_NAME=colombo)(SERVER=DEDICATED)))'
    OnlineArchiveLocation           = ''
    OnlineAlternateLocation         = ''
    StandbyArchiveLocation          = ''
    StandbyAlternateLocation        = ''
    LogArchiveTrace                 = '2049'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'
    SidName                         = '(monitor)'

  Log file locations:
    Alert log               : /opt/app/oracle/diag/rdbms/colombo/colombo/trace/alert_colombo.log
    Data Guard Broker log   : /opt/app/oracle/diag/rdbms/colombo/colombo/trace/drccolombo.log

Database Status:
ERROR
Standby database configuration
DGMGRL> show database verbose london

Database - london

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 203 seconds ago)
  Apply Lag:          0 seconds (computed 203 seconds ago)
  Average Apply Rate: 62.00 KByte/s
  Active Apply Rate:  0 Byte/s
  Maximum Apply Rate: 0 Byte/s
  Real Time Query:    ON
  Instance(s):
    london

  Database Warning(s):
    ORA-16857: member disconnected from redo source for longer than specified threshold

  Properties:
    DGConnectIdentifier             = 'londontns'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'SYNC'
    RedoRoutes                      = ''
    DelayMins                       = '0'
    Binding                         = 'OPTIONAL'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyLagThreshold               = '30'
    TransportLagThreshold           = '30'
    TransportDisconnectedThreshold  = '30'
    ApplyParallel                   = 'AUTO'
    ApplyInstances                  = '0'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '10'
    LogArchiveMinSucceedDest        = '1'
    DataGuardSyncLatency            = '0'
    DbFileNameConvert               = '/colombo/, /london/'
    LogFileNameConvert              = '/colombo/, /london/'
    FastStartFailoverTarget         = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    PreferredObserverHosts          = ''
    HostName                        = 'ip-172-31-15-199.eu-west-1.compute.internal'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ip-172-31-15-199.eu-west-1.compute.internal)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=london_DGMGRL.domain.net)(INSTANCE_NAME=london)(SERVER=DEDICATED)))'
    OnlineArchiveLocation           = ''
    OnlineAlternateLocation         = ''
    StandbyArchiveLocation          = ''
    StandbyAlternateLocation        = ''
    LogArchiveTrace                 = '2049'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'
    SidName                         = '(monitor)'

  Log file locations:
    Alert log               : /opt/app/oracle/diag/rdbms/london/london/trace/alert_london.log
    Data Guard Broker log   : /opt/app/oracle/diag/rdbms/london/london/trace/drclondon.log

Database Status:
WARNING




There are no inconsistent properties in any of the databases.
DGMGRL> show database colombo 'InconsistentProperties';
INCONSISTENT PROPERTIES
   INSTANCE_NAME        PROPERTY_NAME         MEMORY_VALUE         SPFILE_VALUE         BROKER_VALUE

DGMGRL>  show database london  'InconsistentProperties';
INCONSISTENT PROPERTIES
   INSTANCE_NAME        PROPERTY_NAME         MEMORY_VALUE         SPFILE_VALUE         BROKER_VALUE
The archive dest status on the primary shows the same error as the alert log.
SQL> select dest_id id, dest_name name, status, database_mode db_mode,
       recovery_mode, protection_mode, standby_logfile_count "SRLs",
       standby_logfile_active ACTIVE, archived_seq#, error
     from v$archive_dest_status where dest_id=2;

        ID NAME                 STATUS    DB_MODE         RECOVERY_MODE                      PROTECTION_MODE            SRLs     ACTIVE ARCHIVED_SEQ# ERROR
---------- -------------------- --------- --------------- ---------------------------------- -------------------- ---------- ---------- ------------- ----------------------------------------
         2 LOG_ARCHIVE_DEST_2   ERROR     UNKNOWN         IDLE                               RESYNCHRONIZATION             0          0             0 ORA-03186: Cannot start Oracle ADG
                                                                                                                                                      recovery on a non-Oracle Cloud database
                                                                                                                                                      on a server that is not a primary
                                                                                                                                                      server.

To resolve the redo shipping issue, restart the instance open in read only mode.
[oracle@ip-172-31-15-199 trace]$ srvctl stop database -d london
[oracle@ip-172-31-15-199 trace]$ srvctl start database -d london
Once it is started, redo shipping resumes and the log archive dest error clears.
DGMGRL> show configuration

Configuration - dg12c2

  Protection Mode: MaxAvailability
  Members:
  colombo - Primary database
    london  - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 11 seconds ago)

The other workaround is to put the read only instance into mount mode before the primary is stopped (or the switchover happens) and open it read only afterwards. Either method would result in clients being disconnected from the read only instance.
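Whether a standby is in the affected state can be checked from its open mode before restarting the primary. A minimal sketch; the value would normally come from querying v$database on the standby, and a sample value is used here for illustration:

```shell
# Decide whether a standby needs attention before a primary restart/switchover.
# OPEN_MODE would normally come from: select open_mode from v$database;
# executed on the standby. A sample value is used here for illustration.
OPEN_MODE="READ ONLY WITH APPLY"
case "$OPEN_MODE" in
  *"READ ONLY"*) echo "standby is open read only - redo shipping will break on primary restart" ;;
  MOUNTED)       echo "standby is mounted - not affected" ;;
  *)             echo "unexpected open mode: $OPEN_MODE" ;;
esac
```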
The same issue was observed on a 19c multiple data guard configuration hosted on AWS.
DGMGRL> show configuration
Configuration - fcdg
Protection Mode: MaxPerformance
Members:
gold - Primary database
silver - Physical standby database
bronze - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS. 
The silver instance is mounted and bronze is open read only. When there was a switchover between gold and silver, redo transport to bronze from the new primary (silver) stopped with the same error as on 18c. The only way to resolve the issue was to restart the instance open in read only mode.
However, an 11.2.0.4 data guard configuration hosted on AWS did not have this issue. Redo shipping worked fine across primary restarts while the standby was open in read only mode. It seems this restriction on non-Oracle clouds was introduced in later versions.

Update on 2019-09-24
The issue is fixed with the 18.7 RU and, for 19c, with the 19.4 RU. For other versions, apply patch 30289758 (30289758: MERGE ON DATABASE RU 18.5.0.0.0 OF 27539475 29430524) if available.