Friday, March 29, 2019

Backup and Recover Using DBCLI on OCI PaaS DB

OCI PaaS DBs can be configured with automatic backups, in which case OCI takes care of the backup. The database gets backed up to object storage, but the backup files, and the bucket they are stored in, are not visible from the console. This post shows the steps for backing up a PaaS DB to a user-created object storage bucket using DBCLI. It is assumed an object storage bucket has already been created. For this post an object storage bucket called "orabaks" was created; this will be used as the backup storage.

Backup using DBCLI
1. In order to use this object storage bucket it must be associated with an object store swift endpoint. Create the object store swift endpoint as shown below. This uses the tenancy name, the username used to log in to the tenancy, and an authentication token as the password for the "object store swift password" prompt. The endpoint URL must be changed to reflect the region where the DB resides. Refer to the previous post on dbcli commands to find the region.
# dbcli create-objectstoreswift -n orabackupstore -t tenancyName -u asanga@email.com -e https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1 -p
object store swift password:
{
  "jobId" : "3fa9e4fd-9da1-413d-9bca-e2434afb9229",
  "status" : "Created",
  "message" : "Create object store swift",
  "reports" : [ ],
  "createTimestamp" : "March 13, 2019 11:12:26 AM UTC",
  "resourceList" : [ {
    "resourceId" : "cb78ef29-a447-4b2d-af17-5b110a6717cb",
    "resourceType" : "ObjectStoreSwift",
    "jobId" : "3fa9e4fd-9da1-413d-9bca-e2434afb9229",
    "updatedTime" : "March 13, 2019 11:12:26 AM UTC"
  } ],
  "description" : "create object store:orabackupstore",
  "updatedTime" : "March 13, 2019 11:12:26 AM UTC"
}
# dbcli list-objectstoreswifts

ID                                       Name                 UserName                       TenantName                Url
---------------------------------------- -------------------- ------------------------------ ------------------------- ---------------------------------------------------------------------------
c9ffa569-7f15-4220-8524-0e6f294871a4     bnCGRmStQJ1DnEMR1p4a bnCGRmStQJ1DnEMR1p4a           dbbackupfra               https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1
cb78ef29-a447-4b2d-af17-5b110a6717cb     orabackupstore       asanga@email.com               tenancyName               https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1
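When scripting these steps, the JSON that dbcli returns can be parsed to capture the resource ID needed by the next command. A minimal sketch (the sample response is a trimmed copy of the output above; the parsing logic is an illustration, not a dbcli feature):

```python
import json

# Trimmed copy of the `dbcli create-objectstoreswift` response shown above
response = """
{
  "jobId" : "3fa9e4fd-9da1-413d-9bca-e2434afb9229",
  "status" : "Created",
  "resourceList" : [ {
    "resourceId" : "cb78ef29-a447-4b2d-af17-5b110a6717cb",
    "resourceType" : "ObjectStoreSwift",
    "jobId" : "3fa9e4fd-9da1-413d-9bca-e2434afb9229"
  } ]
}
"""

doc = json.loads(response)
# The ObjectStoreSwift resource ID is the value passed to
# `dbcli create-backupconfig -o` in the next step
swift_id = next(r["resourceId"] for r in doc["resourceList"]
                if r["resourceType"] == "ObjectStoreSwift")
print(swift_id)  # cb78ef29-a447-4b2d-af17-5b110a6717cb
```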

2. Create a backup configuration using the object store swift ID created in the previous step and the object storage bucket name (orabaks). The backup destination is set to object store; this can be changed to disk for disk backups, in which case backups are created in the FRA.
# dbcli create-backupconfig -d OBJECTSTORE -c orabaks -o cb78ef29-a447-4b2d-af17-5b110a6717cb -w 1 -n mybakconfig1
{
  "jobId" : "0ecd1ed3-658a-4374-ad42-105f80e5480d",
  "status" : "Created",
  "message" : "backup config creation",
  "reports" : [ ],
  "createTimestamp" : "March 13, 2019 11:21:05 AM UTC",
  "resourceList" : [ {
    "resourceId" : "09c01611-6a58-49d1-b9de-553594b73cfc",
    "resourceType" : "BackupConfig",
    "jobId" : "0ecd1ed3-658a-4374-ad42-105f80e5480d",
    "updatedTime" : "March 13, 2019 11:21:05 AM UTC"
  } ],
  "description" : "create backup config:mybakconfig1",
  "updatedTime" : "March 13, 2019 11:21:05 AM UTC"
}

dbcli list-backupconfigs

ID                                       Name                 RecoveryWindow   CrosscheckEnabled   BackupDestination
---------------------------------------- -------------------- ---------------- ------------------- --------------------
2e6e46e9-9dca-4906-a2e1-27b8df727152     bnCGRmStQJ1DnEMR1p4a_BC 30               true                ObjectStore
09c01611-6a58-49d1-b9de-553594b73cfc     mybakconfig1         1                   true                ObjectStore


dbcli describe-backupconfig -i 09c01611-6a58-49d1-b9de-553594b73cfc
Backup Config details
----------------------------------------------------------------
                     ID: 09c01611-6a58-49d1-b9de-553594b73cfc
                   Name: mybakconfig1
      CrosscheckEnabled: true
         RecoveryWindow: 1
      BackupDestination: ObjectStore
         BackupLocation: orabaks
          ObjectStoreId: cb78ef29-a447-4b2d-af17-5b110a6717cb
            CreatedTime: March 13, 2019 11:21:05 AM UTC
            UpdatedTime: March 13, 2019 11:21:34 AM UTC

3. The next step is to associate the newly created backup configuration with the database. For this the database ID is needed. Use the following commands to find the DB ID and the ID of the backup configuration currently associated with the DB.
dbcli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
690f6c09-7c48-4332-ab65-b86f932c1a33     apcdb      Si       18.5.0.0.190115      true       Oltp              ASM        Configured   0f59f70a-167b-4d8a-84e3-6e7d7529b093

dbcli describe-database -i 690f6c09-7c48-4332-ab65-b86f932c1a33

Database details
----------------------------------------------------------------
                     ID: 690f6c09-7c48-4332-ab65-b86f932c1a33
            Description: apcdb
                DB Name: apcdb
             DB Version: 18.5.0.0.190115
                DB Type: Si
             DB Edition: EE
                   DBID: 1068299060
 Instance Only Database: false
                    CDB: true
               PDB Name: cgpdb
    PDB Admin User Name: pdbuser
                  Class: Oltp
                  Shape:
                Storage: ASM
          DB Redundancy:
           CharacterSet: AL32UTF8
  National CharacterSet: AL16UTF16
               Language: AMERICAN
              Territory: AMERICA
                Home ID: 0f59f70a-167b-4d8a-84e3-6e7d7529b093
        Console Enabled: false
     Level 0 Backup Day: Sunday
     AutoBackup Enabled: false
         BackupConfigId: 2e6e46e9-9dca-4906-a2e1-27b8df727152
                Created: March 11, 2019 9:06:44 AM UTC
         DB Domain Name: dbsubnet.fravcn.oraclevcn.com

4. To associate the new backup configuration with the database, use the update-database command.
dbcli update-database -i 690f6c09-7c48-4332-ab65-b86f932c1a33 -bi 09c01611-6a58-49d1-b9de-553594b73cfc
{
  "jobId" : "597671db-4db5-4a1e-91af-325d8c9ae39e",
  "status" : "Created",
  "message" : "update database",
  "reports" : [ ],
  "createTimestamp" : "March 13, 2019 11:26:50 AM UTC",
  "resourceList" : [ {
    "resourceId" : "690f6c09-7c48-4332-ab65-b86f932c1a33",
    "resourceType" : "DB",
    "jobId" : "597671db-4db5-4a1e-91af-325d8c9ae39e",
    "updatedTime" : "March 13, 2019 11:26:50 AM UTC"
  } ],
  "description" : "update database : apcdb",
  "updatedTime" : "March 13, 2019 11:26:50 AM UTC"
}

 dbcli describe-database -i 690f6c09-7c48-4332-ab65-b86f932c1a33

Database details
----------------------------------------------------------------
                     ID: 690f6c09-7c48-4332-ab65-b86f932c1a33
            Description: apcdb
                DB Name: apcdb
             DB Version: 18.5.0.0.190115
                DB Type: Si
             DB Edition: EE
                   DBID: 1068299060
 Instance Only Database: false
                    CDB: true
               PDB Name: cgpdb
    PDB Admin User Name: pdbuser
                  Class: Oltp
                  Shape:
                Storage: ASM
          DB Redundancy:
           CharacterSet: AL32UTF8
  National CharacterSet: AL16UTF16
               Language: AMERICAN
              Territory: AMERICA
                Home ID: 0f59f70a-167b-4d8a-84e3-6e7d7529b093
        Console Enabled: false
     Level 0 Backup Day: Sunday
     AutoBackup Enabled: false
         BackupConfigId: 09c01611-6a58-49d1-b9de-553594b73cfc
                Created: March 11, 2019 9:06:44 AM UTC
         DB Domain Name: dbsubnet.fravcn.oraclevcn.com
5. Create a database backup by specifying the DB ID. The backup files will be stored in the user-created object storage bucket (orabaks). The following command creates an archive log backup.
dbcli create-backup -bt ArchiveLog -i 690f6c09-7c48-4332-ab65-b86f932c1a33
{
  "jobId" : "047bbf1e-2ef2-4efd-a0dd-663d0b2c4456",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "March 13, 2019 11:34:01 AM UTC",
  "resourceList" : [ ],
  "description" : "Create ArchiveLog Backup with TAG-auto for Db:apcdb in OSS:orabaks",
  "updatedTime" : "March 13, 2019 11:34:01 AM UTC"
}

dbcli describe-job -i 047bbf1e-2ef2-4efd-a0dd-663d0b2c4456

Job details
----------------------------------------------------------------
                     ID:  047bbf1e-2ef2-4efd-a0dd-663d0b2c4456
            Description:  Create ArchiveLog Backup with TAG-auto for Db:apcdb in OSS:orabaks
                 Status:  Success
                Created:  March 13, 2019 11:34:01 AM UTC
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate backup config                   March 13, 2019 11:34:04 AM UTC      March 13, 2019 11:34:04 AM UTC      Success
Database container validation            March 13, 2019 11:34:04 AM UTC      March 13, 2019 11:34:05 AM UTC      Success
libopc existence check                   March 13, 2019 11:34:05 AM UTC      March 13, 2019 11:34:05 AM UTC      Success
Backup Validations                       March 13, 2019 11:34:05 AM UTC      March 13, 2019 11:34:54 AM UTC      Success
Recovery Window validation               March 13, 2019 11:34:54 AM UTC      March 13, 2019 11:35:17 AM UTC      Success
Archivelog deletion policy configuration March 13, 2019 11:35:17 AM UTC      March 13, 2019 11:35:37 AM UTC      Success
Archive log backup                       March 13, 2019 11:35:37 AM UTC      March 13, 2019 11:38:41 AM UTC      Success
Backup TDE Wallet files to ObjectStore   March 13, 2019 11:38:41 AM UTC      March 13, 2019 11:38:42 AM UTC      Success
The created backup files are visible inside the object storage bucket.
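Since backup jobs run asynchronously, a script typically polls dbcli describe-job until the Status field reaches a terminal value. A sketch of that polling logic, with the dbcli invocation abstracted behind a caller-supplied fetch function (the function names and retry limits are assumptions for illustration):

```python
import re
import time

def job_status(describe_output: str) -> str:
    """Extract the Status field from `dbcli describe-job` text output."""
    match = re.search(r"^\s*Status:\s*(\S+)", describe_output, re.MULTILINE)
    return match.group(1) if match else "Unknown"

def wait_for_job(fetch, poll_seconds=30, max_polls=120):
    """Poll until the job reaches a terminal status.

    `fetch` is any zero-argument callable returning the text of
    `dbcli describe-job -i <job-id>` (for example a subprocess wrapper).
    """
    for _ in range(max_polls):
        status = job_status(fetch())
        if status in ("Success", "Failure"):
            return status
        time.sleep(poll_seconds)
    return "Timeout"
```

A fetch function could wrap a subprocess call to dbcli; it is kept abstract here so the sketch does not depend on dbcli being present.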

6. An RMAN backup report can be created using the DB ID. The report will list the log files, which contain finer details of the backup.
dbcli create-rmanbackupreport -i 690f6c09-7c48-4332-ab65-b86f932c1a33 -w detailed -rn archlogreport
{
  "jobId" : "20f74edf-dcbf-4bd7-89cd-51358d2ab7d6",
  "status" : "Created",
  "message" : "Rman BackupReport creation.",
  "reports" : [ ],
  "createTimestamp" : "March 13, 2019 11:44:45 AM UTC",
  "resourceList" : [ {
    "resourceId" : "34dbed6a-4182-4fb5-a1b8-8918a799485f",
    "resourceType" : "Report",
    "jobId" : "20f74edf-dcbf-4bd7-89cd-51358d2ab7d6",
    "updatedTime" : "March 13, 2019 11:44:45 AM UTC"
  } ],
  "description" : "Create detailed Backup Report ",
  "updatedTime" : "March 13, 2019 11:44:45 AM UTC"
}


dbcli list-rmanbackupreports

ID                                       Name                 ReportType DbId                                     createTime                          updatedTime
---------------------------------------- -------------------- ---------- ---------------------------------------- ----------------------------------- -----------------------------------
34dbed6a-4182-4fb5-a1b8-8918a799485f     archlogreport        detailed   690f6c09-7c48-4332-ab65-b86f932c1a33     March 13, 2019 11:44:45 AM UTC      March 13, 2019 11:45:36 AM UTC

dbcli describe-rmanbackupreport -i 34dbed6a-4182-4fb5-a1b8-8918a799485f
Backup Report details
----------------------------------------------------------------
                     ID: 34dbed6a-4182-4fb5-a1b8-8918a799485f
            Report Type: detailed
               Location: Node apdb: /opt/oracle/dcs/log/apdb/rman/bkup/apcdb_fra2g6/rman_list_backup_detail/2019-03-13/rman_list_backup_detail_2019-03-13_11-45-20.0508.log
            Database ID: 690f6c09-7c48-4332-ab65-b86f932c1a33
            CreatedTime: March 13, 2019 11:44:45 AM UTC

ID                                       Description                                                                 Created                             Status
---------------------------------------- --------------------------------------------------------------------------- ----------------------------------- ----------
b5d96197-91ec-4319-abf7-09b39d70bafb     Create Regular-L0 Backup with TAG-auto for Db:apcdb in OSS:orabaks          March 13, 2019 12:01:36 PM UTC      Success
8bc91a59-2bee-4b9e-aa59-b73048adca38     Create Archivelog Backup with TAG-auto for Db:apcdb in OSS:orabaks          March 13, 2019 12:53:59 PM UTC      Success

A backup report can be deleted using the following command.
dbcli delete-rmanbackupreport -i 34dbed6a-4182-4fb5-a1b8-8918a799485f
To remove a backup configuration that is assigned to a DB, first update the DB with a different backup configuration. Then use the following command to delete the backup configuration by specifying its ID.
dbcli delete-backupconfig -i 09c01611-6a58-49d1-b9de-553594b73cfc
To delete an object store swift endpoint use the following command, specifying the object store swift ID.
dbcli delete-objectstoreswift -i cb78ef29-a447-4b2d-af17-5b110a6717cb



Recover using DBCLI
The following steps show a point-in-time recovery using DBCLI, simulating recovery from an accidental drop of a table. DBCLI also provides other recovery types (Latest|SCN).
1. The table is dropped using the purge option.
SQL> conn asanga/asanga#1234$ABC@pdbsrvx
Connected.
SQL> ! date
Wed Mar 13 12:58:37 UTC 2019

SQL> drop table x purge;
Table dropped.
2. Use the recover-database command to execute a PITR. The recovery timestamp (specified by -r) is just before the table was dropped.
dbcli recover-database -i 690f6c09-7c48-4332-ab65-b86f932c1a33 -r '03/13/2019 12:58:35' -t PITR
{
  "jobId" : "037b6bfd-1bc3-4c2e-af5e-1a22dbc05189",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "March 13, 2019 13:01:57 PM UTC",
  "resourceList" : [ ],
  "description" : "Create recovery-pitr : time '03/13/2019 12:58:35' for db : apcdb",
  "updatedTime" : "March 13, 2019 13:01:57 PM UTC"
}
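The -r value uses the MM/DD/YYYY HH:MM:SS format. When driving this from a script, the timestamp can be built with strftime (the format string is inferred from the command above):

```python
from datetime import datetime

# The moment just before the table was dropped (from the `date` output above)
recover_to = datetime(2019, 3, 13, 12, 58, 35)

# Format expected by `dbcli recover-database -r`, inferred from the example above
r_arg = recover_to.strftime("%m/%d/%Y %H:%M:%S")
print(r_arg)  # 03/13/2019 12:58:35
```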
Monitor the recovery using the recovery job id given above.
dbcli describe-job -i 037b6bfd-1bc3-4c2e-af5e-1a22dbc05189

Job details
----------------------------------------------------------------
                     ID:  037b6bfd-1bc3-4c2e-af5e-1a22dbc05189
            Description:  Create recovery-pitr : time '03/13/2019 12:58:35' for db : apcdb
                 Status:  Success
                Created:  March 13, 2019 1:01:57 PM UTC
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Database recovery validation             March 13, 2019 1:02:08 PM UTC       March 13, 2019 1:03:05 PM UTC       Success
Database recovery                        March 13, 2019 1:03:05 PM UTC       March 13, 2019 1:05:54 PM UTC       Success
Enable block change tracking             March 13, 2019 1:05:54 PM UTC       March 13, 2019 1:05:56 PM UTC       Success
Database opening                         March 13, 2019 1:05:56 PM UTC       March 13, 2019 1:06:53 PM UTC       Success
Database restart                         March 13, 2019 1:06:53 PM UTC       March 13, 2019 1:08:55 PM UTC       Success
Recovery metadata persistance            March 13, 2019 1:08:55 PM UTC       March 13, 2019 1:08:55 PM UTC       Success
Verify the table is recovered.
SQL> desc x
 Name     Null?    Type
 -------- ------- ----------------------------
 A                NUMBER

Tuesday, March 26, 2019

Updating the OS and Patching OCI PaaS DB Using DBCLI

OCI provides a command line interface (DBCLI) for managing the PaaS DB besides the console. This post shows the steps for updating the OS and patching the DB system and the DB using DBCLI.

Updating the OS
To update the OS the database server needs access to the YUM repository. If the DB system is in a private subnet, configure a NAT gateway to access the YUM repository. It is assumed that a NAT gateway is configured and the DB server can access the internet, or that egress traffic is allowed at least to the YUM repository location.
1. Check the region where the DB server resides.
# curl -s http://169.254.169.254/opc/v1/instance/ |grep region
  "region" : "eu-frankfurt-1",
2. In this case the region is eu-frankfurt-1. Replace the region name in the URL below with the actual region and download the yum repo file.
https://swiftobjectstorage.region_name.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches/oci_dbaas_ol6repo

# wget https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches/oci_dbaas_ol6repo -O /tmp/oci_dbaas_ol6repo
--2019-03-12 10:14:05--  https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches/oci_dbaas_ol6repo
...
Saving to: `/tmp/oci_dbaas_ol6repo'
3. Get the version lock file. The version lock file is needed to control the update; an update will occur only if the versionlock file has a valid update available to apply to the DB system.
# wget https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches/versionlock.list -O /tmp/versionlock.list
--2019-03-12 10:14:37--  https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches/versionlock.list
...
Saving to: `/tmp/versionlock.list'
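The two download URLs follow the same pattern and differ only in the region name. A sketch of deriving them from the region returned by the metadata query (the region value here is the one from this post; substitute your own):

```python
# Region as reported by the instance metadata query above
region = "eu-frankfurt-1"

base = ("https://swiftobjectstorage." + region +
        ".oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches")
repo_url = base + "/oci_dbaas_ol6repo"   # yum repo file
lock_url = base + "/versionlock.list"    # version lock file
print(repo_url)
print(lock_url)
```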
4. Copy the downloaded files to their respective locations on the DB server.
cp /tmp/oci_dbaas_ol6repo /etc/yum.repos.d/ol6.repo

cp /tmp/versionlock.list /etc/yum/pluginconf.d/versionlock.list
5. Open the yum repository file and enable the repository for the region. During this testing it was found that the Frankfurt region repository was enabled by default.

6. Note the current kernel version and run the update command.
uname -r
4.1.12-124.20.3.el6uek.x86_64

yum update
Any warnings similar to the following can be ignored.
Warning: RPMDB altered outside of yum.
** Found 3 pre-existing rpmdb problem(s), 'yum check' output follows:
oda-hw-mgmt-18.3.1.0.0_LINUX.X64_181212-1.x86_64 has missing requires of libnfsodm18.so()(64bit)
oda-hw-mgmt-18.3.1.0.0_LINUX.X64_181212-1.x86_64 has missing requires of perl(GridDefParams)
oda-hw-mgmt-18.3.1.0.0_LINUX.X64_181212-1.x86_64 has missing requires of perl(s_GridSteps)
Also, towards the end of the update, the following error appears.
ol6_UEKR4/filelists_db                                                                                                                                                                                                | 1.6 MB     00:00
Uploading /boot/vmlinuz-4.1.12-124.25.1.el6uek.x86_64 to http://169.254.0.3/kernel

Error uploading /kernel: 502 - <html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
</body>
</html>
In spite of this error the kernel gets updated, as shown below.

7. Once the update is complete, reboot the server and check the kernel version.
# reboot
# uname -r
4.1.12-124.25.1.el6uek.x86_64



Patching the PaaS DB
Patching the DB has two parts, similar to patching using the console: first patch the DB system (Grid Infrastructure), then the DB.

Updating the DBCLI
Before patching begins, check if there is a newer version of DBCLI and update DBCLI.
cliadm update-dbcli

Job details
----------------------------------------------------------------
                     ID:  d0a0e701-3c5a-4f72-a97e-455efd02cefd
            Description:  DcsCli patching
                 Status:  Created
                Created:  March 12, 2019 10:59:31 AM UTC
                Message:  Dcs cli will be updated
Wait a couple of minutes and check the outcome of the update, using the job ID given.
dbcli describe-job -i d0a0e701-3c5a-4f72-a97e-455efd02cefd

Job details
----------------------------------------------------------------
                     ID:  d0a0e701-3c5a-4f72-a97e-455efd02cefd
            Description:  DcsCli patching
                 Status:  Success
                Created:  March 12, 2019 10:59:31 AM UTC
                Message:

Task Name           Start Time                          End Time                            Status
------------------- -------------------------------     ----------------------------------- ----------
dcs-cli upgrade     March 12, 2019 10:59:32 AM UTC      March 12, 2019 10:59:34 AM UTC      Success
Check the currently available patches. This command lists the available patches for all versions.
# dbcli describe-latestpatch
componentType   availableVersion
--------------- --------------------
gi              12.2.0.1.190115
gi              12.1.0.2.190115
gi              18.5.0.0.190115
db              11.2.0.4.190115
db              12.2.0.1.190115
db              12.1.0.2.190115
db              18.5.0.0.190115
To list the patches available for the installed GI and DB, run the following command.
# dbcli describe-component
System Version
---------------
18.3.3.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
GI                                        18.3.0.0.180717       18.5.0.0.190115
DB                                        18.3.0.0.180717       18.5.0.0.190115
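In automation it can be handy to parse this table to decide whether patching is still needed. A sketch against the layout shown above (the helper name and parsing approach are illustrative assumptions, not a dbcli feature):

```python
def needs_patch(describe_component_output: str) -> dict:
    """Map component name to True when an update is available.

    Parses the Installed/Available columns of `dbcli describe-component`
    output in the layout shown above.
    """
    pending = {}
    for line in describe_component_output.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] in ("GI", "DB"):
            name, installed, available = parts
            pending[name] = available not in ("up-to-date", installed)
    return pending

sample = """Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
GI                                        18.3.0.0.180717       18.5.0.0.190115
DB                                        18.3.0.0.180717       18.5.0.0.190115"""
print(needs_patch(sample))  # {'GI': True, 'DB': True}
```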
Patching the DB System
To patch the DB system run update server command.
# dbcli update-server
{
  "jobId" : "058707c8-a9ec-48f0-ba47-47e5aa4f65c6",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "March 12, 2019 11:05:00 AM UTC",
  "resourceList" : [ ],
  "description" : "Server Patching",
  "updatedTime" : "March 12, 2019 11:05:01 AM UTC"
}
The dcs agent log can be used to monitor the patch apply in progress.
cd /opt/oracle/dcs/log
tail -100f dcs-agent.log
Another method is to use describe-job with the job ID.
dbcli describe-job -i "058707c8-a9ec-48f0-ba47-47e5aa4f65c6"

Job details
----------------------------------------------------------------
                     ID:  058707c8-a9ec-48f0-ba47-47e5aa4f65c6
            Description:  Server Patching
                 Status:  Running
                Created:  March 12, 2019 11:05:00 AM UTC
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Create Patching Repository Directories   March 12, 2019 11:05:01 AM UTC      March 12, 2019 11:05:01 AM UTC      Success
Download latest patch metadata           March 12, 2019 11:05:01 AM UTC      March 12, 2019 11:05:01 AM UTC      Success
Update Patching Repository               March 12, 2019 11:05:02 AM UTC      March 12, 2019 11:05:02 AM UTC      Running
Once the patching is complete the status will change to success.
dbcli describe-job -i "058707c8-a9ec-48f0-ba47-47e5aa4f65c6"

Job details
----------------------------------------------------------------
                     ID:  058707c8-a9ec-48f0-ba47-47e5aa4f65c6
            Description:  Server Patching
                 Status:  Success
                Created:  March 12, 2019 11:05:00 AM UTC
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Create Patching Repository Directories   March 12, 2019 11:05:01 AM UTC      March 12, 2019 11:05:01 AM UTC      Success
Download latest patch metadata           March 12, 2019 11:05:01 AM UTC      March 12, 2019 11:05:01 AM UTC      Success
Update Patching Repository               March 12, 2019 11:05:02 AM UTC      March 12, 2019 11:08:24 AM UTC      Success
Opatch updation                          March 12, 2019 11:08:25 AM UTC      March 12, 2019 11:08:30 AM UTC      Success
Patch conflict check                     March 12, 2019 11:08:31 AM UTC      March 12, 2019 11:10:31 AM UTC      Success
apply clusterware patch                  March 12, 2019 11:10:31 AM UTC      March 12, 2019 11:32:08 AM UTC      Success
Updating GiHome version                  March 12, 2019 11:32:08 AM UTC      March 12, 2019 11:32:35 AM UTC      Success
Update System version                    March 12, 2019 11:32:52 AM UTC      March 12, 2019 11:32:52 AM UTC      Success
Use the describe-component command again to see the patched version. Since the latest GI patch was applied, the available version column will state "up-to-date".
dbcli describe-component
System Version
---------------
18.3.3.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
GI                                        18.5.0.0.190115       up-to-date
DB                                         18.3.0.0.180717       18.5.0.0.190115
Patching the DB
Patching the DB happens by specifying the DB home to patch. For this the DB home ID is needed. To find the DB home ID use the list-dbhomes command.
dbcli list-dbhomes

ID                                       Name                 DB Version             Home Location                                 Status
---------------------------------------- -------------------- ---------------------- --------------------------------------------- ----------
0f59f70a-167b-4d8a-84e3-6e7d7529b093     OraDB18000_home1     18.3.0.0.180717        /u01/app/oracle/product/18.0.0.0/dbhome_1     Configured
Run the DB home patching command, specifying the DB home ID.
# dbcli update-dbhome -i 0f59f70a-167b-4d8a-84e3-6e7d7529b093
{
  "jobId" : "dae78453-e3a4-4b6e-ba6b-baaeb8902f2f",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "March 12, 2019 11:37:09 AM UTC",
  "resourceList" : [ ],
  "description" : "DB Home Patching: Home Id is 0f59f70a-167b-4d8a-84e3-6e7d7529b093",
  "updatedTime" : "March 12, 2019 11:37:09 AM UTC"
}
Use the describe-job command to find the status of the job.
# dbcli describe-job -i "dae78453-e3a4-4b6e-ba6b-baaeb8902f2f"

Job details
----------------------------------------------------------------
                     ID:  dae78453-e3a4-4b6e-ba6b-baaeb8902f2f
            Description:  DB Home Patching: Home Id is 0f59f70a-167b-4d8a-84e3-6e7d7529b093
                 Status:  Success
                Created:  March 12, 2019 11:37:09 AM UTC
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Create Patching Repository Directories   March 12, 2019 11:37:09 AM UTC      March 12, 2019 11:37:09 AM UTC      Success
Download latest patch metadata           March 12, 2019 11:37:09 AM UTC      March 12, 2019 11:37:09 AM UTC      Success
checking GiHome version                  March 12, 2019 11:37:09 AM UTC      March 12, 2019 11:37:09 AM UTC      Success
Update System version                    March 12, 2019 11:37:09 AM UTC      March 12, 2019 11:37:09 AM UTC      Success
Update Patching Repository               March 12, 2019 11:37:09 AM UTC      March 12, 2019 11:37:55 AM UTC      Success
Validating dbHome available space        March 12, 2019 11:37:55 AM UTC      March 12, 2019 11:37:55 AM UTC      Success
Opatch updation                          March 12, 2019 11:37:56 AM UTC      March 12, 2019 11:38:00 AM UTC      Success
Patch conflict check                     March 12, 2019 11:38:00 AM UTC      March 12, 2019 11:41:00 AM UTC      Success
db upgrade                               March 12, 2019 11:41:00 AM UTC      March 12, 2019 11:59:06 AM UTC      Success
SqlPatch upgrade                         March 12, 2019 11:59:06 AM UTC      March 12, 2019 12:02:14 PM UTC      Success
updating the Database version            March 12, 2019 12:02:29 PM UTC      March 12, 2019 12:02:43 PM UTC      Success
The describe-component command will now show that the DB is also up-to-date.
# dbcli describe-component
System Version
---------------
18.3.3.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
GI                                        18.5.0.0.190115       up-to-date
DB                                        18.5.0.0.190115       up-to-date
Querying the DB will show that the SQL patch registry is updated.
select PATCH_ID,PATCH_TYPE,ACTION,STATUS,SOURCE_VERSION,TARGET_VERSION,ACTION_TIME from dba_registry_sqlpatch;

  PATCH_ID PATCH_TYPE ACTION          STATUS                    SOURCE_VERSION  TARGET_VERSION  ACTION_TIME
---------- ---------- --------------- ------------------------- --------------- --------------- ------------------------------
  28090523 RU         APPLY           SUCCESS                   18.1.0.0.0      18.3.0.0.0      06-AUG-18 07.10.59.424957 PM
  27923415 INTERIM    APPLY           SUCCESS                   18.1.0.0.0      18.1.0.0.0      06-AUG-18 07.10.59.428991 PM
  28822489 RU         APPLY           SUCCESS                   18.3.0.0.0      18.5.0.0.0      12-MAR-19 11.55.48.034138 AM
  29143230 INTERIM    APPLY           SUCCESS                   18.5.0.0.0      18.5.0.0.0      12-MAR-19 12.00.42.612755 PM
The patched versions of the DB system and DB were not updated in the console view; the console showed the pre-patch versions for both. One way to rectify this was to run the patch from the console. This will not reapply the patch, but during the pre-patch check the applied patch is detected and the console updates itself with the current patch information.

Monday, March 18, 2019

RAC DB on OCI PaaS

Unlike with other cloud vendors, in OCI it is possible to create a 2-node RAC without any third-party software. The minimum virtual machine shape needed for creating a 2-node RAC is VM.Standard(x).2. Choosing such a shape automatically enables the node count spinner, allowing it to be changed to 2 nodes (currently the maximum number of nodes). Additionally, a cluster name can be specified. The rest of the steps are similar to creating a single-instance DB.
For this post an 18.3 RAC DB was created in a private subnet.
An app server in a public subnet in the same VCN is able to resolve the SCAN without any additional user configuration.
nslookup rac-scan.londbsubnet.lonvcn.oraclevcn.com
Server:         169.254.169.254
Address:        169.254.169.254#53

Non-authoritative answer:
Name:   rac-scan.londbsubnet.lonvcn.oraclevcn.com
Address: 172.21.0.7
Name:   rac-scan.londbsubnet.lonvcn.oraclevcn.com
Address: 172.21.0.8
Name:   rac-scan.londbsubnet.lonvcn.oraclevcn.com
Address: 172.21.0.6
The hostnames will be suffixed with node number when the cluster is created.
The subnet CIDR block is treated as the public network, and a second 192.168.* network is created for the private interconnect.
oifcfg getif
eth0  172.21.0.0  global  public
eth1  192.168.16.0  global  cluster_interconnect,asm
The /etc/oratab files on all nodes are empty of any ASM or DB instance information. These entries need to be added manually if needed.

The RAC configuration is role separated. The following roles exist for the oracle and grid users. There are no separate groups for backupdba, kmdba, dgdba, or racdba.
[opc@rac1 ~]$ id oracle
uid=101(oracle) gid=1001(oinstall) groups=1001(oinstall),1006(asmdba),1003(dba),1002(dbaoper)
[opc@rac1 ~]$ id grid
uid=102(grid) gid=1001(oinstall) groups=1001(oinstall),1006(asmdba),1005(asmoper),1004(asmadmin),1002(dbaoper)
There's no user equivalence between the nodes for the oracle or grid user.
cluvfy stage -post crsinst -n all

Verifying User Equivalence ...FAILED (PRVG-2019, PRKC-1191)
PRVF-4009 : User equivalence is not set for nodes: rac2

[oracle@rac1 ~]$ ssh rac2
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

[oracle@rac2 ~]$ ssh rac1
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

[grid@rac1 ~]$ ssh rac2
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

[grid@rac2 ~]$ ssh rac1
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
Oracle documentation mentions (this documentation may no longer be relevant) that user equivalence exists for the opc user. But that too did not exist.
[opc@rac1 ~]$ ssh rac2
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

[opc@rac2 ~]$ ssh rac1
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
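If user equivalence is needed (for example for cluvfy checks or manual patching), it can be set up manually. A minimal sketch for the oracle user, assuming the node names used in this post:

```shell
# Generate a passwordless RSA key if one does not already exist
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -q -t rsa -N '' -f "$HOME/.ssh/id_rsa"
# Copy the public key to the other node and verify (run from rac1, then repeat from rac2):
#   ssh-copy-id oracle@rac2
#   ssh rac2 hostname
```

The same steps would be repeated for the grid user if grid user equivalence is required.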


The cluster does not use GNS. Below is the full list of resources at the end of the cluster creation.
Resource Name                       Type                           Target             State              Host
-------------                       ------                         -------            --------           ----------
ora.ASMNET1LSNR_ASM.lsnr            ora.asm_listener.type          ONLINE             ONLINE             rac1
ora.ASMNET1LSNR_ASM.lsnr            ora.asm_listener.type          ONLINE             ONLINE             rac2
ora.DATA.COMMONSTORE.advm           ora.volume.type                ONLINE             ONLINE             rac1
ora.DATA.COMMONSTORE.advm           ora.volume.type                ONLINE             ONLINE             rac2
ora.DATA.dg                         ora.diskgroup.type             ONLINE             ONLINE             rac1
ora.DATA.dg                         ora.diskgroup.type             ONLINE             ONLINE             rac2
ora.LISTENER.lsnr                   ora.listener.type              ONLINE             ONLINE             rac1
ora.LISTENER.lsnr                   ora.listener.type              ONLINE             ONLINE             rac2
ora.LISTENER_SCAN1.lsnr             ora.scan_listener.type         ONLINE             ONLINE             rac1
ora.LISTENER_SCAN2.lsnr             ora.scan_listener.type         ONLINE             ONLINE             rac2
ora.LISTENER_SCAN3.lsnr             ora.scan_listener.type         ONLINE             ONLINE             rac2
ora.RECO.dg                         ora.diskgroup.type             ONLINE             ONLINE             rac1
ora.RECO.dg                         ora.diskgroup.type             ONLINE             ONLINE             rac2
ora.asm                             ora.asm.type                   ONLINE             ONLINE             rac1
ora.asm                             ora.asm.type                   ONLINE             ONLINE             rac2
ora.cdbrac_lhr1fq.db                ora.database.type              ONLINE             ONLINE             rac1
ora.cdbrac_lhr1fq.db                ora.database.type              ONLINE             ONLINE             rac2
ora.cvu                             ora.cvu.type                   ONLINE             ONLINE             rac2
ora.data.commonstore.acfs           ora.acfs.type                  ONLINE             ONLINE             rac1
ora.data.commonstore.acfs           ora.acfs.type                  ONLINE             ONLINE             rac2
ora.net1.network                    ora.network.type               ONLINE             ONLINE             rac1
ora.net1.network                    ora.network.type               ONLINE             ONLINE             rac2
ora.ons                             ora.ons.type                   ONLINE             ONLINE             rac1
ora.ons                             ora.ons.type                   ONLINE             ONLINE             rac2
ora.proxy_advm                      ora.proxy_advm.type            ONLINE             ONLINE             rac1
ora.proxy_advm                      ora.proxy_advm.type            ONLINE             ONLINE             rac2
ora.qosmserver                      ora.qosmserver.type            OFFLINE            OFFLINE
ora.rac1.vip                        ora.cluster_vip_net1.type      ONLINE             ONLINE             rac1
ora.rac2.vip                        ora.cluster_vip_net1.type      ONLINE             ONLINE             rac2
ora.scan1.vip                       ora.scan_vip.type              ONLINE             ONLINE             rac1
ora.scan2.vip                       ora.scan_vip.type              ONLINE             ONLINE             rac2
ora.scan3.vip                       ora.scan_vip.type              ONLINE             ONLINE             rac2
Similar to a single instance DB, the RAC DB consists of two ASM disk groups. The OCR and voting disk are stored in the DATA disk group, which is also the OCR backup location.
SQL> select name,total_mb,free_Mb from v$asm_diskgroup;

NAME                             TOTAL_MB    FREE_MB
------------------------------ ---------- ----------
DATA                               262144     253408
RECO                               262144     257380

cat /etc/oracle/ocr.loc
#Device/file +DATA getting replaced by device +DATA/dbSyslw4gbgsa/OCRFILE/registry.255.1002113311
ocrconfig_loc=+DATA/dbSyslw4gbgsa/OCRFILE/registry.255.1002113311


[grid@rac1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   1e4fd3b7b9364fdfbf4dbf41ffde36d9 (/dev/DATADISK3) [DATA]
 
ocrconfig -showbackuploc
The Oracle Cluster Registry backup location is [+DATA].
ASM disks are SCSI attached and permissions are set with udev rules.
SQL> select name,path from v$asm_disk;

NAME                           PATH
------------------------------ ------------------------------
RECODISK4                      /dev/RECODISK4
RECODISK3                      /dev/RECODISK3
RECODISK2                      /dev/RECODISK2
RECODISK1                      /dev/RECODISK1
DATA_0003                      /dev/DATADISK4
DATA_0000                      /dev/DATADISK3
DATA_0001                      /dev/DATADISK2
DATA_0002                      /dev/DATADISK1


[grid@rac1 admin]$ ls -l /dev/DATA*
lrwxrwxrwx 1 root root 3 Mar  5 14:58 /dev/DATADISK1 -> sdb
lrwxrwxrwx 1 root root 3 Mar  5 14:58 /dev/DATADISK2 -> sdc
lrwxrwxrwx 1 root root 3 Mar  5 14:58 /dev/DATADISK3 -> sdd
lrwxrwxrwx 1 root root 3 Mar  5 14:58 /dev/DATADISK4 -> sde


[grid@rac1 admin]$ ls -l /dev/disk/by-id/ | egrep "sdb|sdc|sdd|sde"
lrwxrwxrwx 1 root root  9 Mar  5 14:59 scsi-36012159faf2d4794be5ab57d8a5b7498 -> ../../sdc
lrwxrwxrwx 1 root root 10 Mar  5 12:32 scsi-36012159faf2d4794be5ab57d8a5b7498-part1 -> ../../sdc1
lrwxrwxrwx 1 root root  9 Mar  5 14:59 scsi-36022a3f8dffa46b4a57049af3a2dd275 -> ../../sdb
lrwxrwxrwx 1 root root 10 Mar  5 12:32 scsi-36022a3f8dffa46b4a57049af3a2dd275-part1 -> ../../sdb1
lrwxrwxrwx 1 root root  9 Mar  5 14:59 scsi-360748b62328c4ff6bb8fe3e9e4f04826 -> ../../sdd
lrwxrwxrwx 1 root root 10 Mar  5 12:32 scsi-360748b62328c4ff6bb8fe3e9e4f04826-part1 -> ../../sdd1
lrwxrwxrwx 1 root root  9 Mar  5 14:59 scsi-360923cd17c134686829d791e2b4ff82f -> ../../sde
lrwxrwxrwx 1 root root 10 Mar  5 12:32 scsi-360923cd17c134686829d791e2b4ff82f-part1 -> ../../sde1

cat /etc/udev/rules.d/70-names.rules
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="36022a3f8dffa46b4a57049af3a2dd275", SYMLINK+="DATADISK1",  OWNER="grid",  GROUP="asmadmin",  MODE="0660"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="36012159faf2d4794be5ab57d8a5b7498", SYMLINK+="DATADISK2",  OWNER="grid",  GROUP="asmadmin",  MODE="0660"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360748b62328c4ff6bb8fe3e9e4f04826", SYMLINK+="DATADISK3",  OWNER="grid",  GROUP="asmadmin",  MODE="0660"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360923cd17c134686829d791e2b4ff82f", SYMLINK+="DATADISK4",  OWNER="grid",  GROUP="asmadmin",  MODE="0660"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360bc2c20ddc145ceafa98bc1bb1532d3", SYMLINK+="RECODISK1",  OWNER="grid",  GROUP="asmadmin",  MODE="0660"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="3606feb49caf045598f66042c2831ad38", SYMLINK+="RECODISK2",  OWNER="grid",  GROUP="asmadmin",  MODE="0660"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360ad6aca673e4d0a8d2d52aded3ee466", SYMLINK+="RECODISK3",  OWNER="grid",  GROUP="asmadmin",  MODE="0660"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360f2aa11901e4736b65d54f96f000dcf", SYMLINK+="RECODISK4",  OWNER="grid",  GROUP="asmadmin",  MODE="0660"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="3606e328f14b14ddf9042d0eca9038593", SYMLINK+="localdisk",  OWNER="grid",  GROUP="asmadmin",  MODE="0660"
Manual intervention is needed to set up time synchronization between the two nodes. Although the NTP service is running, it is not synchronized with an external time source.
Verifying Clock Synchronization ...FAILED
  Verifying Network Time Protocol (NTP) ...FAILED
    Verifying NTP daemon is synchronized with at least one external time source
    ...FAILED
    rac1: PRVG-13602 : NTP daemon is not synchronized with any external time
          source on node "rac1".
As the ntp.conf file is present, CTSS runs in observer mode.
[grid@rac1 ~]$ crsctl check ctss
CRS-4700: The Cluster Time Synchronization Service is in Observer mode.
This is also mentioned in the Oracle documentation, where it is recommended "that you configure NTP on both nodes in a 2-node RAC DB System to synchronize time across the nodes. If you do not configure NTP, then Oracle Clusterware configures and uses the Cluster Time Synchronization Service (CTSS), and the cluster time might be out-of-sync with applications that use NTP for time synchronization".
The ntpd configuration file is missing the -x option, which prevents the clock from being set backwards.
cat /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g"
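A minimal sketch of adding the option with sed, demonstrated here on a local copy of the file; on the DB nodes the same edit would be made to /etc/sysconfig/ntpd as root, followed by an ntpd restart:

```shell
# Work on a local copy of the ntpd options file
printf 'OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g"\n' > ntpd.example
# Insert -x (slew corrections instead of stepping the clock) into the OPTIONS line
sed -i 's/^OPTIONS="/OPTIONS="-x /' ntpd.example
cat ntpd.example
# OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid -g"
```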
Once time synchronization is established, it can be verified with
cluvfy comp clocksync
Though the cluster is 18c, the Oracle home is a read/write home, not the read-only home introduced in 18c.
echo $ORACLE_HOME
/u01/app/oracle/product/18.0.0.0/dbhome_1
[oracle@rac2 bin]$ ./orabasehome
/u01/app/oracle/product/18.0.0.0/dbhome_1
The RAC DB created has two log groups per thread.
SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME     CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- --------- ------------ --------- ----------
         1          1          3 1073741824        512          1 YES INACTIVE               1262776 05-MAR-19      1264208 05-MAR-19          0
         2          1          4 1073741824        512          1 NO  CURRENT                1264244 05-MAR-19   1.8447E+19 05-MAR-19          0
         3          2          5 1073741824        512          1 NO  CURRENT                1262779 05-MAR-19   1.8447E+19                    0
         4          2          4 1073741824        512          1 YES INACTIVE               1242884 05-MAR-19      1262779 05-MAR-19          0
The RAC DB is an administrator managed DB.
srvctl config database -db cdbrac_lhr1fq -a
Database unique name: cdbrac_lhr1fq
Database name: cdbrac
Oracle home: /u01/app/oracle/product/18.0.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/CDBRAC_LHR1FQ/PARAMETERFILE/spfile.269.1002115587
Password file: +DATA/CDBRAC_LHR1FQ/PASSWORD/pwdcdbrac_lhr1fq.259.1002114845
Domain: londbsubnet.lonvcn.oraclevcn.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: RECO,DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
Database is enabled
Database is individually enabled on nodes:
Database is individually disabled on nodes:
OSDBA group: dba
OSOPER group: dbaoper
Database instances: cdbrac1,cdbrac2
Configured nodes: rac1,rac2
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed
Both DB System patching and DB patching happen in a rolling fashion. DB versions where the OJVM patch is needed (a non-rolling patch) may require a total outage, but at least in this case with 18c the patch was applied in a rolling fashion.


Similar to a single instance DB, a RAC DB can also be associated with a data guard configuration. The London region where the initial RAC DB was created had an issue creating a data guard association, and Oracle support confirmed that there are intermittent issues when creating a data guard for a RAC DB. However, it was possible to create a RAC DB with a data guard association in the Frankfurt region. The following screenshot is of this new RAC DB (as such some names are different).
The DG protection mode is maximum performance.
DGMGRL> show configuration

Configuration - raccdb_fra2bg_raccdb_fra359

  Protection Mode: MaxPerformance
  Members:
  raccdb_fra2bg - Primary database
    raccdb_fra359 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 19 seconds ago)
Primary Database
DGMGRL> show database raccdb_fra2bg

Database - raccdb_fra2bg

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON
  Instance(s):
    raccdb1
    raccdb2

Database Status:
SUCCESS
The standby database has both instances up and running, with one being the apply instance.
DGMGRL> show database raccdb_fra359

Database - raccdb_fra359

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 1.00 KByte/s
  Real Time Query:    ON
  Instance(s):
    raccdb1 (apply instance)
    raccdb2

Database Status:
SUCCESS
Switchover can be performed from the console.
DGMGRL> show configuration

Configuration - raccdb_fra2bg_raccdb_fra359

  Protection Mode: MaxPerformance
  Members:
  raccdb_fra359 - Primary database
    raccdb_fra2bg - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 58 seconds ago)

Related Posts
Data Guard on OCI PaaS DB

Thursday, March 7, 2019

Changing ORACLE_BASE, ORACLE_HOME and GI_HOME in an Oracle Restart Setup

This post gives the steps for changing the locations of ORACLE_BASE, ORACLE_HOME, the Oracle inventory and the Grid Infrastructure home in an Oracle Restart setup. The current and new paths for these items are given in the table below.

Item              Current Location                          Future Location
ORACLE_BASE       /opt/app/oracle                           /u01/app/oracle
ORACLE_HOME       /opt/app/oracle/product/11.2.0/dbhome_1   /u01/app/oracle/product/11.2.0/dbhome_1
GI_HOME           /opt/app/oracle/product/12.1.0/grid       /u01/app/oracle/product/12.1.0/grid
Oracle Inventory  /opt/app/oraInventory                     /u01/app/oraInventory

As can be seen from the locations, the GI home is 12.1 while the DB runs out of an 11.2 (11.2.0.4) home.
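When switching between the two homes during the steps below, the environment can be pointed at the relevant home first. A sketch for the grid user, using the new GI home path from the table above:

```shell
# Point the grid user's environment at the new base and GI home
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/grid
export PATH=$ORACLE_HOME/bin:$PATH
```

For the oracle user the same pattern applies with the 11.2 dbhome_1 path instead.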

1. It is assumed the mount point /u01 exists. Create the two base directories first, that is the Oracle base and Oracle inventory directories, and set the necessary permissions.
# cd /u01/

# mkdir -p app/oracle
# mkdir -p app/oraInventory

# chmod 775 app/oracle
# chmod 770 app/oraInventory

# chown oracle:oinstall app/oracle
# chown grid:oinstall app/oraInventory
2. Stop the database and the HA service.
srvctl stop database -d westdb
crsctl stop has
3. Detach the current GI home from the inventory.
[grid@west bin]$ ./runInstaller -silent -waitforcompletion -detachHome ORACLE_HOME='/opt/app/oracle/product/12.1.0/grid'
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4097 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'DetachHome' was successful.
Verify the GI home was removed from the inventory by checking inventory.xml.
<HOME NAME="OraGI12Home1" LOC="/opt/app/oracle/product/12.1.0/grid" TYPE="O" IDX="1" REMOVED="T"/>
4. Create the future GI Home location.
mkdir -p /u01/app/oracle/product/12.1.0/
Copy the current grid folder to the future location.
cd /opt/app/oracle/product/12.1.0
cp -pR grid /u01/app/oracle/product/12.1.0/
5. Clone the GI Home in the new location. Pass the new Oracle base, GI home and oracle inventory locations to the clone script.
cd /u01/app/oracle/product/12.1.0/grid/clone/bin
$ perl clone.pl -silent ORACLE_BASE=/u01/app/oracle ORACLE_HOME=/u01/app/oracle/product/12.1.0/grid 
ORACLE_HOME_NAME=OraGI12Home1 INVENTORY_LOCATION=/u01/app/oraInventory CRS=true

./runInstaller -clone -waitForCompletion  "ORACLE_BASE=/u01/app/oracle" "ORACLE_HOME=/u01/app/oracle/product/12.1.0/grid" "ORACLE_HOME_NAME=OraGI12Home1" "INVENTORY_LOCATION=/u01/app/oraInventory" -silent -paramFile /u01/app/oracle/product/12.1.0/grid/clone/clone_oraparam.ini
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 17853 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 4097 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2019-03-07_06-42-57PM. Please wait ...You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2019-03-07_06-42-57PM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........Could not backup file /u01/app/oracle/product/12.1.0/grid/root.sh to /u01/app/oracle/product/12.1.0/grid/root.sh.ouibak
Could not backup file /u01/app/oracle/product/12.1.0/grid/rootupgrade.sh to /u01/app/oracle/product/12.1.0/grid/rootupgrade.sh.ouibak

Copy files in progress.

Copy files successful.

Link binaries in progress.

Link binaries successful.

Setup files in progress.

Setup files successful.

Setup Inventory in progress.

Setup Inventory successful.

Finish Setup successful.
The cloning of OraGI12Home1 was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2019-03-07_06-42-57PM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   95% Done.

As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/oracle/product/12.1.0/grid/root.sh



..................................................   100% Done.
You have new mail in /var/spool/mail/grid
6. After running orainstRoot.sh, the inventory location gets updated in /etc/oraInst.loc.
/u01/app/oraInventory/orainstRoot.sh

cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
7. Running root.sh generates a log file which contains the command to run to create an Oracle restart setup or a cluster setup.
Check /u01/app/oracle/product/12.1.0/grid/install/root_west.domain.net_2019-03-07_18-45-23.log for the output of root script

# tail -f /u01/app/oracle/product/12.1.0/grid/install/root_west.domain.net_2019-03-07_18-45-23.log
Now product-specific root actions will be performed.

To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user:
/u01/app/oracle/product/12.1.0/grid/perl/bin/perl -I/u01/app/oracle/product/12.1.0/grid/perl/lib -I/u01/app/oracle/product/12.1.0/grid/crs/install /u01/app/oracle/product/12.1.0/grid/crs/install/roothas.pl
8. Before running the command mentioned in the log file, the existing Oracle restart configuration needs to be de-configured. If not, the following error will occur.
Using configuration parameter file: /u01/app/oracle/product/12.1.0/grid/crs/install/crsconfig_params
2019/03/07 18:46:12 CLSRSC-350: Cannot configure two CRS instances on the same cluster

2019/03/07 18:46:14 CLSRSC-352: CRS is already configured on this node for the CRS home location /opt/app/oracle/product/12.1.0/grid
To de-configure run the following command.
# /u01/app/oracle/product/12.1.0/grid/perl/bin/perl roothas.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
2019/03/07 18:47:50 CLSRSC-337: Successfully deconfigured Oracle Restart stack
9. Run the command to create the Oracle restart setup.
# /u01/app/oracle/product/12.1.0/grid/perl/bin/perl -I/u01/app/oracle/product/12.1.0/grid/perl/lib -I/u01/app/oracle/product/12.1.0/grid/crs/install /u01/app/oracle/product/12.1.0/grid/crs/install/roothas.pl
Using configuration parameter file: /u01/app/oracle/product/12.1.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node west successfully pinned.
2019/03/07 18:48:26 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'

west     2019/03/07 13:18:58     /u01/app/oracle/product/12.1.0/grid/cdata/west/backup_20190307_131858.olr     459864538
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'west'
CRS-2673: Attempting to stop 'ora.evmd' on 'west'
CRS-2677: Stop of 'ora.evmd' on 'west' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'west' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2019/03/07 18:50:11 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
10. Check that the GI home has been added to the inventory in the new inventory location.
<HOME NAME="OraGI12Home1" LOC="/u01/app/oracle/product/12.1.0/grid" TYPE="O" IDX="1" CRS="true"/>
11. Add listener and ASM to the Oracle restart config.
srvctl add listener -l listener -o /u01/app/oracle/product/12.1.0/grid -p 1521
srvctl start listener -l listener
srvctl add asm -l listener -p +data/asm/ASMPARAMETERFILE/REGISTRY.253.881942589 -d "/dev/sd*"
srvctl start asm

 crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       west                     STABLE
ora.FLASH.dg
               ONLINE  ONLINE       west                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       west                     STABLE
ora.asm
               ONLINE  ONLINE       west                     Started,STABLE
ora.ons
               OFFLINE OFFLINE      west                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       west                     STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       west                     STABLE
--------------------------------------------------------------------------------


12. The next step is to move the Oracle home to the new location. As this is a role separated setup, write permission must be granted to the oracle user for the admin and diag directories inside the Oracle base.
cd $ORACLE_BASE
chmod 775 admin diag
Create the new Oracle home location and copy the current Oracle home (dbhome_1) to the new location.
# cd /u01/app/oracle/product
# mkdir -p 11.2.0

# cd /opt/app/oracle/product/11.2.0
# cp -pR dbhome_1 /u01/app/oracle/product/11.2.0/
13. Clone the DB Home
$ cd /u01/app/oracle/product/11.2.0/dbhome_1/clone/

/u01/app/oracle/product/11.2.0/dbhome_1/perl/bin/perl clone.pl ORACLE_BASE="/u01/app/oracle/" ORACLE_HOME="/u01/app/oracle/product/11.2.0/dbhome_1" OSDBA_GROUP=dba OSOPER_GROUP=oper -defaultHomeName

./runInstaller -clone -waitForCompletion  "ORACLE_BASE=/u01/app/oracle/" "ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1" "oracle_install_OSDBA=dba" "oracle_install_OSOPER=oper" -defaultHomeName  -defaultHomeName "CRS=true" -silent -noConfig -nowait
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4097 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2019-03-07_07-21-14PM. Please wait ...Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.

You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2019-03-07_07-21-14PM.log
.................................................................................................... 100% Done.

Installation in progress (Thursday, 7 March 2019 19:21:31 o'clock IST)
...........................................................................                                                     75% Done.
Install successful

Linking in progress (Thursday, 7 March 2019 19:21:43 o'clock IST)
Link successful

Setup in progress (Thursday, 7 March 2019 19:22:43 o'clock IST)
Setup successful

End of install phases.(Thursday, 7 March 2019 19:23:08 o'clock IST)
WARNING:
The following configuration scripts need to be executed as the "root" user.
/u01/app/oracle/product/11.2.0/dbhome_1/root.sh
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts

The cloning of OraHome1 was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2019-03-07_07-21-14PM.log' for more details.
Check that the inventory is updated with the new DB home location.
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2"/>
14. Update /etc/oratab with the new Oracle home location and add the database to the Oracle restart configuration.
srvctl add database -d westdb -o /u01/app/oracle/product/11.2.0/dbhome_1 -p +DATA/westdb/spfilewestdb.ora -a "data,flash"
srvctl start database -d westdb
Certain init parameters will still refer to paths under the previous Oracle base. Change them to reflect the current Oracle base.
show parameter diagnostic_dest
diagnostic_dest                      string      /opt/app/oracle

alter system set diagnostic_dest='/u01/app/oracle' scope=both;
alter system set audit_file_dest='/u01/app/oracle/admin/westdb/adump' scope=spfile;

mkdir -p /u01/app/oracle/admin/westdb/adump
15. Stop the database and the Oracle restart stack. Rename the old base location (/opt/app/) to something temporary (/opt/appx/) and start the database. If all steps were followed there shouldn't be any references to the previous location. The current Oracle base directory can also be found with orabase.
$ orabase
/u01/app/oracle
Once it is certain that no references to the old locations remain, they can be removed.
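Before deleting the renamed directory, a quick search helps confirm nothing still points at the old base. A sketch using the paths from this post:

```shell
# Recursively search the new homes for the old base path;
# print a note when nothing is found
grep -r '/opt/app' /u01/app/oracle/product 2>/dev/null \
  || echo "no references to /opt/app found"
```

Any hits (for example in listener.ora, tnsnames.ora or init files) should be corrected before the old directory is removed.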