Sunday, June 24, 2018

Using Oracle Connection Pools with ActiveMQ

The following bean entries can be used to create pooled JDBC connections from an ActiveMQ configuration.
To use an Oracle data source, add the following to activemq.xml.
 <bean id="oracle-ds" class="oracle.jdbc.pool.OracleDataSource" destroy-method="close">
    <property name="uRL" value="jdbc:oracle:thin:@192.168.0.86:1521/testdb"/>
    <property name="user" value="asanga"/>
    <property name="password" value="asa"/>
    <property name="connectionCachingEnabled" value="true"/>
<!--    <property name="fastConnectionFailoverEnabled" value="true"/> -->
<!--    <property name="oNSConfiguration" value="nodes=xxx:6200"/> -->
    <property name="connectionCacheProperties">
         <props>
              <prop key="InitialLimit">5</prop>
              <prop key="MinLimit">5</prop>
              <prop key="MaxLimit">10</prop>
        </props>
    </property>
  </bean>
Any other data source property can be added by camel-casing the setter method name without the "set" prefix. For example "setONSConfiguration" becomes "oNSConfiguration" as shown above.



To use the Oracle Universal Connection Pool (UCP), use the following.
  <bean id="oracle-ds" class="oracle.ucp.jdbc.PoolDataSourceFactory" factory-method="getPoolDataSource">
    <property name="connectionFactoryClassName" value="oracle.jdbc.pool.OracleDataSource"/>
    <property name="uRL" value="jdbc:oracle:thin:@192.168.0.86:1521/testdb"/>
    <property name="user" value="asanga"/>
    <property name="password" value="asa"/>
<!--     <property name="fastConnectionFailoverEnabled" value="true"/> -->
<!--     <property name="oNSConfiguration" value="nodes=xxxx:6200"/> -->
    <property name="minPoolSize" value="5"/>
    <property name="initialPoolSize" value="5"/>
    <property name="maxPoolSize" value="10"/>
    <property name="connectionProperties">
         <props>
              <prop key="oracle.jdbc.thinForceDNSLoadBalancing">true</prop>
        </props>
    </property>
  </bean>
Enable persistence with
        <persistenceAdapter>
            <jdbcPersistenceAdapter dataSource="#oracle-ds" lockKeepAlivePeriod="5000">
                 <locker>
                     <lease-database-locker lockAcquireSleepInterval="10000"/>
                </locker>
            </jdbcPersistenceAdapter>
         </persistenceAdapter>
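Once the broker is up, the pool can be verified from the database side by counting the broker's JDBC sessions. A minimal sanity check, assuming the credentials from the beans above (the program filter may need adjusting for your driver version):
sqlplus -s asanga/asa@192.168.0.86:1521/testdb <<'EOF'
-- with MinLimit/minPoolSize of 5, at least 5 sessions are expected
select count(*) from v$session where username = 'ASANGA' and program like 'JDBC%';
exit;
EOF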

Thursday, June 14, 2018

Remote Cloning a PDB with Encrypted Data

This post lists the steps for remote cloning a PDB that has encrypted data. The difference between this case and remote cloning a PDB without 12c TDE is the handling of the encryption key: the master key of the source PDB must be available to the cloned PDB. There are multiple ways of achieving this. This post shows two convenient ones to use when remote cloning PDBs with encrypted data.

Update 2018/07/24 : Method 1 shown below doesn't work on Oracle cloud, where TDE is enabled by default. This is due to bug 24763954, which is closed as not a bug. If remote cloning is done on Oracle cloud then use method 2 mentioned in this post. For more, refer to MOS notes 2228673.1, 2208792.1 and 2415131.1.

Method 1. Using one_step_plugin_for_pdb_with_tde parameter
According to the Advanced Security Guide, "when ONE_STEP_PLUGIN_FOR_PDB_WITH_TDE is set to TRUE, the database caches the keystore password in memory, obfuscated at the system level, and then uses it for the import operation. The default for ONE_STEP_PLUGIN_FOR_PDB_WITH_TDE is FALSE".
So in order to clone a PDB with encrypted data, simply set ONE_STEP_PLUGIN_FOR_PDB_WITH_TDE to TRUE and run the cloning operation.

1. The remote PDB has an encrypted tablespace
SQL> select tablespace_name,encrypted from dba_tablespaces where encrypted='YES' ;

TABLESPACE_NAME                ENC
------------------------------ ---
ENCTEST                        YES

SQL> select t.name,ENCRYPTIONALG,STATUS FROM V$ENCRYPTED_TABLESPACES  e, v$tablespace t where e.ts#=t.ts#;

NAME       ENCRYPT STATUS
---------- ------- ----------
ENCTEST    AES128  NORMAL
2. Create a keystore (encryption wallet) at the CDB root where the clone will be created. Without it the cloning will fail. Creating a wallet is shown in a previous post.
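For reference, a minimal sketch of the keystore creation at the local CDB root, assuming the keystore location is already configured in sqlnet.ora (the path and password here are illustrative):
sqlplus / as sysdba <<'EOF'
-- create and open the keystore, then set a master key for the CDB
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/opt/oracle/admin/oracdb/wallet' IDENTIFIED BY asanga123;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY asanga123;
ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY asanga123 WITH BACKUP;
exit;
EOF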

3. Set ONE_STEP_PLUGIN_FOR_PDB_WITH_TDE to TRUE
ALTER SYSTEM SET one_step_plugin_for_pdb_with_tde=TRUE SCOPE=BOTH;
4. Run the remote clone operation. Steps for remote cloning are available in a previous post.
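The clone command below assumes a database link from the local CDB to the remote CDB. A minimal sketch of its creation (the common user, password and connect string are illustrative; the remote user needs the CREATE PLUGGABLE DATABASE privilege):
sqlplus / as sysdba <<'EOF'
CREATE DATABASE LINK PDB1K_LINK CONNECT TO c##clone_user IDENTIFIED BY clonepass USING '192.168.0.93:1521/cxcdb';
exit;
EOF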
create pluggable database mypdb from cxpdb@PDB1K_LINK 
file_name_convert=('/opt/oracle/oradata/cxcdb/cxpdb/','/opt/oracle/oradata/oracdb/mypdb/') ;

Pluggable database created.
5. Finally open the cloned PDB.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ORAPDB                         READ WRITE NO
         4 MYPDB                          MOUNTED


SQL> alter pluggable database mypdb open;

Pluggable database altered.


SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ORAPDB                         READ WRITE NO
         4 MYPDB                          READ WRITE NO
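As a sanity check, the encrypted tablespace should now appear in the clone just as it did in the source (a sketch):
sqlplus / as sysdba <<'EOF'
alter session set container=mypdb;
select tablespace_name, encrypted from dba_tablespaces where encrypted = 'YES';
exit;
EOF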
6. If no longer needed, set ONE_STEP_PLUGIN_FOR_PDB_WITH_TDE back to its default value of FALSE.
ALTER SYSTEM SET one_step_plugin_for_pdb_with_tde=FALSE SCOPE=BOTH;


Method 2. Using Key Store Password of the Local CDB
In this method the keystore password of the local CDB (the CDB where the clone PDB is created) is used in the clone command. As per the security guide, the encrypted data is still accessible because the master key of the remote PDB is copied over during the cloning. However, it's best to re-key after the cloning, as the original key information is not shown in the PDB's v$ views.

1. The same remote PDB is used for this example as well. It's also assumed the local CDB has a wallet already created.

2. Execute the remote cloning command at the CDB root, specifying the keystore password.
create pluggable database mypdb from cxpdb@PDB1K_LINK 
file_name_convert=('/opt/oracle/oradata/cxcdb/cxpdb/','/opt/oracle/oradata/oracdb/mypdb/') 
KEYSTORE IDENTIFIED BY  asanga123;

Pluggable database created.
3. Open the PDB and check the encryption key in the clone PDB's v$ view. As mentioned in the security guide, this returns no rows.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ORAPDB                         READ WRITE NO
         5 MYPDB                          READ WRITE NO

SQL> alter session set container=mypdb;

Session altered.

SQL> select CON_ID,KEY_ID,KEYSTORE_TYPE,CREATOR_DBNAME,CREATOR_PDBNAME from v$encryption_keys order by 1;

no rows selected
4. Run the following to re-key. The FORCE option is used due to bug 22826718. Refer to MOS note 1944507.1 for more.
ADMINISTER KEY MANAGEMENT SET KEY FORCE KEYSTORE IDENTIFIED BY asanga123 with backup;
 
 SQL> select CON_ID,KEY_ID,KEYSTORE_TYPE,CREATOR_DBNAME,CREATOR_PDBNAME from v$encryption_keys order by 1;

    CON_ID KEY_ID                                                  KEYSTORE_TYPE     CREATOR_DB CREATOR_PD
---------- ------------------------------------------------------- ----------------- ---------- ----------
         5 AXj5300QAE8Kv7cOn6U0xJ8AAAAAAAAAAAAAAAAAAAAAAAAAAAAA    SOFTWARE KEYSTORE oracdb     MYPDB

Friday, June 1, 2018

Create, Plug/Unplug, Patch, Export/Import and Backup Oracle DB in Docker

An earlier post showed the steps for installing docker on RHEL 7. This post shows the steps for creating an Oracle image and a database container, plus a few other maintenance tasks such as exporting/importing, patching and backing up the DB. To carry out the steps mentioned here it is assumed that docker is already installed and running.
Oracle officially supports (although limited to Severity 2) running Oracle single instances in Docker. For more on this refer to MOS notes 1921163.1 and 2216342.1.

    Creating an Oracle Database Image in Docker
    Creating the Oracle DB as Docker Container
    Patching Oracle DB in Docker
    Creating a new PDB in Docker
    Plug/Unplug PDB in Docker
    Export and Import From DB in Docker
    Using RMAN Backup in Docker
    Miscellaneous

Creating an Oracle Database Image in Docker
Docker files to create an Oracle image are available from GitHub. These can be downloaded directly using wget. In this case the docker files are downloaded to the /opt/git location.
mkdir -p /opt/git
cd /opt/git
wget https://github.com/oracle/docker-images/archive/master.zip
Unzip the downloaded master.zip file. This will extract a number of folders relating to various Oracle products. It is possible to remove all other products and leave only the OracleDatabase folder to save space.
unzip master.zip
cd /opt/git/docker-images-master
ls
CODEOWNERS  ContainerCloud  CONTRIBUTING.md  LICENSE  OracleDatabase  README.md
The OracleDatabase folder contains two sub-folders, one containing docker files for single instance and the other for RAC. The single instance folder has docker files for three versions: 11.2.0.2 (for XE), 12.1 and 12.2.
cd /opt/git/docker-images-master/OracleDatabase/SingleInstance/dockerfiles
11.2.0.2  12.1.0.2  12.2.0.1  buildDockerImage.sh
The docker file folders do not contain Oracle installers. In order to create a 12.2 DB, a 12.2 docker image must be built, for which an Oracle Linux 64-bit installer file is needed. Download this from the Oracle site and copy it to the 12.2.0.1 folder.
cp linuxx64_12201_database.zip /opt/git/docker-images-master/OracleDatabase/SingleInstance/dockerfiles/12.2.0.1/
Only make the changes below if OMF is preferred; if not, there's no need to make any of them, so proceed to the final step of building the image. The dbca template file (dbca.rsp.tmpl) that comes with the GitHub download is not set up for OMF. If OMF is desired for the database, modify the dbca template file at this point by adding db_create_file_dest (this could be done later as well, but that would require rebuilding the image and the DB).
initParams=audit_trail=none,audit_sys_operations=false,db_create_file_dest=/opt/oracle/oradata
When OMF is used, comment out the line that sets control_files in the createDB.sh file. From
sqlplus / as sysdba << EOF
   ALTER SYSTEM SET control_files='$ORACLE_BASE/oradata/$ORACLE_SID/control01.ctl' scope=spfile;
   ALTER PLUGGABLE DATABASE $ORACLE_PDB SAVE STATE;
   exit;
EOF
To
#sqlplus / as sysdba << EOF
#   ALTER SYSTEM SET control_files='$ORACLE_BASE/oradata/$ORACLE_SID/control01.ctl' scope=spfile;
sqlplus / as sysdba << EOF
   ALTER PLUGGABLE DATABASE $ORACLE_PDB SAVE STATE;
   exit;
EOF
If not, the control file name is set to control01.ctl and the container will not bring up the DB after a restart.
The final step is to run the buildDockerImage script. Here it is run with the minimum options, that is the version of the image (specified by -v) and the edition of the database (specified by -e).
cd /opt/git/docker-images-master/OracleDatabase/SingleInstance/dockerfiles/

./buildDockerImage.sh -v 12.2.0.1 -e
At the end of the build there will be two docker images: one with OEL 7-slim and another with 12.2 EE.
#docker images
REPOSITORY          TAG                    IMAGE ID            CREATED             SIZE
oracle/database     12.2.0.1-ee            e4ac99149312        12 days ago         13.2GB
oraclelinux         7-slim                 c94cc930790a        5 weeks ago         117MB
The next step is to create a docker container using the Oracle database image.

Creating the Oracle DB as Docker Container
Anything created inside a docker container lives only until that container is removed. This means any database data files created inside a docker container will be lost when that container is deleted. To overcome this, docker volumes are used. When used, volumes store the data files and configuration files, such as the spfile, tnsnames.ora and listener.ora, outside of the container. These files are preserved even when the docker container is removed.
In this case a database is created using a docker volume so that data is preserved when the container is removed. First create a directory on the external host (not inside the docker container) to store the data files. It is also important that this directory has read, write and execute permission for the others group. Without this permission the DB creation will fail. Here the chosen path is as below.
mkdir -p /opt/app/oracle/oradata/docker/oracdb
chmod 777 /opt/app/oracle/oradata/docker/oracdb
This external location is passed to the docker run command with the -v parameter. By default data files are created inside the container in /opt/oracle/oradata.
Next, the listener port and express port can be mapped to ports on the external host. This way it is possible to connect remotely to the database inside the docker container.
Finally, the default database names are ORCLCDB and ORCLPDB1, but it is possible to pass custom CDB and PDB names at creation time using the -e option with the relevant parameter name.
If an admin password isn't specified, one is auto-generated; it can be changed later using the setPassword.sh file. However, in this case the password is provided at creation time.
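For reference, a sketch of changing the password later (setPassword.sh is located in the container's /opt/oracle, as seen in the directory listing further below; the new password here is illustrative):
docker exec oracdb /opt/oracle/setPassword.sh newDockerPwd1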
Below is the full command to create the database (CDB and PDB). The container is also named oracdb (specified by --name), the same as the CDB name.
docker run --name oracdb -p 1521:1521 -p 5500:5500
 -v /opt/app/oracle/oradata/docker/oracdb:/opt/oracle/oradata 
-e "ORACLE_SID=oracdb" 
-e "ORACLE_PWD=dockerOra278" 
-e "ORACLE_PDB=orapdb" oracle/database:12.2.0.1-ee

ORACLE PASSWORD FOR SYS, SYSTEM AND PDBADMIN: dockerOra278

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 22-MAY-2018 10:52:09

Copyright (c) 1991, 2016, Oracle.  All rights reserved.

Starting /opt/oracle/product/12.2.0.1/dbhome_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.2.0.1.0 - Production
System parameter file is /opt/oracle/product/12.2.0.1/dbhome_1/network/admin/listener.ora
Log messages written to /opt/oracle/diag/tnslsnr/fffb64768822/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date                22-MAY-2018 10:52:10
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /opt/oracle/product/12.2.0.1/dbhome_1/network/admin/listener.ora
Listener Log File         /opt/oracle/diag/tnslsnr/fffb64768822/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
The listener supports no services
The command completed successfully
[WARNING] [DBT-10102] The listener configuration is not selected for the database. EM DB Express URL will not be accessible.
   CAUSE: The database should be registered with a listener in order to access the EM DB Express URL.
   ACTION: Select a listener to be registered or created with the database.
Copying database files
1% complete
13% complete
25% complete
Creating and starting Oracle instance
26% complete
30% complete
31% complete
35% complete
38% complete
39% complete
41% complete
Completing Database Creation
42% complete
43% complete
44% complete
46% complete
47% complete
50% complete
Creating Pluggable Databases
55% complete
75% complete
Executing Post Configuration Actions
100% complete
Look at the log file "/opt/oracle/cfgtoollogs/dbca/oracdb/oracdb.log" for further details.

SQL*Plus: Release 12.2.0.1.0 Production on Tue May 22 11:00:23 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>
System altered.

SQL>
Pluggable database altered.

SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
The Oracle base remains unchanged with value /opt/oracle
#########################
DATABASE IS READY TO USE!
#########################
The following output is now a tail of the alert.log:
Completed: alter pluggable database orapdb open
2018-05-22T11:00:22.727309+00:00
ORAPDB(3):CREATE SMALLFILE TABLESPACE "USERS" LOGGING  DATAFILE  '/opt/oracle/oradata/oracdb/orapdb/users01.dbf' SIZE 5M REUSE AUTOEXTEND ON NEXT  1280K MAXSIZE UNLIMITED  EXTENT MANAGEMENT LOCAL  SEGMENT SPACE MANAGEMENT  AUTO
ORAPDB(3):Completed: CREATE SMALLFILE TABLESPACE "USERS" LOGGING  DATAFILE  '/opt/oracle/oradata/oracdb/orapdb/users01.dbf' SIZE 5M REUSE AUTOEXTEND ON NEXT  1280K MAXSIZE UNLIMITED  EXTENT MANAGEMENT LOCAL  SEGMENT SPACE MANAGEMENT  AUTO
ORAPDB(3):ALTER DATABASE DEFAULT TABLESPACE "USERS"
ORAPDB(3):Completed: ALTER DATABASE DEFAULT TABLESPACE "USERS"
The existing container(s) can be viewed with
docker container ls -a
CONTAINER ID        IMAGE                                  COMMAND                  CREATED             STATUS                            PORTS                                            NAMES
4252ebebab2f        oracle/database:12.2.0.1-ee     "/bin/sh -c 'exec $O…"  6 days ago          Up 3 minutes (health: starting)   0.0.0.0:1521->1521/tcp, 0.0.0.0:5500->5500/tcp   oracdb
Since the listener port is exposed on the external host, it is possible to connect remotely to the DB.
sqlplus  sys/dockerOra278@localhost:1521/oracdb as sysdba

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ORAPDB                         READ WRITE NO
The datafile paths are based on the container directory structure (OMF wasn't used in this case).
SQL> select con_id,name from v$datafile order by 1;

    CON_ID NAME
---------- --------------------------------------------------
         1 /opt/oracle/oradata/oracdb/system01.dbf
         1 /opt/oracle/oradata/oracdb/sysaux01.dbf
         1 /opt/oracle/oradata/oracdb/undotbs01.dbf
         1 /opt/oracle/oradata/oracdb/users01.dbf
         2 /opt/oracle/oradata/oracdb/pdbseed/sysaux01.dbf
         2 /opt/oracle/oradata/oracdb/pdbseed/undotbs01.dbf
         2 /opt/oracle/oradata/oracdb/pdbseed/system01.dbf
         3 /opt/oracle/oradata/oracdb/orapdb/users01.dbf
         3 /opt/oracle/oradata/oracdb/orapdb/system01.dbf
         3 /opt/oracle/oradata/oracdb/orapdb/sysaux01.dbf
         3 /opt/oracle/oradata/oracdb/orapdb/undotbs01.dbf
However, the files will appear in the external host directory.
 pwd
/opt/app/oracle/oradata/docker/oracdb
ls -l
drwxr-xr-x. 3 54321 54321  20 May 22 12:00 dbconfig
drwxr-x---. 6 54321 54321 255 May 24 11:30 oracdb
As seen from the above output, the owners of the directories don't exist on the external host. This is the reason to allow permission for the others group.
To stop the container the following command can be used.
docker stop -t 30 oracdb
The -t option makes the container wait 30 seconds (a configurable value) before exiting, regardless of the state of the DB. It's expected that the database is cleanly shut down (immediate) within 30 seconds. If the DB shuts down before the 30 seconds are up, the container exits soon after and doesn't wait the full 30 seconds.
To start an existing container use
docker start oracdb
This will output the container name and return to the prompt. To get the log output run the following.
docker logs oracdb -f
If another docker container is to be created while the first one is running, this requires different port numbers to be used for the listener and express ports.
 docker run --name mycdb -p 1522:1521 -p 5501:5500 ...
In this case, access to the DB from outside the container should use port 1522.
 sqlplus  sys/dockerOra278@localhost:1522/mycdb as sysdba
Patching Oracle DB in Docker
Patching an Oracle DB in docker involves creating a new image with the patched binaries and then running the database (= docker container) using the image with the patches. For this to work the database must have been created using volumes; if not, the database will be lost when the container is removed.
The GitHub files include docker files for creating the patched image. These are available inside the samples folder, which is in the SingleInstance folder (GitHub master.zip unzipped in /opt/git).
/opt/git/docker-images-master/OracleDatabase/SingleInstance/samples/applypatch
The applypatch folder has two folders, for 12.1 and 12.2. Inside the 12.2 folder there's a patches folder to which patch files (including OPatch) must be copied.
/opt/git/docker-images-master/OracleDatabase/SingleInstance/samples/applypatch/12.2.0.1/patches
But this has to be done in a specific way, with a predefined directory structure. The zipped OPatch file must be copied to the patches folder.
cp p6880880_122010_Linux-x86-64.zip /opt/git/docker-images-master/OracleDatabase/SingleInstance/samples/applypatch/12.2.0.1/patches
The zipped patch files must be copied into individual directories named in a number sequence. In this case there's only one patch (27105253) and it is copied to a directory named 001.
cp p27105253_122010_Linux-x86-64.zip /opt/git/docker-images-master/OracleDatabase/SingleInstance/samples/applypatch/12.2.0.1/patches/001
If there's a second patch it goes into 002, and so on. The final state after the files are copied is as below.
pwd
docker-images-master/OracleDatabase/SingleInstance/samples/applypatch/12.2.0.1/patches

 ls *
applyPatches.sh  p6880880_122010_Linux-x86-64.zip

001:
p27105253_122010_Linux-x86-64.zip
Finally run the script to build the patched image. This script is available in the applypatch folder. It takes three arguments: one specifying the version (-v), a second for the edition (-e), and finally a patch label which will be used for tagging the image (-p).
applypatch]$ ./buildPatchedDockerImage.sh -v 12.2.0.1 -e -p 27105253
==========================
DOCKER version:
Client:
 Version:      18.03.1-ce
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   9ee9f40
 Built:        Thu Apr 26 07:20:16 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.03.1-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.5
  Git commit:   9ee9f40
  Built:        Thu Apr 26 07:23:58 2018
  OS/Arch:      linux/amd64
  Experimental: false
==========================
Building image 'oracle/database:12.2.0.1-ee-27105253' ...
Sending build context to Docker daemon  210.8MB
Step 1/9 : FROM oracle/database:12.2.0.1-ee
 ---> e4ac99149312
Step 2/9 : MAINTAINER Gerald Venzl 
 ---> Running in 5773808d04c7
Removing intermediate container 5773808d04c7
 ---> 6ec9899a292d
Step 3/9 : ENV PATCH_DIR="patches"     PATCH_FILE="applyPatches.sh"
 ---> Running in a778d8cef721
Removing intermediate container a778d8cef721
 ---> 8c6b0eb65d8c
Step 4/9 : ENV PATCH_INSTALL_DIR=$ORACLE_BASE/patches
 ---> Running in 3fa7d705380c
Removing intermediate container 3fa7d705380c
 ---> 969d170c1be8
Step 5/9 : COPY $PATCH_DIR $PATCH_INSTALL_DIR/
 ---> 7e7e8dc9c7ef
Step 6/9 : USER root
 ---> Running in 820d27e29c1e
Removing intermediate container 820d27e29c1e
 ---> 80b218c84e9a
Step 7/9 : RUN chown -R oracle:dba $PATCH_INSTALL_DIR
 ---> Running in e60c1a21b24a
Removing intermediate container e60c1a21b24a
 ---> 0a24db9544d6
Step 8/9 : USER oracle
 ---> Running in bbd09c131108
Removing intermediate container bbd09c131108
 ---> 03eae9504bea
Step 9/9 : RUN chmod ug+x $PATCH_INSTALL_DIR/*.sh &&     sync &&     $PATCH_INSTALL_DIR/$PATCH_FILE
 ---> Running in a9f988e2f99d
At the end of the build, query the docker images to view the new patched image.
docker images
REPOSITORY          TAG                    IMAGE ID            CREATED             SIZE
oracle/database     12.2.0.1-ee-27105253   5cb7a1afe293        7 seconds ago       15.7GB
oracle/database     12.2.0.1-ee            e4ac99149312        12 days ago         13.2GB
oraclelinux         7-slim                 c94cc930790a        4 weeks ago         117MB
The next step is to make the Oracle database (= docker container) run using the patched image. For this, remove the docker container and run it again specifying the patched image. Since the database configuration files and datafiles exist, the existing database will be started instead of a new DB being created.
docker rm oracdb
docker run --name oracdb -p 1521:1521 -p 5500:5500 -v /opt/app/oracle/oradata/docker/oracdb:/opt/oracle/oradata 
-e "ORACLE_SID=oracdb" 
-e "ORACLE_PWD=dockerOra278" 
-e "ORACLE_PDB=orapdb" 
oracle/database:12.2.0.1-ee-27105253
LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 22-MAY-2018 13:13:19

Copyright (c) 1991, 2016, Oracle.  All rights reserved.

Starting /opt/oracle/product/12.2.0.1/dbhome_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.2.0.1.0 - Production
System parameter file is /opt/oracle/product/12.2.0.1/dbhome_1/network/admin/listener.ora
Log messages written to /opt/oracle/diag/tnslsnr/0f66ae60fa4f/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date                22-MAY-2018 13:13:20
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /opt/oracle/product/12.2.0.1/dbhome_1/network/admin/listener.ora
Listener Log File         /opt/oracle/diag/tnslsnr/0f66ae60fa4f/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
The listener supports no services
The command completed successfully

SQL*Plus: Release 12.2.0.1.0 Production on Tue May 22 13:13:20 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> ORACLE instance started.

Total System Global Area 1610612736 bytes
Fixed Size                  8793304 bytes
Variable Size             671089448 bytes
Database Buffers          922746880 bytes
Redo Buffers                7983104 bytes
Database mounted.
Database opened.
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
The Oracle base remains unchanged with value /opt/oracle
#########################
DATABASE IS READY TO USE!
#########################
The following output is now a tail of the alert.log:
ORAPDB(3):Database Characterset for ORAPDB is AL32UTF8
2018-05-22T13:13:43.413019+00:00
ORAPDB(3):Opatch validation is skipped for PDB ORAPDB (con_id=0)
ORAPDB(3):Opening pdb with no Resource Manager plan active
Pluggable database ORAPDB opened read write
2018-05-22T13:13:44.369108+00:00
Starting background process CJQ0
Completed: ALTER DATABASE OPEN
2018-05-22T13:13:44.385359+00:00
CJQ0 started with pid=40, OS id=346
2018-05-22T13:13:45.487152+00:00
Shared IO Pool defaulting to 64MB. Trying to get it from Buffer Cache for process 89.
===========================================================
Dumping current patch information
===========================================================
Patch Id: 27105253
Patch Description: Database Release Update : 12.2.0.1.180116 (27105253)
Patch Apply Time: 2018-05-22T12:14:02Z
The next step is to run the post-patch installation scripts using datapatch. To run this inside the docker container use docker exec. Open all PDBs before running datapatch.
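Opening all PDBs beforehand can be done with a one-liner such as the following (a sketch):
docker exec -ti oracdb bash -c 'echo "alter pluggable database all open;" | sqlplus / as sysdba'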
docker exec -ti oracdb /opt/oracle/product/12.2.0.1/dbhome_1/OPatch/datapatch -verbose

SQL Patching tool version 12.2.0.1.0 Production on Tue May 22 13:18:06 2018
Copyright (c) 2012, 2017, Oracle.  All rights reserved.

Log file for this invocation: /opt/oracle/cfgtoollogs/sqlpatch/sqlpatch_538_2018_05_22_13_18_06/sqlpatch_invocation.log

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Bundle series DBRU:
  ID 180116 in the binary registry and not installed in any PDB

Adding patches to installation queue and performing prereq checks...
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED ORAPDB
    Nothing to roll back
    The following patches will be applied:
      27105253 (DATABASE RELEASE UPDATE 12.2.0.1.180116)

Installing patches...
Patch installation complete.  Total patches installed: 3

Validating logfiles...
Patch 27105253 apply (pdb CDB$ROOT): SUCCESS
  logfile: /opt/oracle/cfgtoollogs/sqlpatch/27105253/21862470/27105253_apply_ORACDB_CDBROOT_2018May22_13_18_20.log (no errors)
Patch 27105253 apply (pdb PDB$SEED): SUCCESS
  logfile: /opt/oracle/cfgtoollogs/sqlpatch/27105253/21862470/27105253_apply_ORACDB_PDBSEED_2018May22_13_20_27.log (no errors)
Patch 27105253 apply (pdb ORAPDB): SUCCESS
  logfile: /opt/oracle/cfgtoollogs/sqlpatch/27105253/21862470/27105253_apply_ORACDB_ORAPDB_2018May22_13_20_27.log (no errors)
SQL Patching tool complete on Tue May 22 13:22:11 2018
Querying the patch registry shows the patch has been applied.
SQL> select con_id,patch_id,patch_uid,version,action,status,BUNDLE_ID from cdb_registry_sqlpatch order by 1;

    CON_ID   PATCH_ID  PATCH_UID VERSION              ACTION          STATUS           BUNDLE_ID
---------- ---------- ---------- -------------------- --------------- --------------- ----------
         1   27105253   21862470 12.2.0.1             APPLY           SUCCESS             180116
         3   27105253   21862470 12.2.0.1             APPLY           SUCCESS             180116
The process is repeated when the next patch needs to be applied. For example, applying release update 12.2.0.1.180417 (27674384) would entail creating a new image.
docker images
REPOSITORY          TAG                    IMAGE ID            CREATED             SIZE
oracle/database     12.2.0.1-ee-27674384   b6840341320a        8 minutes ago       16.1GB
oracle/database     12.2.0.1-ee-27105253   5cb7a1afe293        3 hours ago         15.7GB
oraclelinux         7-slim                 c94cc930790a        4 weeks ago         117MB
Remove the docker container and run it again out of the new patched image, then run the post-patch installation scripts. Verify the patch is applied in all PDBs.
select con_id,patch_id,patch_uid,version,action,status,BUNDLE_ID from cdb_registry_sqlpatch order by 1;

    CON_ID   PATCH_ID  PATCH_UID VERSION              ACTION          STATUS           BUNDLE_ID
---------- ---------- ---------- -------------------- --------------- --------------- ----------
         1   27674384   22098633 12.2.0.1             APPLY           SUCCESS             180417
         1   27105253   21862470 12.2.0.1             APPLY           SUCCESS             180116
         3   27674384   22098633 12.2.0.1             APPLY           SUCCESS             180417
         3   27105253   21862470 12.2.0.1             APPLY           SUCCESS             180116
Finally remove the old image (if no longer needed).
docker rmi oracle/database:12.2.0.1-ee-27105253
Untagged: oracle/database:12.2.0.1-ee-27105253
Deleted: sha256:5cb7a1afe2933a261d997ff53632a1f8615411b58f1caaebb4407e52e522cdbc
Deleted: sha256:a9dbca6635a8cb33e856ca9b27986d4faea1ffa3a2b34791dce43f2c156ce3e5
Creating a new PDB in Docker
Creating a new PDB in a docker container is the same as creating one in any other setup. The only thing to consider is the file name convert path: it has to be a location that is in the external volume. In this post the database was created with the docker container path /opt/oracle/oradata mapped to a volume, so that a new PDB's datafiles persist after the container is removed. Using this path as the prefix makes the PDB datafiles get created in the same volume. For example, creating a PDB in a non-OMF setting:
create pluggable database mypdb admin user mypdb1 identified by mypdb1 
FILE_NAME_CONVERT=('/opt/oracle/oradata/oracdb/pdbseed/','/opt/oracle/oradata/oracdb/mypdb/');

SQL> alter pluggable database mypdb open;
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ORAPDB                         READ WRITE NO
         4 MYPDB                          READ WRITE NO
Since the PDB seed was patched, any newly created PDBs will also be patched.
SQL> alter session set container=mypdb;

select con_id,patch_id,patch_uid,version,action,status,BUNDLE_ID from cdb_registry_sqlpatch order by 1;

    CON_ID   PATCH_ID  PATCH_UID VERSION              ACTION          STATUS           BUNDLE_ID
---------- ---------- ---------- -------------------- --------------- --------------- ----------
         4   27674384   22098633 12.2.0.1             APPLY           SUCCESS             180417
         4   27105253   21862470 12.2.0.1             APPLY           SUCCESS             180116
If OMF is used and the db_create_file_dest location is under the directory mapped to the volume (db_create_file_dest=/opt/oracle/oradata), then a new PDB can be created without any file name conversion.
SQL> create pluggable database cxpdb admin user cxpdb identified by cxpdb default tablespace users;

SQL> alter pluggable database cxpdb open;
SQL> alter session set container=cxpdb;

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         5 CXPDB                          READ WRITE NO



Plug/Unplug PDB in Docker
Similar to creating a new PDB, plug and unplug operations are not all that different from any other setup. What needs to be considered is whether the metadata file is preserved across docker container stops and starts.
  Unplug a PDB in Docker
  The CDB has a single PDB, which will be unplugged.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ORAPDB                         READ WRITE NO

SQL> alter pluggable database orapdb close;
The specified metadata file location is part of the volume, so it will be preserved across container restarts.
SQL> alter pluggable database orapdb unplug into '/opt/oracle/oradata/oracdb/orapdb/orapdb.xml';
The metadata file can be listed on the external host in the location mapped to the volume (-v /opt/app/oracle/oradata/docker/oracdb:/opt/oracle/oradata).
# ls -l /opt/app/oracle/oradata/docker/oracdb/oracdb/orapdb/orapdb.xml
-rw-r--r--. 1 54321 54321 7336 May 22 12:06 /opt/app/oracle/oradata/docker/oracdb/oracdb/orapdb/orapdb.xml
Finally drop the PDB
SQL> drop pluggable database orapdb keep datafiles;

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
  Plug a PDB in Docker - Same Docker Container
   In this case the earlier unplugged PDB is plugged into the CDB in the same container. As it is the same container, the metadata file location is accessible to the container along with the data files, since there has been no file movement. Therefore plug the PDB specifying the metadata file.
SQL> CREATE PLUGGABLE DATABASE orapdb using '/opt/oracle/oradata/oracdb/orapdb/orapdb.xml' nocopy;

SQL> alter pluggable database orapdb open;
SQL> alter pluggable database orapdb save state;

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ORAPDB                         READ WRITE NO
  Plug a PDB in Docker - Different Docker Container
Plugging a PDB into a CDB in a different container is slightly different from the above, as it requires making the files related to the PDB (data files and metadata file) accessible inside the destination container. This means creating a directory in the destination docker container and copying the PDB related files to it. Once this is done the rest of the steps are the same as moving PDBs between servers.
It's assumed the PDB is unplugged and its datafiles are available in /opt/oracle/oradata/oracdb/orapdb/ (or, externally to the docker container, in /opt/app/oracle/oradata/docker/oracdb/oracdb/orapdb). This PDB will be moved into a CDB in a docker container called mycdb.
In the destination container create a directory to stage the PDB related files. This must also be accessible on the external host. The mycdb container was created with a volume mapped as -v /opt/app/oracle/oradata/docker/mycdb:/opt/oracle/oradata. As such, the staging directory is created inside the oradata directory.
docker exec -ti mycdb mkdir -p /opt/oracle/oradata/pdbstaging
This is visible in the external host directory structure.
 pwd
/opt/app/oracle/oradata/docker/mycdb
ls
dbconfig  mycdb  MYCDB  pdbstaging
Copy (or scp, if the destination docker container is on a different host) the PDB related files to the staging directory and check they are visible inside the docker container. Once copied, make sure the file ownership is changed to the oracle user and group inside the container.
cp /opt/app/oracle/oradata/docker/oracdb/oracdb/orapdb/* /opt/app/oracle/oradata/docker/mycdb/pdbstaging/

chown 54321:54321 *
# ls -l
-rw-r--r--. 1 54321 54321      7577 May 30 14:25 orapdb.xml
-rw-r-----. 1 54321 54321 503324672 May 30 14:25 sysaux01.dbf
-rw-r-----. 1 54321 54321 283123712 May 30 14:25 system01.dbf
-rw-r-----. 1 54321 54321 104865792 May 30 14:25 undotbs01.dbf
-rw-r-----. 1 54321 54321   5251072 May 30 14:25 users01.dbf

docker exec -ti mycdb ls /opt/oracle/oradata/pdbstaging
orapdb.xml  sysaux01.dbf  system01.dbf  undotbs01.dbf  users01.dbf
Run DBMS_PDB.CHECK_PLUG_COMPATIBILITY and, if successful, plug the PDB at the destination. In this case the destination CDB uses OMF.
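A sketch of the compatibility check, run at the destination CDB root (any violations are reported in PDB_PLUG_IN_VIOLATIONS):
sqlplus / as sysdba <<'EOF'
SET SERVEROUTPUT ON
DECLARE
  compatible BOOLEAN;
BEGIN
  -- checks the unplugged PDB's manifest against this CDB
  compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                  pdb_descr_file => '/opt/oracle/oradata/pdbstaging/orapdb.xml',
                  pdb_name       => 'PDBTHREE');
  DBMS_OUTPUT.PUT_LINE(CASE WHEN compatible THEN 'Compatible' ELSE 'Not compatible' END);
END;
/
exit;
EOF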
create pluggable database pdbthree as clone using '/opt/oracle/oradata/pdbstaging/orapdb.xml'
source_file_name_convert=('/opt/oracle/oradata/oracdb/orapdb','/opt/oracle/oradata/pdbstaging') move;

Pluggable database created.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 MYPDB                          READ WRITE NO
         5 PDBTHREE                       MOUNTED
SQL> alter pluggable database pdbthree open;

SQL> alter session set container=pdbthree;

SQL> select name from v$datafile;

NAME
----------------------------------------------------------------------------------------------------
/opt/oracle/oradata/MYCDB/6D6D69D5DC29042BE053020011ACFE1B/datafile/o1_mf_system_fjxb3m30_.dbf
/opt/oracle/oradata/MYCDB/6D6D69D5DC29042BE053020011ACFE1B/datafile/o1_mf_sysaux_fjxb3m3o_.dbf
/opt/oracle/oradata/MYCDB/6D6D69D5DC29042BE053020011ACFE1B/datafile/o1_mf_undotbs1_fjxb3m3p_.dbf
/opt/oracle/oradata/MYCDB/6D6D69D5DC29042BE053020011ACFE1B/datafile/o1_mf_users_fjxb3m3r_.dbf

Export and Import From DB in Docker
Similar to previous cases, the concern here is whether the dump files are to be made available outside of the docker container. If the intention is to copy or move a dump file to a different location, it must be accessible outside of the docker container. For this reason the dump file directory must be created in the volume location.
docker exec -ti oracdb mkdir -p /opt/oracle/oradata/dpdumps
This will be visible in the external host directory path
pwd
/opt/app/oracle/oradata/docker/oracdb
ls
dbconfig  dpdumps  oracdb
Create a database directory using this location and run exports and imports.
create directory dumpdir as '/opt/oracle/oradata/dpdumps';
grant read,write on directory dumpdir to asanga;
Exports can be done by connecting to the DB externally
expdp asanga/asa@localhost:1521/orapdb directory=dumpdir  tables=x

Export: Release 12.2.0.1.0 - Production on Wed May 23 11:43:49 2018

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Starting "ASANGA"."SYS_EXPORT_TABLE_01":  asanga/********@localhost:1521/orapdb directory=dumpdir tables=x
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "ASANGA"."X"                                12.75 KB    1000 rows
Master table "ASANGA"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ASANGA.SYS_EXPORT_TABLE_01 is:
  /opt/oracle/oradata/dpdumps/expdat.dmp
Job "ASANGA"."SYS_EXPORT_TABLE_01" successfully completed at Wed May 23 10:44:21 2018 elapsed 0 00:00:2
Or as a docker container execution
docker exec -ti oracdb expdp asanga/asa@orapdb directory=dumpdir  tables=x

Export: Release 12.2.0.1.0 - Production on Wed May 23 10:47:22 2018

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Starting "ASANGA"."SYS_EXPORT_TABLE_01":  asanga/********@orapdb directory=dumpdir tables=x
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "ASANGA"."X"                                12.75 KB    1000 rows
Master table "ASANGA"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ASANGA.SYS_EXPORT_TABLE_01 is:
  /opt/oracle/oradata/dpdumps/expdat.dmp
Job "ASANGA"."SYS_EXPORT_TABLE_01" successfully completed at Wed May 23 10:47:43 2018 elapsed 0 00:00:20
Imports too can be done the same way, provided the dump file is visible inside the docker container.
docker exec -ti oracdb impdp asanga/asa@orapdb directory=dumpdir  dumpfile=expdat.dmp

Import: Release 12.2.0.1.0 - Production on Wed May 23 10:48:46 2018

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "ASANGA"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "ASANGA"."SYS_IMPORT_FULL_01":  asanga/********@orapdb directory=dumpdir dumpfile=expdat.dmp
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "ASANGA"."X"                                12.75 KB    1000 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "ASANGA"."SYS_IMPORT_FULL_01" successfully completed at Wed May 23 10:49:10 2018 elapsed 0 00:00:21
Using RMAN Backup in Docker
The database created by default in docker runs in noarchivelog mode. As a result only cold backups can be taken. It is possible to stop the docker container and back up the data files using OS utilities, since the data files are visible outside the docker container due to the use of volumes. However, if RMAN backups are preferred, the database must be put in mount mode. This cannot be done while connected remotely: as soon as the database is shut down, the listener registration is lost.
rman target sys/dockerOra278@localhost:1521/oracdb

RMAN> shutdown immediate;

using target database control file instead of recovery catalog
database dismounted
Oracle instance shut down

RMAN> startup mount

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of startup command at 05/30/2018 15:34:53
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
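As an aside, this limitation goes away if the database is switched to archivelog mode, after which hot backups are possible. A sketch, run inside the container:
docker exec -ti oracdb bash -c 'sqlplus / as sysdba <<EOF
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
EOF'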
If the backup files are to persist, the backup file location must be on a directory path mapped to the external volume.
docker exec -ti oracdb mkdir -p /opt/oracle/oradata/backups
This folder is visible on the external host
pwd
/opt/app/oracle/oradata/docker/oracdb
ls
backups  dbconfig  dpdumps  oracdb
To start the database in mount mode, log in to the docker container and start the DB in mount mode (this is fundamentally against the use of docker containers: one should access the service running out of a docker container, but never connect into it, if it can be helped).
$ docker exec -ti oracdb /bin/bash
[oracle@4252ebebab2f ~]$ sqlplus  / as sysdba
SQL> shutdown immediate;
SQL> startup mount;
Once the database is in mount mode, the backup can be run by connecting remotely.
rman target sys/dockerOra278@localhost:1521/oracdb

Starting backup at 23-MAY-18
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/opt/oracle/oradata/oracdb/system01.dbf
input datafile file number=00003 name=/opt/oracle/oradata/oracdb/sysaux01.dbf
input datafile file number=00004 name=/opt/oracle/oradata/oracdb/undotbs01.dbf
input datafile file number=00007 name=/opt/oracle/oradata/oracdb/users01.dbf
channel ORA_DISK_1: starting piece 1 at 23-MAY-18
channel ORA_DISK_1: finished piece 1 at 23-MAY-18
piece handle=/opt/oracle/oradata/backups/backup05t3jug5_1_1 tag=TAG20180523T110125 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00014 name=/opt/oracle/oradata/oracdb/orapdb/sysaux01.dbf
input datafile file number=00013 name=/opt/oracle/oradata/oracdb/orapdb/system01.dbf
input datafile file number=00015 name=/opt/oracle/oradata/oracdb/orapdb/undotbs01.dbf
input datafile file number=00016 name=/opt/oracle/oradata/oracdb/orapdb/users01.dbf
channel ORA_DISK_1: starting piece 1 at 23-MAY-18
channel ORA_DISK_1: finished piece 1 at 23-MAY-18
piece handle=/opt/oracle/oradata/backups/backup06t3juh9_1_1 tag=TAG20180523T110125 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/opt/oracle/oradata/oracdb/pdbseed/sysaux01.dbf
input datafile file number=00005 name=/opt/oracle/oradata/oracdb/pdbseed/system01.dbf
input datafile file number=00008 name=/opt/oracle/oradata/oracdb/pdbseed/undotbs01.dbf
channel ORA_DISK_1: starting piece 1 at 23-MAY-18
channel ORA_DISK_1: finished piece 1 at 23-MAY-18
piece handle=/opt/oracle/oradata/backups/backup07t3juho_1_1 tag=TAG20180523T110125 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
Finished backup at 23-MAY-18

Starting Control File and SPFILE Autobackup at 23-MAY-18
piece handle=/opt/oracle/product/12.2.0.1/dbhome_1/dbs/c-2651185508-20180523-01 comment=NONE
Finished Control File and SPFILE Autobackup at 23-MAY-18
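The backup command itself is not captured in the output above. Given the piece names, it would have been of the following form (a hedged reconstruction, not the exact command used):
rman target sys/dockerOra278@localhost:1521/oracdb <<'EOF'
backup database format '/opt/oracle/oradata/backups/backup%U';
EOF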
RMAN can also be accessed via docker exec.
docker exec -ti oracdb rman target /

RMAN> list backup summary;

using target database control file instead of recovery catalog

List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
5       B  F  A DISK        23-MAY-18       1       1       NO         TAG20180523T110125
6       B  F  A DISK        23-MAY-18       1       1       NO         TAG20180523T110125
7       B  F  A DISK        23-MAY-18       1       1       NO         TAG20180523T110125
8       B  F  A DISK        23-MAY-18       1       1       NO         TAG20180523T110231
The backup files are accessible outside the docker container.
cd /opt/app/oracle/oradata/docker/oracdb/backups/
ls
backup05t3jug5_1_1  backup06t3juh9_1_1  backup07t3juho_1_1

Miscellaneous
  Login to docker container as root
If it is ever needed to log in to the docker container as root, use the following.
docker exec -ti --user root oracdb /bin/bash
bash-4.2# whoami
root
bash-4.2#
  Committing changes to the docker image
Imagine a scenario where the use of OMF is to be disabled after the docker image has been built. One option is to make the changes to the earlier mentioned files and build the image from scratch. But if there are patches to apply too, this could mean building yet another image. However, it is possible to make changes to the files inside the docker container and create a new image with those changes. Taking the earlier mentioned scenario, to disable OMF log in to the docker container and modify the dbca.rsp.tmpl and createDB.sh files back to their default settings. (The vi editor is not present on the OEL 7 slim image. However, it can be installed using yum when connected as root; see above for connecting to docker as root.)
docker exec -ti oracdb /bin/bash
[oracle@4252ebebab2f ~]$ cd /opt/oracle
[oracle@4252ebebab2f oracle]$ ls
admin             createDB.sh    oradata            scripts
cfgtoollogs       dbca.rsp.tmpl  product            setPassword.sh
checkDBStatus.sh  diag           runOracle.sh       startDB.sh
checkpoints       oraInventory   runUserScripts.sh
[oracle@4252ebebab2f oracle]$ vi dbca.rsp.tmpl
[oracle@4252ebebab2f oracle]$ grep initParam dbca.rsp.tmpl
# Name          : initParams
initParams=audit_trail=none,audit_sys_operations=false

[oracle@4252ebebab2f oracle]$ vi createDB.sh
[oracle@4252ebebab2f oracle]$ grep control01 createDB.sh
   ALTER SYSTEM SET control_files='$ORACLE_BASE/oradata/$ORACLE_SID/control01.ctl' scope=spfile;
Exit the docker container and commit the changes by building a new image. For the commit, specify the container and a new tag.
docker commit -m="no OMF" oracdb oracle/database:12.2.0.1-ee-27674384-noOMF
sha256:e0c9e849dc02a9b3a364c5761c3f4bd4281920fb68a953271be5f93bf2cd8afc
[oracle@hpc3 ~]$ docker images
REPOSITORY          TAG                          IMAGE ID            CREATED             SIZE
oracle/database     12.2.0.1-ee-27674384-noOMF   e0c9e849dc02        16 seconds ago      17.1GB
oracle/database     12.2.0.1-ee-27674384         efab903d5833        7 days ago          16.2GB
oraclelinux         7-slim                       c94cc930790a        5 weeks ago         117MB
Now there are two images, with and without OMF, which can be used to create databases.
docker run --name cdb2 ..... oracle/database:12.2.0.1-ee-27674384-noOMF
  Remove dangling images and volumes
Identify dangling volumes
docker volume ls -qf dangling=true
299236fea77483719886b804ca48ad5f42738e7c0085d597824793daae968165
7dfa1c6d924180bc2f412fa52a6a6a5f0f90d82297c51501e76e3fa227e71eb7
Remove dangling volumes
docker volume rm $(docker volume ls -qf dangling=true)
Identify dangling images
docker images -qf dangling=true
Remove dangling images
docker rmi $(docker images -qf dangling=true)
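In recent docker releases both can be covered in one step with docker system prune (note this also removes all stopped containers and unused networks, so use with care):
docker system prune --volumes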
Useful metalink notes
Oracle Support for Database Running on Docker [ID 2216342.1]
Support for Docker Running on Oracle Linux [ID 1921163.1]

Related Post
Installing Docker CE on RHEL 7