Tuesday, November 1, 2016

Plugging a Non-CDB (pre-12c) into CDB Using Transportable DB (TDB) / Tablespaces (TTS)

The previous post showed how a 12c non-CDB could be plugged in as a PDB. This post shows the steps for plugging in a pre-12c DB (anything pre-12c is by definition non-CDB) as a PDB using the transportable database (TDB) and transportable tablespace (TTS) methods. Both methods have similar steps, the only difference being that with TDB the entire DB is transported while with the TTS method only a set of tablespaces is plugged in. In this case an enterprise edition RAC DB (called ent114) of version 11.2.0.4 will be plugged into a single instance CDB using the TDB/TTS methods.

Both methods require creating a PDB before the transport can begin. The tablespaces are attached to this "pre-existing" PDB. In this case it is called PDB114.
SQL> create pluggable database pdb114 admin user admin identified by admin;
SQL> alter pluggable database pdb114 open;
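Before continuing, the open mode of the new PDB could be verified from the root container; PDB114 should be listed with an open mode of READ WRITE.
SQL> select name, open_mode from v$pdbs where name='PDB114';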
The following information and steps are common to both methods. The 11.2 database has the following tablespaces. The user defined tablespaces are EXAMPLE, NEWTBS, SQLT, TBS2K, TBS32K, TEST and USERS.
SQL> select TABLESPACE_NAME,BLOCK_SIZE from dba_tablespaces order by 1;

TABLESPACE_NAME                BLOCK_SIZE
------------------------------ ----------
EXAMPLE                              8192
NEWTBS                               8192
SQLT                                 8192
SYSAUX                               8192
SYSTEM                               8192
TBS2K                                2048
TBS32K                              32768
TEMP                                 8192
TEST                                 8192
UNDOTBS1                             8192
UNDOTBS2                             8192
USERS                                8192
There are two tablespaces with non-default block sizes (TBS2K and TBS32K). Before the transport, create the matching non-default block size buffer caches in the CDB as well, as shown next.
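A minimal sketch of what this could look like in the CDB is shown below; the cache sizes are illustrative and should be sized to suit the workload.
SQL> alter system set db_2k_cache_size=32M scope=both;
SQL> alter system set db_32k_cache_size=64M scope=both;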
Secondly, check whether the endianness of the source and target systems is the same. If they differ, conversion is required before the transported data files could be attached.
SQL> SELECT d.PLATFORM_NAME, ENDIAN_FORMAT FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;  

PLATFORM_NAME        ENDIAN_FORMAT
-------------------- --------------
Linux x86 64-bit     Little
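Here both source and target are Linux x86 64-bit, so no conversion is needed. If the endian formats had differed, RMAN convert could be used; a rough sketch of a source side conversion is shown below, with an illustrative target platform name and output location (the tablespace must already be in read only mode when converting on the source).
RMAN> convert tablespace users
2> to platform 'AIX-Based Systems (64-bit)'
3> format '/home/oracle/convert/%U';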
This is where the steps common to both TDB and TTS end. Next are the steps specific to the TDB method.

Plugging pre-12c DB using TDB
Put the user defined tablespaces into read only mode.
alter tablespace EXAMPLE read only;
alter tablespace NEWTBS read only;
alter tablespace SQLT read only;
alter tablespace TBS2K read only;
alter tablespace TBS32K read only;
alter tablespace TEST read only;
alter tablespace USERS read only;

SQL> select tablespace_name,status from dba_tablespaces;

TABLESPACE_NAME                STATUS
------------------------------ ---------
SYSTEM                         ONLINE
SYSAUX                         ONLINE
UNDOTBS1                       ONLINE
TEMP                           ONLINE
UNDOTBS2                       ONLINE
USERS                          READ ONLY
EXAMPLE                        READ ONLY
TEST                           READ ONLY
SQLT                           READ ONLY
NEWTBS                         READ ONLY
TBS2K                          READ ONLY
TBS32K                         READ ONLY
While the user defined tablespaces are in read only mode, run a full export of the DB with the transportable=always and version=12 options. This option is supported on standard edition databases. The following expdp command is used in this case and the tail end of the export log output is shown as well.
expdp system full=y dumpfile=ent114.dmp directory=EXEC_DIR transportable=always version=12 logfile=ent114.log

******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
  /usr/local/exdata/ent114.dmp
******************************************************************************
Datafiles required for transportable tablespace EXAMPLE:
  +DATA/ent11g2/datafile/example.268.785857285
Datafiles required for transportable tablespace NEWTBS:
  +DATA/ent11g2/datafile/newtbs.273.888232727
Datafiles required for transportable tablespace SQLT:
  +DATA/ent11g2/datafile/sqlt.269.826463789
Datafiles required for transportable tablespace TBS2K:
  +DATA/ent11g2/datafile/tbs2k.272.921767965
Datafiles required for transportable tablespace TBS32K:
  +DATA/ent11g2/datafile/tbs32k.271.921768047
Datafiles required for transportable tablespace TEST:
  +DATA/ent11g2/datafile/test.274.888250337
Datafiles required for transportable tablespace USERS:
  +DATA/ent11g2/datafile/users.264.785694801
Job "SYSTEM"."SYS_EXPORT_FULL_01" completed with 1 error(s) at Mon Sep 5 14:52:08 2016 elapsed 0 00:05:34
The data files listed for transport must be copied out of the ASM disk group. There are many ways to do this; the easiest could be to use cp with asmcmd. One example is shown here.
ASMCMD> cp TBS2K.272.921767965 /home/grid/backup/
copying +DATA/ent11g2/datafile/TBS2K.272.921767965 -> /home/grid/backup/TBS2K.272.921767965
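If copying each file one by one gets tedious, the remaining copies could be scripted from the OS shell instead; a rough sketch, run as the grid user with the ASM environment set, using the file names listed in the export log above.
for f in example.268.785857285 newtbs.273.888232727 sqlt.269.826463789 \
         tbs2k.272.921767965 tbs32k.271.921768047 test.274.888250337 \
         users.264.785694801
do
  asmcmd cp +DATA/ent11g2/datafile/$f /home/grid/backup/
done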
Once all the data files required for transport are copied out of ASM, the tablespaces made read only earlier could be made read write again.
alter tablespace EXAMPLE read write;
alter tablespace NEWTBS read write;
alter tablespace SQLT read write;
alter tablespace TBS2K read write;
alter tablespace TBS32K read write;
alter tablespace TEST read write;
alter tablespace USERS read write;
The next step is to move the data files copied out of ASM earlier to the actual location where the PDB resides. In this case the PDB resides on a remote server and the data file location for the PDB is
/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile
Therefore the data files are copied over to that location.
scp * oracle@192.168.0.99:/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/
Create a database directory in the PDB and copy the dump file created by expdp to the location referred to by the DB directory. This will be used during the import. Since no conversion is needed, the endian formats being the same, the import can go ahead without any additional work. The parameter file used for the import is shown below. TRANSPORT_DATAFILES has a comma separated list of data files with their new paths.
cat import.par
FULL=Y
DUMPFILE=ent114.dmp
DIRECTORY=ora_dump
TRANSPORT_DATAFILES=
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/EXAMPLE.268.785857285',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/NEWTBS.273.888232727',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/SQLT.269.826463789',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/TBS2K.272.921767965',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/TBS32K.271.921768047',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/TEST.274.888250337',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/USERS.264.785694801'
LOGFILE=ent114imp.log
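The ora_dump directory referenced in the parameter file must exist in the PDB before the import runs. A minimal sketch of creating it is shown below, assuming the dump file was copied to a hypothetical /home/oracle/dump location on the PDB's server (the user creating the directory automatically gets read and write privileges on it).
sqlplus system@pdb114
SQL> create directory ora_dump as '/home/oracle/dump';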
The DB is imported using the parameter file. Objects that already exist could result in warnings; however, the import completes successfully.
impdp system@pdb114 parfile=import.par
At the end of the import the user defined tablespaces are plugged in and online.
SQL> SELECT tablespace_name, plugged_in, status FROM   dba_tablespaces;

TABLESPACE_NAME                PLU STATUS
------------------------------ --- ---------
SYSTEM                         NO  ONLINE
SYSAUX                         NO  ONLINE
TEMP                           NO  ONLINE
EXAMPLE                        YES ONLINE
NEWTBS                         YES ONLINE
SQLT                           YES ONLINE
TBS2K                          YES ONLINE
TBS32K                         YES ONLINE
TEST                           YES ONLINE
USERS                          YES ONLINE
All users in the pre-12c DB are created as local users of the PDB. However, the default database properties of the PDB are not affected; for example, in the pre-12c DB the default permanent tablespace was USERS, but after the transport the PDB still retains its original default tablespace.
SQL> SELECT PROPERTY_NAME,PROPERTY_VALUE FROM database_properties where property_name like 'DEFAULT%';

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ----------------
DEFAULT_TEMP_TABLESPACE        TEMP
DEFAULT_PERMANENT_TABLESPACE   SYSTEM
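That the pre-12c users have come across as local users could also be confirmed from the PDB; the transported users should be listed with COMMON set to NO.
SQL> select username, common from dba_users where common='NO';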
At this stage the PDB is ready for use with the pre-12c DB plugged in.



Plugging pre-12c DB using TTS
This method uses transportable tablespaces to copy the user defined tablespaces to the PDB location and plug them into the PDB created earlier. The same user defined tablespaces mentioned in the earlier method are used in this case as well. Before the tablespaces could be transported, verify they are self contained. DBMS_TTS.TRANSPORT_SET_CHECK could be used for this; however, unlike in the previous post, a strict or full containment check is also performed. The Oracle admin guide says "for strict or full containment, you must additionally set the TTS_FULL_CHECK parameter to TRUE. The strict or full containment check is for cases that require capturing not only references going outside the transportable set, but also those coming into the set. Tablespace Point-in-Time Recovery (TSPITR) is one such case where dependent objects must be fully contained or fully outside the transportable set".
SQL> exec DBMS_TTS.TRANSPORT_SET_CHECK('EXAMPLE,NEWTBS,SQLT,TBS2K,TBS32K,TEST,USERS', TRUE,TRUE);
Check for any violations
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
no rows selected
If there are no violations, proceed with the next steps. As in the earlier method, it is important to check that the source and the target have the same endian format.
Put the user defined tablespaces into read only mode and run the export. The parameter file content is shown below.
cat tts.par
dumpfile=tts114.dmp
logfile=tts114.log
directory=EXEC_DIR
transport_tablespaces=EXAMPLE,NEWTBS,SQLT,TBS2K,TBS32K,TEST,USERS
transport_full_check=y

expdp system parfile=tts.par
At the end of the export the output log will list the data files for transport.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/******** parfile=tts.par
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
  /usr/local/exdata/tts114.dmp
******************************************************************************
Datafiles required for transportable tablespace EXAMPLE:
  +DATA/ent11g2/datafile/example.268.785857285
Datafiles required for transportable tablespace NEWTBS:
  +DATA/ent11g2/datafile/newtbs.273.888232727
Datafiles required for transportable tablespace SQLT:
  +DATA/ent11g2/datafile/sqlt.269.826463789
Datafiles required for transportable tablespace TBS2K:
  +DATA/ent11g2/datafile/tbs2k.272.921767965
Datafiles required for transportable tablespace TBS32K:
  +DATA/ent11g2/datafile/tbs32k.271.921768047
Datafiles required for transportable tablespace TEST:
  +DATA/ent11g2/datafile/test.274.888250337
Datafiles required for transportable tablespace USERS:
  +DATA/ent11g2/datafile/users.264.785694801
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Mon Sep 5 16:36:07 2016 elapsed 0 00:01:41
Same as in the previous method, copy the data files out of ASM. Once copied out, put the tablespaces back into read write mode. Transfer the data files copied out of ASM to the PDB's data file location (same as in the earlier method).
On the PDB create a DB directory and transfer the exported dump file to it. Unlike with the TDB method, with the TTS method the users must exist in the PDB before the import. If not, the import will fail with
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29342: user ASANGA does not exist in the database
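A minimal sketch of pre-creating a missing user (ASANGA from the error above) and the tts_dir directory used by the import parameter file is shown below; the password, privilege and directory path are illustrative.
sqlplus system@pdb114
SQL> create user asanga identified by asanga;
SQL> grant create session to asanga;
SQL> create directory tts_dir as '/home/oracle/dump';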
Once all required users are created in the PDB, begin the import. The content of the import parameter file is given below. TRANSPORT_DATAFILES reflects the actual file paths of the data files.
cat ttsimp.par
DUMPFILE=tts114.dmp
DIRECTORY=tts_dir
TRANSPORT_DATAFILES=
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/EXAMPLE.268.785857285',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/NEWTBS.273.888232727',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/SQLT.269.826463789',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/TBS2K.272.921767965',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/TBS32K.271.921768047',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/TEST.274.888250337',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/USERS.264.785694801'
LOGFILE=tts114imp.log

impdp system@pdb114 parfile=ttsimp.par
After the import the tablespaces will be plugged in but in read only mode.
SQL>  SELECT tablespace_name, plugged_in, status FROM   dba_tablespaces;

TABLESPACE_NAME                PLU STATUS
------------------------------ --- ---------
SYSTEM                         NO  ONLINE
SYSAUX                         NO  ONLINE
TEMP                           NO  ONLINE
EXAMPLE                        YES READ ONLY
NEWTBS                         YES READ ONLY
SQLT                           YES READ ONLY
TBS2K                          YES READ ONLY
TBS32K                         YES READ ONLY
TEST                           YES READ ONLY
USERS                          YES READ ONLY
Change the tablespaces from read only to read write mode.
alter tablespace EXAMPLE read write;
alter tablespace NEWTBS read write;
alter tablespace SQLT read write;
alter tablespace TBS2K read write;
alter tablespace TBS32K read write;
alter tablespace TEST read write;
alter tablespace USERS read write;

SQL>  SELECT tablespace_name, plugged_in, status FROM   dba_tablespaces;

TABLESPACE_NAME                PLU STATUS
------------------------------ --- ---------
SYSTEM                         NO  ONLINE
SYSAUX                         NO  ONLINE
TEMP                           NO  ONLINE
EXAMPLE                        YES ONLINE
NEWTBS                         YES ONLINE
SQLT                           YES ONLINE
TBS2K                          YES ONLINE
TBS32K                         YES ONLINE
TEST                           YES ONLINE
USERS                          YES ONLINE
At this stage the PDB is ready for use with the pre-12c DB plugged in.

Related Posts
Remote Cloning of a PDB
Move a PDB Between Servers
Plugging a SE2 non-CDB as an EE PDB Using File Copying and Remote Link

Update on 2018-03-29
The transportable tablespace method may not work for all occasions. For example, certain objects that are either in the SYSTEM tablespace or owned by SYS won't be moved as part of this approach. Most notably, sequences won't move across. This is mentioned in the 11.1 documentation: "You cannot transport the SYSTEM tablespace or objects owned by the user SYS. Some examples of such objects are PL/SQL, Java classes, callouts, views, synonyms, users, privileges, dimensions, directories, and sequences." This is not explicitly mentioned in the 12.2 documentation (limitations on transportable tablespace or general limitations for transporting data). However, testing has shown that sequences, views and PL/SQL functions are not moved across when tablespaces are plugged into a PDB on 12.2. On the non-CDB the user had the following objects.
SQL> select object_type,count(*) from user_objects group by object_type;

OBJECT_TYPE               COUNT(*)
----------------------- ----------
SEQUENCE                         1
TRIGGER                          1
INDEX                            1
TABLE                            2
FUNCTION                         1
VIEW                             1
After the self-contained tablespaces were transported and plugged into the PDB, the sequence, view and PL/SQL function were missing.
select object_type,count(*) from user_objects group by object_type;

OBJECT_TYPE               COUNT(*)
----------------------- ----------
TRIGGER                          1
TABLE                            2
INDEX                            1
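One possible way to bring such missing objects across is a separate metadata only data pump export of the affected object types from the source schema, imported into the PDB afterwards. A rough sketch is shown below; the schema name, directory and dump file names are illustrative.
cat meta.par
schemas=ASANGA
content=metadata_only
include=SEQUENCE,VIEW,FUNCTION
directory=EXEC_DIR
dumpfile=asanga_meta.dmp
logfile=asanga_meta.log

expdp system parfile=meta.par
impdp system@pdb114 directory=tts_dir dumpfile=asanga_meta.dmp logfile=asanga_metaimp.log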