Thursday, December 1, 2016

ORA-39127: unexpected error from call to "SYS"."DBMS_JVM_EXP_PERMS"."GRANT_SYSPRIVS_EXP"

While doing an export (expdp) the following error was encountered.
Processing object type DATABASE_EXPORT/ROLE
ORA-39127: unexpected error from call to "SYS"."DBMS_JVM_EXP_PERMS"."GRANT_SYSPRIVS_EXP"
ORA-44002: invalid object name
ORA-06512: at "SYS.DBMS_ASSERT", line 383
ORA-06512: at "SYS.DBMS_METADATA", line 9408
The reason for this is that the Java components have been removed but the DBMS_JVM_EXP_PERMS package remains in the database.
SQL> select comp_name,status from dba_registry order by 2;

COMP_NAME                                STATUS
---------------------------------------- --------
Oracle Database Java Packages            REMOVED
JServer JAVA Virtual Machine             REMOVED
Oracle XDK                               REMOVED
Oracle Real Application Clusters         VALID
Oracle Database Catalog Views            VALID
Oracle Workspace Manager                 VALID
Oracle XML Database                      VALID
Oracle Database Packages and Types       VALID
To fix this, remove the DBMS_JVM_EXP_PERMS package from the database; refer to 1095533.1 for the exact steps. Removing this package doesn't make any changes to the component status. The Java components will still be shown as removed; however, the packages left behind during the removal process are cleaned up.
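A minimal sketch of the kind of cleanup the note describes (treat the statements below as an assumption and follow 1095533.1 for the authoritative steps):
SQL> select owner, object_type, status from dba_objects where object_name = 'DBMS_JVM_EXP_PERMS';
SQL> drop package sys.dbms_jvm_exp_perms;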
Afterwards expdp continues without any issue.



Useful metalink note
DataPump Export (EXPDP) Failed On Identifier SYS.DBMS_JVM_EXP_PERMS Must Be Declared [ID 1095533.1]

Related Posts
ORA-39127: unexpected error from call to export_string :=WMSYS.LT_EXPORT_PKG.SCHEMA_INFO_EXP
ORA-39127: unexpected error from call to export_string :=SYS.DBMS_CUBE_EXP.SCHEMA_INFO_EXP while Exporting

Tuesday, November 1, 2016

Plugging a Non-CDB (pre-12c) into CDB Using Transportable DB (TDB) / Tablespaces (TTS)

In a previous post it was shown how a 12c non-CDB could be plugged in as a PDB. This post shows the steps for plugging a pre-12c DB (anything pre-12c is by default non-CDB) in as a PDB using the transportable DB (TDB) and transportable tablespaces (TTS) methods. Both methods have similar steps, the only difference being that with TDB the entire DB is transported while with the TTS method only a set of tablespaces is plugged in. In this case an enterprise edition RAC DB (called ent114) of version 11.2.0.4 will be plugged into a single instance CDB using the TDB/TTS methods.

Both methods require creating a PDB before the transport can begin. The tablespaces are attached to this "pre-existing" PDB. In this case it is called PDB114.
SQL> create pluggable database pdb114 admin user admin identified by admin;
SQL> alter pluggable database pdb114 open;
The following information and steps are common to both methods. The 11.2 database has the following tablespaces. The user defined tablespaces are in bold.
SQL> select TABLESPACE_NAME,BLOCK_SIZE from dba_tablespaces order by 1;

TABLESPACE_NAME                BLOCK_SIZE
------------------------------ ----------
EXAMPLE                              8192
NEWTBS                               8192
SQLT                                 8192
SYSAUX                               8192
SYSTEM                               8192
TBS2K                                2048
TBS32K                              32768
TEMP                                 8192
TEST                                 8192
UNDOTBS1                             8192
UNDOTBS2                             8192
USERS                                8192
There are two tablespaces with non-default block sizes. Before the transport, create the corresponding non-default block size buffer caches in the CDB as well (shown below).
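For example, buffer caches for the 2K and 32K block sizes could be created in the CDB along these lines (the sizes used here are assumptions):
SQL> alter system set db_2k_cache_size=16M scope=both;
SQL> alter system set db_32k_cache_size=64M scope=both;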
Secondly, check if the endianness of the source and target systems is the same. If they are different, conversion is required before the transported data files can be attached.
SQL> SELECT d.PLATFORM_NAME, ENDIAN_FORMAT FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;  

PLATFORM_NAME        ENDIAN_FORMAT
-------------------- --------------
Linux x86 64-bit     Little
This is where the steps common to both TDB and TTS end. Next are the steps specific to the TDB method.

Plugging pre-12c DB using TDB
Put the user defined tablespaces to read only mode.
alter tablespace EXAMPLE read only;
alter tablespace NEWTBS read only;
alter tablespace SQLT read only;
alter tablespace TBS2K read only;
alter tablespace TBS32K read only;
alter tablespace TEST read only;
alter tablespace USERS read only;

SQL> select tablespace_name,status from dba_tablespaces;

TABLESPACE_NAME                STATUS
------------------------------ ---------
SYSTEM                         ONLINE
SYSAUX                         ONLINE
UNDOTBS1                       ONLINE
TEMP                           ONLINE
UNDOTBS2                       ONLINE
USERS                          READ ONLY
EXAMPLE                        READ ONLY
TEST                           READ ONLY
SQLT                           READ ONLY
NEWTBS                         READ ONLY
TBS2K                          READ ONLY
TBS32K                         READ ONLY
While the user defined tablespaces are in read only mode, run a full export of the DB with the transportable=always and version=12 options. This option is supported on standard edition databases. The following expdp command is used in this case and the tail end of the export log output is shown as well.
expdp system full=y dumpfile=ent114.dmp directory=EXEC_DIR transportable=always version=12 logfile=ent114.log

******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
  /usr/local/exdata/ent114.dmp
******************************************************************************
Datafiles required for transportable tablespace EXAMPLE:
  +DATA/ent11g2/datafile/example.268.785857285
Datafiles required for transportable tablespace NEWTBS:
  +DATA/ent11g2/datafile/newtbs.273.888232727
Datafiles required for transportable tablespace SQLT:
  +DATA/ent11g2/datafile/sqlt.269.826463789
Datafiles required for transportable tablespace TBS2K:
  +DATA/ent11g2/datafile/tbs2k.272.921767965
Datafiles required for transportable tablespace TBS32K:
  +DATA/ent11g2/datafile/tbs32k.271.921768047
Datafiles required for transportable tablespace TEST:
  +DATA/ent11g2/datafile/test.274.888250337
Datafiles required for transportable tablespace USERS:
  +DATA/ent11g2/datafile/users.264.785694801
Job "SYSTEM"."SYS_EXPORT_FULL_01" completed with 1 error(s) at Mon Sep 5 14:52:08 2016 elapsed 0 00:05:34
The data files listed for transport must be copied out of the ASM disk group. There are many ways to do this; the easiest could be to use cp with asmcmd. One example is shown here.
ASMCMD> cp TBS2K.272.921767965 /home/grid/backup/
copying +DATA/ent11g2/datafile/TBS2K.272.921767965 -> /home/grid/backup/TBS2K.272.921767965
Once all the data files required for transport are copied out of ASM, the tablespaces made read only earlier can be made read write again.
alter tablespace EXAMPLE read write;
alter tablespace NEWTBS read write;
alter tablespace SQLT read write;
alter tablespace TBS2K read write;
alter tablespace TBS32K read write;
alter tablespace TEST read write;
alter tablespace USERS read write;
The next step is moving the data files copied out of ASM earlier to the actual location where the PDB resides. In this case the PDB resides on a remote server and the data file location for the PDB is
/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile
Therefore the data files are copied over to that location.
scp * oracle@192.168.0.99:/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/
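The dump file created by the export also needs a database directory inside the PDB; a minimal sketch (the directory path used here is an assumption, the name matches the parameter file below):
SQL> alter session set container=pdb114;
SQL> create directory ora_dump as '/home/oracle/dumps';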
Copy the dump file created by the expdp to the location referred to by the DB directory created above. This will be used during the import. Since there's no conversion needed, the endian format being the same, the import can go ahead without any additional work. The parameter file used for the import is shown below. TRANSPORT_DATAFILES has a comma separated list of data files with their new paths.
cat import.par
FULL=Y
DUMPFILE=ent114.dmp
DIRECTORY=ora_dump
TRANSPORT_DATAFILES=
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/EXAMPLE.268.785857285',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/NEWTBS.273.888232727',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/SQLT.269.826463789',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/TBS2K.272.921767965',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/TBS32K.271.921768047',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/TEST.274.888250337',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/USERS.264.785694801'
LOGFILE=ent114imp.log
The DB is imported using the parameter file. Objects that already exist could result in warnings; however the import completes successfully.
impdp system@pdb114 parfile=import.par
At the end of the import the user defined tablespaces are plugged in and online.
SQL> SELECT tablespace_name, plugged_in, status FROM   dba_tablespaces;

TABLESPACE_NAME                PLU STATUS
------------------------------ --- ---------
SYSTEM                         NO  ONLINE
SYSAUX                         NO  ONLINE
TEMP                           NO  ONLINE
EXAMPLE                        YES ONLINE
NEWTBS                         YES ONLINE
SQLT                           YES ONLINE
TBS2K                          YES ONLINE
TBS32K                         YES ONLINE
TEST                           YES ONLINE
USERS                          YES ONLINE
All users in the pre-12c DB are created as local users of the PDB. The PDB's default database properties are not affected; for example, in the pre-12c DB the default tablespace was USERS, but after the transport the PDB still retains its original default tablespace.
SQL> SELECT PROPERTY_NAME,PROPERTY_VALUE FROM database_properties where property_name like 'DEFAULT%';

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ----------------
DEFAULT_TEMP_TABLESPACE        TEMP
DEFAULT_PERMANENT_TABLESPACE   SYSTEM
At this stage the PDB is ready for use with pre-12c DB plugged in.



Plugging pre-12c DB using TTS
This method uses transportable tablespaces to copy the user defined tablespaces to the PDB location and plug them into the PDB created earlier. The same user defined tablespaces mentioned in the earlier method are used in this case as well. Before the tablespaces can be transported, verify that they are self contained. DBMS_TTS.TRANSPORT_SET_CHECK can be used for this; however, unlike in the previous post, a strict or full containment check is also performed. The Oracle admin guide says "for strict or full containment, you must additionally set the TTS_FULL_CHECK parameter to TRUE. The strict or full containment check is for cases that require capturing not only references going outside the transportable set, but also those coming into the set. Tablespace Point-in-Time Recovery (TSPITR) is one such case where dependent objects must be fully contained or fully outside the transportable set".
SQL> exec DBMS_TTS.TRANSPORT_SET_CHECK('EXAMPLE,NEWTBS,SQLT,TBS2K,TBS32K,TEST,USERS', TRUE,TRUE);
Check for any violations
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
no rows selected
If there are no violations, proceed with the next steps. Similar to the earlier method, it's important to check that the source and the target have the same endian format.
Put the user defined tablespaces into read only mode and run the export. The parameter file content is shown below.
cat tts.par
dumpfile=tts114.dmp
logfile=tts114.log
directory=EXEC_DIR
transport_tablespaces=EXAMPLE,NEWTBS,SQLT,TBS2K,TBS32K,TEST,USERS
transport_full_check=y

expdp system parfile=tts.par
At the end of the export the output log will list the data files required for transport.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/******** parfile=tts.par
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
  /usr/local/exdata/tts114.dmp
******************************************************************************
Datafiles required for transportable tablespace EXAMPLE:
  +DATA/ent11g2/datafile/example.268.785857285
Datafiles required for transportable tablespace NEWTBS:
  +DATA/ent11g2/datafile/newtbs.273.888232727
Datafiles required for transportable tablespace SQLT:
  +DATA/ent11g2/datafile/sqlt.269.826463789
Datafiles required for transportable tablespace TBS2K:
  +DATA/ent11g2/datafile/tbs2k.272.921767965
Datafiles required for transportable tablespace TBS32K:
  +DATA/ent11g2/datafile/tbs32k.271.921768047
Datafiles required for transportable tablespace TEST:
  +DATA/ent11g2/datafile/test.274.888250337
Datafiles required for transportable tablespace USERS:
  +DATA/ent11g2/datafile/users.264.785694801
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Mon Sep 5 16:36:07 2016 elapsed 0 00:01:41
As in the previous method, copy the data files out of ASM. Once copied out, put the tablespaces back into read write mode. Transfer the data files copied out of ASM to the PDB's data file location (same as the earlier method).
On the PDB create a DB directory and transfer the exported dump file. Unlike the TDB method, with the TTS method the users must exist in the PDB before the import. If not, the import will fail.
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29342: user ASANGA does not exist in the database
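A minimal sketch of pre-creating a missing schema owner in the PDB (the password shown is an assumption):
SQL> alter session set container=pdb114;
SQL> create user asanga identified by asanga;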
Once all required users are created in the PDB, begin the import. The content of the import parameter file is given below. TRANSPORT_DATAFILES reflects the actual file paths of the data files.
cat ttsimp.par
DUMPFILE=tts114.dmp
DIRECTORY=tts_dir
TRANSPORT_DATAFILES=
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/EXAMPLE.268.785857285',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/NEWTBS.273.888232727',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/SQLT.269.826463789',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/TBS2K.272.921767965',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/TBS32K.271.921768047',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/TEST.274.888250337',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/USERS.264.785694801'
LOGFILE=tts114imp.log

impdp system@pdb114 parfile=ttsimp.par
After the import the tablespaces will be plugged in but in read only mode.
SQL>  SELECT tablespace_name, plugged_in, status FROM   dba_tablespaces;

TABLESPACE_NAME                PLU STATUS
------------------------------ --- ---------
SYSTEM                         NO  ONLINE
SYSAUX                         NO  ONLINE
TEMP                           NO  ONLINE
EXAMPLE                        YES READ ONLY
NEWTBS                         YES READ ONLY
SQLT                           YES READ ONLY
TBS2K                          YES READ ONLY
TBS32K                         YES READ ONLY
TEST                           YES READ ONLY
USERS                          YES READ ONLY
Change the read only mode to read write
alter tablespace EXAMPLE read write;
alter tablespace NEWTBS read write;
alter tablespace SQLT read write;
alter tablespace TBS2K read write;
alter tablespace TBS32K read write;
alter tablespace TEST read write;
alter tablespace USERS read write;

SQL>  SELECT tablespace_name, plugged_in, status FROM   dba_tablespaces;

TABLESPACE_NAME                PLU STATUS
------------------------------ --- ---------
SYSTEM                         NO  ONLINE
SYSAUX                         NO  ONLINE
TEMP                           NO  ONLINE
EXAMPLE                        YES ONLINE
NEWTBS                         YES ONLINE
SQLT                           YES ONLINE
TBS2K                          YES ONLINE
TBS32K                         YES ONLINE
TEST                           YES ONLINE
USERS                          YES ONLINE
At this stage the PDB is ready for use with pre-12c DB plugged in.

Related Posts
Remote Cloning of a PDB
Move a PDB Between Servers
Plugging a SE2 non-CDB as an EE PDB Using File Copying and Remote Link

Update on 2018-03-29
The transportable tablespace method may not work for all occasions. For example, certain objects that are either in the SYSTEM tablespace or owned by SYS won't be moved as part of this approach. Most notably, sequences won't move across. This is mentioned in the 11.1 doc: "You cannot transport the SYSTEM tablespace or objects owned by the user SYS. Some examples of such objects are PL/SQL, Java classes, callouts, views, synonyms, users, privileges, dimensions, directories, and sequences." This is not explicitly mentioned in the 12.2 doc (limitations on transportable tablespace or general limitations for transporting data). However, testing has shown that sequences, views and PL/SQL functions are not moved across when tablespaces are plugged into a PDB on 12.2. On the non-CDB the user had the following objects
SQL> select object_type,count(*) from user_objects group by object_type;

OBJECT_TYPE               COUNT(*)
----------------------- ----------
SEQUENCE                         1
TRIGGER                          1
INDEX                            1
TABLE                            2
FUNCTION                         1
VIEW                             1
After the self-contained tablespaces were transported and plugged into the PDB, the sequence, view and PL/SQL function were missing
select object_type,count(*) from user_objects group by object_type;

OBJECT_TYPE               COUNT(*)
----------------------- ----------
TRIGGER                          1
TABLE                            2
INDEX                            1

Saturday, October 15, 2016

Change in 12c JDBC Behavior - setDate & getDate Does Not Truncate Timestamp

The 12c driver (12.1.0.2, ojdbc7.jar) certified for JDK7 and JDK8 does not truncate or set to "00:00:00" the time component when called with the setDate and getDate methods. The same can be observed for the 12c driver for JDK 6 as well (ojdbc6.jar). The test code given at the end of the post can be used to demonstrate this change in behavior compared to the 11gR2 (11.2.0.4, ojdbc6.jar, certified for JDK6, JDK7 and JDK8) driver. The driver/database/JDK compatibility matrix could be found here.
The test Java code runs on JDK 8 and the database is a 12.1.0.2 CDB. The only thing that changes between tests is the JDBC driver.
First the test is run with the 11.2 (11.2.0.4) driver. The output (truncated here) will have no time component on the values inserted or returned by the setDate and getDate methods. The getTimestamp method is used to check what is actually stored in the table. The output is of the following form: the method used (getDate or getTimestamp), the driver used to insert the value, and the value returned by the get method.
Date 11.2.0.4.0 2016-10-12 00:00:00
Timestamp 11.2.0.4.0 2016-10-12 00:00:00.0
There's no issue reading these values with a 12.1 (12.1.0.2, ojdbc7.jar) driver; the output will be the same (run the test commenting out the calls to the delete and insert methods).
Date 11.2.0.4.0 2016-10-12 00:00:00
Timestamp 11.2.0.4.0 2016-10-12 00:00:00.0
Changing the driver to 12c results in the date being inserted with a time component (run the test after un-commenting the previously commented methods).
Date 12.1.0.2.0 2016-10-12 12:26:12
Timestamp 12.1.0.2.0 2016-10-12 12:26:12.0
Depending on the application logic, an upgrade to the 12c driver could cause issues due to this change in behavior. Going back to the 11.2 driver may work in some cases, as getDate would truncate the time component. But the actual stored value has the time component, shown here by the getTimestamp method, so the behavior is not entirely reversible by reverting to the 11.2 driver.
Date 12.1.0.2.0 2016-10-12 00:00:00
Timestamp 12.1.0.2.0 2016-10-12 12:26:12.0
There are a few bug reports on MOS related to this issue (Bug 19297927, Bug 20551186, 1944845.1). But the final outcome seems to be that this is deliberate (Bug 17766200) and expected behavior in the 12c driver (2177909.1) and that the Oracle documentation doesn't reflect it (Bug 18124680).



Therefore, if the use of the 12c driver is a must, then Oracle has a few workarounds and patches for this. One option is to use a Calendar object and set the time components to 0 (1944845.1). The other is to apply patch 21161279 (2177909.1). Patch 21161279 supersedes patch 19297927, which only patched the setDate method, so getDate would still return values inserted with a time component. Once patch 21161279 is applied, the following JVM option must be set in order for the set/get Date methods to behave as the 11.2 driver's methods do. The patch has no effect unless this option is set.
-Doracle.jdbc.DateZeroTime=true
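For example, when the test class below is launched directly, the option can be passed on the java command line (the classpath layout shown is an assumption):
java -Doracle.jdbc.DateZeroTime=true -cp .:ojdbc7.jar OJDBC7DateTest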
Once patched and with the JVM option set, run the test with the 12.1 driver. The time components will be set to 0.
Date 12.1.0.2.0 2016-10-12 00:00:00
Timestamp 12.1.0.2.0 2016-10-12 00:00:00.0
This behavior and the output values are the same as with the 11.2 driver.
The same patch and JVM option can be used with the 12.1 driver for JDK 6 (ojdbc6.jar) as well.

Useful metalink notes
getDate() Gets Time After Upgrading To 12c [ID 2177909.1]
JDBC 12c Adds Timestamp to java.sql.Date After Upgrading From 11.2.0.4 [ID 1944845.1]
Bug 19297927 : CHANGE OF BEHAVIOR IN JDBC 12.1 DUE TO BUG 14389749 CAUSES QUERIES TO FAIL
Bug 17766200 : GETDATE AND SETDATE DO NOT TRUNCATE TIME IN JDBC 12C
Bug 20551186 : GETDATE DIFFERS WITH THE JDBC DRIVER VERSION 11G AND 12C
Bug 18124680 : GETDATE AND SETDATE DO NOT TRUNCATE TIME IN JDBC 12C BUT DOC STATES OTHERWISE
Bug 17228297 : JDBC DRIVER OJDBC*.JAR 12.1.0.1.0 SPECIFICATION VIOLATION REGRESSION

Related Post
java.sql.SQLException: Could not commit with auto-commit set on When Using 12c JDBC Driver

Update 29 March 2017
The same behavior is observed with the 12.2.0.1 driver. To remove the time portion the JVM option (-Doracle.jdbc.DateZeroTime=true) is required. No patch is needed.

Test Table DDL
create table datetest (version varchar2(20), indate date);
Java Test Code
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.Date;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.text.SimpleDateFormat;
import oracle.jdbc.pool.OracleDataSource;

public class OJDBC7DateTest {

    public static void main(String[] args) throws SQLException {

        OracleDataSource ds = new OracleDataSource();
        ds.setUser("asanga");
        ds.setPassword("asa");
        ds.setURL("jdbc:oracle:thin:@192.168.0.66:1521/onepdb");

        Connection con = ds.getConnection();
        con.setAutoCommit(false);

        System.out.println("Auto commit status : " + con.getAutoCommit());
        DatabaseMetaData meta = con.getMetaData();

        System.out.println("Driver Name " + meta.getDriverName());
        System.out.println("Driver Version " + meta.getDriverVersion());
        System.out.println("Driver Major Version " + meta.getDriverMajorVersion());
        System.out.println("Driver Minor Version " + meta.getDriverMinorVersion());
        System.out.println("Database Major Version " + meta.getDatabaseMajorVersion());
        System.out.println("Database Minor Version " + meta.getDatabaseMinorVersion());

        System.out.println("delete");
        delete(con);
        
        System.out.println("insert");
        insert(con,meta.getDriverVersion());
        
        System.out.println("select");
        select(con);
              
        con.close();
    }

    public static void select(Connection con) throws SQLException {

        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

        PreparedStatement ps = con.prepareStatement("select * from datetest order by 1");
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {

            System.out.println("Date " + rs.getString(1) + " " + sdf.format(rs.getDate(2)));
            System.out.println("Timestamp " + rs.getString(1) + " " + rs.getTimestamp(2));
        }
        rs.close();
        ps.close();

    }

    public static void insert(Connection con, String version) throws SQLException {

        PreparedStatement ps = con.prepareStatement("insert into datetest values(?, ?)");
        Date cd = new Date(System.currentTimeMillis());
        ps.setString(1, version);
        ps.setDate(2, cd);
        ps.execute();
        con.commit();
        ps.close();

    }
    
    public static void delete(Connection con) throws SQLException {

        PreparedStatement ps = con.prepareStatement("delete from datetest");
        ps.execute();
        con.commit();
        ps.close();

    }
}
Related Posts
java.sql.SQLException: Could not commit with auto-commit set on When Using 12c JDBC Driver
getBoolean in 12c JDBC Driver and java.sql.SQLException: Fail to convert to internal representation

Saturday, October 1, 2016

Moving PDB from RAC CDB to Single Instance CDB

The RAC CDB and the non-RAC CDB are both 12.1.0.2 SE2 instances. Currently the single instance CDB doesn't have any PDB plugged into it. SE2 CDBs only support a single tenant due to license restrictions. The RAC CDB has a PDB available on both instances.
INSTANCE     NAME         OPEN_MODE  STATUS    RES
------------ ------------ ---------- --------- ---
stdcdb1      PDB$SEED     READ ONLY  NORMAL    NO
stdcdb1      STDPDB       READ WRITE NORMAL    NO
stdcdb2      PDB$SEED     READ ONLY  NORMAL    NO
stdcdb2      STDPDB       READ WRITE NORMAL    NO
Both CDBs have the same components, except for the RAC component which is set to option off on the single instance CDB. DB registry of the RAC CDB:
SQL> select comp_name,status from dba_registry order by 1;

COMP_NAME                                STATUS
---------------------------------------- -----------
JServer JAVA Virtual Machine             VALID
Oracle Application Express               VALID
Oracle Database Catalog Views            VALID
Oracle Database Java Packages            VALID
Oracle Database Packages and Types       VALID
Oracle Multimedia                        VALID
Oracle Real Application Clusters         VALID
Oracle Text                              VALID
Oracle Workspace Manager                 VALID
Oracle XDK                               VALID
Oracle XML Database                      VALID
DB Registry of the non-RAC CDB
SQL> select comp_name,status from dba_registry order by 1;

COMP_NAME                                STATUS
---------------------------------------- -----------
JServer JAVA Virtual Machine             VALID
Oracle Application Express               VALID
Oracle Database Catalog Views            VALID
Oracle Database Java Packages            VALID
Oracle Database Packages and Types       VALID
Oracle Multimedia                        VALID
Oracle Real Application Clusters         OPTION OFF
Oracle Text                              VALID
Oracle Workspace Manager                 VALID
Oracle XDK                               VALID
Oracle XML Database                      VALID
The process of moving the PDB is similar to the earlier post except for a few RAC and ASM related tasks. To summarize, before the PDB can be moved from the CDB it is currently plugged into, it must be stopped, unplugged and dropped (sketched below). The unplugging creates a PDB descriptor file in which the data file names appear in lower case while the rest of the path appears in upper case.
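A rough sketch of that sequence on the RAC CDB (the descriptor file path is an assumption; see the related posts for the full detail):
SQL> alter pluggable database stdpdb close immediate instances=all;
SQL> alter pluggable database stdpdb unplug into '/home/oracle/backup/stdpdb_desc.xml';
SQL> drop pluggable database stdpdb keep datafiles;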
cat stdpdb_desc.xml | grep path
      +DATA/STDCDB/238FA674AFE26E9DE0535E00A8C0F5A1/DATAFILE/system.291.894734477
      +DATA/STDCDB/238FA674AFE26E9DE0535E00A8C0F5A1/DATAFILE/sysaux.292.894734481
      +DATA/STDCDB/238FA674AFE26E9DE0535E00A8C0F5A1/TEMPFILE/temp.295.922121143
      +DATA/STDCDB/238FA674AFE26E9DE0535E00A8C0F5A1/DATAFILE/users.293.894734495
      +DATA/STDCDB/238FA674AFE26E9DE0535E00A8C0F5A1/DATAFILE/tapp.294.894734495
However, when the data files are copied out of ASM using asmcmd they will have their names in upper case (e.g. SYSTEM.291.894734477). As a result the plugging will fail as it is unable to identify the file name.
create pluggable database stdpdb as clone using '/home/oracle/stdcdb/datafiles/stdpdb_desc.xml'
*
ERROR at line 1:
ORA-19505: failed to identify file "/home/oracle/stdcdb/datafiles/system.291.894734477"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory

SQL> ! ls "/home/oracle/stdcdb/datafiles/system.291.894734477"
ls: /home/oracle/stdcdb/datafiles/system.291.894734477: No such file or directory

SQL> ! ls -l /home/oracle/stdcdb/datafiles/*
-rw-r--r-- 1 oracle oinstall      6925 Sep 12 10:47 /home/oracle/stdcdb/datafiles/stdpdb_desc.xml
-rw-r----- 1 oracle oinstall 550510592 Sep 12 10:44 /home/oracle/stdcdb/datafiles/SYSAUX.292.894734481
-rw-r----- 1 oracle oinstall 293609472 Sep 12 10:45 /home/oracle/stdcdb/datafiles/SYSTEM.291.894734477
-rw-r----- 1 oracle oinstall  20979712 Sep 12 10:45 /home/oracle/stdcdb/datafiles/TAPP.294.894734495
-rw-r----- 1 oracle oinstall   5251072 Sep 12 10:45 /home/oracle/stdcdb/datafiles/USERS.293.894734495
So before plugging in, rename the files to match the names in the descriptor file.
 mv SYSTEM.291.894734477 system.291.894734477
 mv SYSAUX.292.894734481 sysaux.292.894734481
 mv TAPP.294.894734495 tapp.294.894734495
 mv USERS.293.894734495 users.293.894734495
Also, in ASM data files and temp files have separate paths; as a result the temp file location must also be specified in the source_file_name_convert even though the file itself does not need to be copied to the new location. The temp file gets dropped when the PDB is dropped even though the keep datafiles option is used.
drop pluggable database stdpdb keep datafiles
Deleted Oracle managed file +DATA/STDCDB/238FA674AFE26E9DE0535E00A8C0F5A1/TEMPFILE/temp.295.922364799
Completed: drop pluggable database stdpdb keep datafiles


So the final command to plug the PDB into the single instance CDB will have two file name convert entries, one for data files and another for temp files.
create pluggable database stdpdb as clone using '/home/oracle/stdcdb/datafiles/stdpdb_desc.xml' 
source_file_name_convert=(
'+DATA/STDCDB/238FA674AFE26E9DE0535E00A8C0F5A1/DATAFILE','/home/oracle/stdcdb/datafiles',
'+DATA/STDCDB/238FA674AFE26E9DE0535E00A8C0F5A1/TEMPFILE','/home/oracle/stdcdb/datafiles') move;
Once the PDB is plugged in, open it.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 STDPDB                         MOUNTED

SQL> alter pluggable database stdpdb open;

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 STDPDB                         READ WRITE NO

SQL> select name from v$datafile;

NAME
-----------------------------------------------------------------------------------------------
/ssdoracle/oradata/STDCDB/datafile/o1_mf_undotbs1_cx2j8d05_.dbf
/ssdoracle/oradata/STDCDB/3C4F4864ACAE0C3DE0534200A8C05C7A/datafile/o1_mf_system_cxf6fp5g_.dbf
/ssdoracle/oradata/STDCDB/3C4F4864ACAE0C3DE0534200A8C05C7A/datafile/o1_mf_sysaux_cxf6fp5h_.dbf
/ssdoracle/oradata/STDCDB/3C4F4864ACAE0C3DE0534200A8C05C7A/datafile/o1_mf_users_cxf6fp5j_.dbf
/ssdoracle/oradata/STDCDB/3C4F4864ACAE0C3DE0534200A8C05C7A/datafile/o1_mf_tapp_cxf6fp5k_.dbf
Looking at the PDB violation view will show the warning related to the missing RAC option.
SQL> select name,status,action,message from PDB_PLUG_IN_VIOLATIONS where cause='OPTION';

NAME     STATUS    ACTION                                             MESSAGE
-------- --------- -------------------------------------------------- -------------------------------------------------------------------------------------------
STDPDB   PENDING   Fix the database option in the PDB or the CDB      Database option RAC mismatch: PDB installed version 12.1.0.2.0. CDB installed version NULL.
Log into the PDB and turn off the RAC option.
SQL> alter session set container=stdpdb;

SQL> show con_name

CON_NAME
---------
STDPDB

SQL> select comp_name,status from dba_registry order by 1;

COMP_NAME                                STATUS
---------------------------------------- -----------
JServer JAVA Virtual Machine             VALID
Oracle Application Express               VALID
Oracle Database Catalog Views            VALID
Oracle Database Java Packages            VALID
Oracle Database Packages and Types       VALID
Oracle Multimedia                        VALID
Oracle Real Application Clusters         VALID
Oracle Text                              VALID
Oracle Workspace Manager                 VALID
Oracle XDK                               VALID
Oracle XML Database                      VALID

11 rows selected.

SQL> exec dbms_registry.OPTION_OFF('RAC');

SQL> select comp_name,status from dba_registry order by 1;

COMP_NAME                                STATUS
---------------------------------------- -----------
JServer JAVA Virtual Machine             VALID
Oracle Application Express               VALID
Oracle Database Catalog Views            VALID
Oracle Database Java Packages            VALID
Oracle Database Packages and Types       VALID
Oracle Multimedia                        VALID
Oracle Real Application Clusters         OPTION OFF
Oracle Text                              VALID
Oracle Workspace Manager                 VALID
Oracle XDK                               VALID
Oracle XML Database                      VALID
Close and open the PDB (sketched below). Afterwards the PDB violation view will show the issue has been resolved.
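A minimal sketch of bouncing the PDB from the root container (assumed commands):
SQL> alter session set container=cdb$root;
SQL> alter pluggable database stdpdb close immediate;
SQL> alter pluggable database stdpdb open;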
SQL> select name,status,action,message from PDB_PLUG_IN_VIOLATIONS where cause='OPTION';

NAME     STATUS    ACTION                                             MESSAGE
-------- --------- -------------------------------------------------- -------------------------------------------------------------------------------------------
STDPDB   RESOLVED  Fix the database option in the PDB or the CDB      Database option RAC mismatch: PDB installed version 12.1.0.2.0. CDB installed version NULL.
This concludes the moving of the PDB from the RAC CDB to the single instance CDB.

Related Posts
Move a PDB Between Servers
Plugging a SE2 non-CDB as an EE PDB Using File Copying and Remote Link

Wednesday, September 28, 2016

Insert a File as BLOB Using DBMS_LOB

The following script can be used to insert a file as a BLOB into a table. The file to be inserted must reside in the location referred to by the database directory LOADF; the file name in this case is data.log. The file is inserted into a table called lobins which has a single column of BLOB type (the supporting objects are sketched below).
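A minimal sketch of creating the supporting objects (the directory path and the column name are assumptions):
SQL> create directory LOADF as '/home/oracle/load';
SQL> create table lobins (data blob);
The load script itself: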
declare

  file_name varchar2(100) := 'data.log';
  db_dir_name varchar2(100) := 'LOADF';

  dest_loc  BLOB := empty_blob();
  src_loc   BFILE := BFILENAME(db_dir_name, file_name);
  destoff number := 1;
  srcoff number :=1;

begin

  DBMS_LOB.OPEN(src_loc, DBMS_LOB.FILE_READONLY);
  DBMS_LOB.CREATETEMPORARY(lob_loc => dest_loc, cache => false);
  DBMS_LOB.OPEN(dest_loc, DBMS_LOB.LOB_READWRITE);
  DBMS_LOB.LOADBLOBFROMFILE (dest_loc,src_loc,DBMS_LOB.getLength(src_loc),destoff,srcoff);

  insert into lobins values (dest_loc);
  commit;

  DBMS_LOB.CLOSE(dest_loc);
  DBMS_LOB.CLOSE(src_loc);

  end;
  /
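To verify the load, the stored LOB length can be compared against the OS file size (using the column name assumed above):
SQL> select dbms_lob.getlength(data) from lobins;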

Sunday, September 18, 2016

Remote Cloning of a PDB

Similar to a non-CDB, a PDB too can be cloned over a remote link. In this case both the source and remote DBs are CDBs and one PDB is cloned onto the local CDB. As the first step create a TNS entry and a database link on the local DB. The remote PDB is called PDB1K.
PDB1KTNS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.88)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdb1k)
    )
  )

SQL> create database link pdb1k_link connect to  system identified by system using 'PDB1KTNS';
Validate the link by querying a view on the remote PDB
SQL> select name from v$pdbs@pdb1k_link;

NAME
------------------------------
PDB1K
If OMF is used nothing else is needed and the PDB can be cloned. However, in this case the data files of the remotely cloned PDBs are to be stored separately. To achieve that, set the db_create_file_dest parameter to the desired location with scope set to memory.
SQL> alter system set db_create_file_dest='/opt/app/oracle/oradata/remoteclones' scope=memory;

SQL> show parameter db_create

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string      /opt/app/oracle/oradata/remoteclones
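Before the clone can be created, the source PDB must be open in read only mode; a minimal sketch, run on the remote CDB (assumed commands):
SQL> alter pluggable database pdb1k close immediate;
SQL> alter pluggable database pdb1k open read only;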


With the source PDB in read only mode (refer to the Oracle documentation for the full list of prerequisites), create the PDB. The new PDB is named PDB1KRMT.
SQL> create pluggable database pdb1krmt from pdb1k@pdb1k_link;

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ONEPDB                         READ WRITE NO
         4 TWOPDB                         READ WRITE NO
         5 PDB1KRMT                       MOUNTED
Finally open the PDB
SQL> alter pluggable database pdb1krmt open;

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ONEPDB                         READ WRITE NO
         4 TWOPDB                         READ WRITE NO
         5 PDB1KRMT                       READ WRITE NO
Verify the PDB data files are created in the intended location
SQL>  select name from v$datafile;

NAME
------------------------------------------------------------------------------------------------------------------------
/opt/app/oracle/oradata/CGCDB/datafile/o1_mf_undotbs1_cvchzywd_.dbf
/opt/app/oracle/oradata/remoteclones/CGCDB/3B5F12ED0B6F4762E0536300A8C0A85F/datafile/o1_mf_system_cwfq6dd5_.dbf
/opt/app/oracle/oradata/remoteclones/CGCDB/3B5F12ED0B6F4762E0536300A8C0A85F/datafile/o1_mf_sysaux_cwfq6ddt_.dbf
/opt/app/oracle/oradata/remoteclones/CGCDB/3B5F12ED0B6F4762E0536300A8C0A85F/datafile/o1_mf_pdb1ktbs_cwfq6ddv_.dbf
Using the USER_TABLESPACES clause available in 12.1.0.2, it is possible to clone the new PDB with only a subset of tablespaces. Assume that the original PDB has 3 application specific tablespaces.
SQL> select tablespace_name,status from dba_tablespaces;

TABLESPACE_NAME                STATUS
------------------------------ ---------
SYSTEM                         ONLINE
SYSAUX                         ONLINE
TEMP                           ONLINE
APP1                           ONLINE
APP2                           ONLINE
APP3                           ONLINE
Only 2 of them are wanted in the newly cloned PDB. It is possible to include just these two tablespaces in the USER_TABLESPACES clause, excluding all other tablespaces.
create pluggable database pdb1krmt from pdb1k@pdb1k_link USER_TABLESPACES=('APP1','APP3');
The excluded tablespace name still exists but its status will be offline, with the data file shown as missing as well.
SQL> select tablespace_name,status from dba_tablespaces;

TABLESPACE_NAME                STATUS
------------------------------ ---------
SYSTEM                         ONLINE
SYSAUX                         ONLINE
TEMP                           ONLINE
APP1                           ONLINE
APP2                           OFFLINE
APP3                           ONLINE

SQL> select tablespace_name,status,file_name from dba_data_files;

TABLESPACE_NAME                STATUS    FILE_NAME
------------------------------ --------- ----------------------------------------------------------------------------------------------------
SYSTEM                         AVAILABLE /opt/app/oracle/oradata/CGCDB/3BED1E0F3E63672CE0536300A8C0356F/datafile/o1_mf_system_cx0bpkwo_.dbf
SYSAUX                         AVAILABLE /opt/app/oracle/oradata/CGCDB/3BED1E0F3E63672CE0536300A8C0356F/datafile/o1_mf_sysaux_cx0bpkwy_.dbf
APP1                           AVAILABLE /opt/app/oracle/oradata/CGCDB/3BED1E0F3E63672CE0536300A8C0356F/datafile/o1_mf_app1_cx0bpkwz_.dbf
APP3                           AVAILABLE /opt/app/oracle/oradata/CGCDB/3BED1E0F3E63672CE0536300A8C0356F/datafile/o1_mf_app3_cx0bpkx1_.dbf
APP2                           AVAILABLE /opt/app/oracle/product/12.1.0/dbhome_2/dbs/MISSING00122
The offline tablespace can be dropped to clean up the new PDB.
SQL> drop tablespace app2 including contents and datafiles cascade constraints;
The same can be done when plugging in a non-CDB as a PDB as well.
SQL> create pluggable database stdpdb from std12c1@std_link USER_TABLESPACES=('APP1','APP3');
Run the post cloning steps (sketched below) and verify the tablespace list.
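The post cloning step is the same noncdb_to_pdb.sql run shown in the related post on plugging a SE2 non-CDB; a brief sketch:
SQL> alter session set container=stdpdb;
SQL> @?/rdbms/admin/noncdb_to_pdb.sql
SQL> alter pluggable database open;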
SQL> select tablespace_name,status from dba_tablespaces order by 2,1;

TABLESPACE_NAME                STATUS
------------------------------ ---------
APP2                           OFFLINE
TOOLS                          OFFLINE
UNDOTBS1                       OFFLINE
USERS                          OFFLINE
APP1                           ONLINE
APP3                           ONLINE
SYSAUX                         ONLINE
SYSTEM                         ONLINE
TEMP                           ONLINE

SQL> select tablespace_name,status,file_name from dba_data_files;

TABLESPACE_NAME                STATUS    FILE_NAME
------------------------------ --------- ----------------------------------------------------------------------------------------------------
SYSTEM                         AVAILABLE /opt/app/oracle/oradata/CGCDB/3BEC00E8AA7862ECE0536300A8C0B5F7/datafile/o1_mf_system_cx060v17_.dbf
SYSAUX                         AVAILABLE /opt/app/oracle/oradata/CGCDB/3BEC00E8AA7862ECE0536300A8C0B5F7/datafile/o1_mf_sysaux_cx060v18_.dbf
USERS                          AVAILABLE /opt/app/oracle/product/12.1.0/dbhome_2/dbs/MISSING00114
TOOLS                          AVAILABLE /opt/app/oracle/product/12.1.0/dbhome_2/dbs/MISSING00115
APP1                           AVAILABLE /opt/app/oracle/oradata/CGCDB/3BEC00E8AA7862ECE0536300A8C0B5F7/datafile/o1_mf_app1_cx060v1b_.dbf
APP2                           AVAILABLE /opt/app/oracle/product/12.1.0/dbhome_2/dbs/MISSING00117
APP3                           AVAILABLE /opt/app/oracle/oradata/CGCDB/3BEC00E8AA7862ECE0536300A8C0B5F7/datafile/o1_mf_app3_cx060v1c_.dbf
The undo tablespace within the PDB cannot be removed (2067414.1).
SQL> drop tablespace UNDOTBS1 including contents and datafiles cascade constraints;
drop tablespace UNDOTBS1 including contents and datafiles cascade constraints
*
ERROR at line 1:
ORA-30013: undo tablespace 'UNDOTBS1' is currently in use
The offline undo tablespace in this case is the undo tablespace of the non-CDB. This is because the CDB undo tablespace name and the cloned non-CDB undo tablespace name are the same, and undo is not local to the PDB but common to the entire CDB. It was not possible to get rid of the UNDOTBS1 offline status even after switching the default undo tablespace of the CDB to a different undo tablespace.
Note that the USER_TABLESPACES clause does not apply to the SYSTEM, SYSAUX, or TEMP tablespaces.

Related Posts
Move a PDB Between Servers
Plugging a SE2 non-CDB as an EE PDB Using File Copying and Remote Link

Thursday, September 8, 2016

Plugging a SE2 non-CDB as an EE PDB Using File Copying and Remote Link

This post lists two methods for plugging a non-CDB on standard edition 2 (SE2) in as a pluggable database on an enterprise edition (EE) CDB. One uses an XML description file, which also involves copying the data files (if the EE database resides on another server). The other method uses a remote link. The reverse of this process, that is EE plugged into SE2, is not possible according to 1631260.1. This is because SE2 has fewer options than EE. But as long as the options in the SE2 are a subset of EE, moving from SE2 to EE is possible (2020172.1). Current options on the SE2 are
SQL> select comp_id,comp_name,status,version from dba_registry;

COMP_ID    COMP_NAME                                STATUS   VERSION
---------- ---------------------------------------- -------- ----------
CONTEXT    Oracle Text                              VALID    12.1.0.2.0
OWM        Oracle Workspace Manager                 VALID    12.1.0.2.0
XDB        Oracle XML Database                      VALID    12.1.0.2.0
CATALOG    Oracle Database Catalog Views            VALID    12.1.0.2.0
CATPROC    Oracle Database Packages and Types       VALID    12.1.0.2.0
Options on the EE are
SQL> select comp_id,comp_name,status,version from dba_registry;

COMP_ID    COMP_NAME                                STATUS     VERSION
---------- ---------------------------------------- ---------- ---------------
DV         Oracle Database Vault                    VALID      12.1.0.2.0
APEX       Oracle Application Express               VALID      4.2.5.00.08
OLS        Oracle Label Security                    VALID      12.1.0.2.0
SDO        Spatial                                  VALID      12.1.0.2.0
ORDIM      Oracle Multimedia                        VALID      12.1.0.2.0
CONTEXT    Oracle Text                              VALID      12.1.0.2.0
OWM        Oracle Workspace Manager                 VALID      12.1.0.2.0
XDB        Oracle XML Database                      VALID      12.1.0.2.0
CATALOG    Oracle Database Catalog Views            VALID      12.1.0.2.0
CATPROC    Oracle Database Packages and Types       VALID      12.1.0.2.0
JAVAVM     JServer JAVA Virtual Machine             VALID      12.1.0.2.0
XML        Oracle XDK                               VALID      12.1.0.2.0
CATJAVA    Oracle Database Java Packages            VALID      12.1.0.2.0
APS        OLAP Analytic Workspace                  VALID      12.1.0.2.0
XOQ        Oracle OLAP API                          VALID      12.1.0.2.0
RAC        Oracle Real Application Clusters         OPTION OFF 12.1.0.2.0
Both SE2 and EE have been patched to the same level.

Method 1. File Copying
Start the SE2 in read only mode, create the non-CDB descriptor file and shut down.
SQL> startup mount exclusive;
SQL> alter database open read only;
SQL> exec dbms_pdb.describe(pdb_descr_file=>'/home/oracle/backup/se2_noncdb_desc.xml');
SQL> shutdown immediate;
Copy the descriptor file and the data files to the server where the EE CDB resides.
scp -C se2_noncdb_desc.xml  192.168.0.99:/home/oracle/backup/

cd /data/oradata/STD12C1/datafile/
scp -C * 192.168.0.99:/home/oracle/backup/

o1_mf_sysaux_byjg3ypy_.dbf                         100%  550MB  25.0MB/s   00:22
o1_mf_system_byjg3q5k_.dbf                         100%  700MB  25.0MB/s   00:28
...
On the EE CDB run a compatibility check against the descriptor file
set serveroutput on
declare
    compa boolean;
    begin
    compa := dbms_pdb.CHECK_PLUG_COMPATIBILITY(pdb_descr_file=>'/home/oracle/backup/se2_noncdb_desc.xml');
 if compa = true then
      dbms_output.put_line('compatible');
 else
    dbms_output.put_line('no compatible');
 end if;
end;
/
no compatible
As seen from the output the compatibility check fails. Look in PDB_PLUG_IN_VIOLATIONS and if the cause of the warnings is missing options then the incompatibility is ignorable (2020172.1).
SQL> select name,cause,status,type,action from PDB_PLUG_IN_VIOLATIONS where name='STDPDB' AND STATUS <> 'RESOLVED';

NAME     CAUSE      STATUS    TYPE      ACTION
-------- ---------- --------- --------- ------------------------------------------------------------
STDPDB   OPTION     PENDING   WARNING   Fix the database option in the PDB or the CDB
STDPDB   OPTION     PENDING   WARNING   Fix the database option in the PDB or the CDB
STDPDB   APEX       PENDING   WARNING   Please contact Oracle Support.

SQL> select status,message from PDB_PLUG_IN_VIOLATIONS where name='STDPDB' AND STATUS <> 'RESOLVED';

STATUS    MESSAGE
--------- ----------------------------------------------------------------------------------------------------
PENDING   Database option APS mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0.
PENDING   Database option CATJAVA mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0.
PENDING   Database option DV mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0.
PENDING   Database option JAVAVM mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0.
PENDING   Database option OLS mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0.
PENDING   Database option ORDIM mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0.
PENDING   Database option SDO mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0.
PENDING   Database option XML mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0.
PENDING   Database option XOQ mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0.
PENDING   APEX mismatch: PDB installed version NULL CDB installed version 4.2.5.00.08
Plug the SE2 non-CDB using the descriptor file
SQL> create pluggable database se2pdb as clone using '/home/oracle/backup/se2_noncdb_desc.xml' 
source_file_name_convert=('/data/oradata/STD12C1/datafile','/home/oracle/backup') move;

Pluggable database created.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ONEPDB                         READ WRITE NO
         4 TWOPDB                         READ WRITE NO
         5 SE2PDB                         MOUNTED
Change to the PDB container and run the noncdb_to_pdb.sql script.
SQL> ALTER SESSION SET CONTAINER=se2pdb;
SQL> show con_name;

CON_NAME
------------------------------
SE2PDB

SQL> @?/rdbms/admin/noncdb_to_pdb.sql
When the script completes open the PDB
SQL> alter pluggable database open;

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         5 SE2PDB                         READ WRITE NO
The PDB opens without any restriction and is ready to use. The EE CDB was set up using OMF; as such, the plugged-in SE2 DB's data files and temp file will be created in an OMF structure under a new GUID.
SQL> select name from v$tempfile;

NAME
----------------------------------------------------------------------------------------------------
/opt/app/oracle/oradata/CGCDB/3B61A02E318577C4E0536300A8C051DC/datafile/o1_mf_temp_cwg1lqd0_.dbf

1 row selected.

SQL> select name from v$datafile;

NAME
----------------------------------------------------------------------------------------------------
/opt/app/oracle/oradata/CGCDB/datafile/o1_mf_undotbs1_cvchzywd_.dbf
/opt/app/oracle/oradata/CGCDB/3B61A02E318577C4E0536300A8C051DC/datafile/o1_mf_system_cwg1lqcv_.dbf
/opt/app/oracle/oradata/CGCDB/3B61A02E318577C4E0536300A8C051DC/datafile/o1_mf_sysaux_cwg1lqcz_.dbf
/opt/app/oracle/oradata/CGCDB/3B61A02E318577C4E0536300A8C051DC/datafile/o1_mf_users_cwg1lqd0_.dbf
/opt/app/oracle/oradata/CGCDB/3B61A02E318577C4E0536300A8C051DC/datafile/o1_mf_tools_cwg1lqd1_.dbf
The files associated with the undo and temp tablespaces on the non-CDB will not be used in the plugged in PDB.

Method 2. Remote Link
In this method a remote link is created from the EE CDB to the SE2 non-CDB and used to create the PDB.
stdtns =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.66)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = std12c1)
    )
  )

SQL> create database link std_link connect to system identified by system using 'STDTNS';
Database link created.
Validate the link by running a query against the remote SE2 DB
SQL> select instance_name from v$instance@std_link;

INSTANCE_NAME
----------------
std12c1
As OMF is used, no other steps are needed to create the PDB. Put the source non-CDB into read only mode (sketched below, same as in method 1) and then run the create PDB statement.
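A sketch of putting the non-CDB into read only mode, using the same commands shown in method 1:
SQL> shutdown immediate;
SQL> startup mount exclusive;
SQL> alter database open read only;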
SQL>  create pluggable database stdpdb from std12c1@std_link;
Pluggable database created.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ONEPDB                         READ WRITE NO
         4 TWOPDB                         READ WRITE NO
         5 STDPDB                         MOUNTED
Change to the newly created PDB, run the noncdb_to_pdb script and finally open the PDB
SQL> ALTER SESSION SET CONTAINER=stdpdb;

SQL> show con_name;

CON_NAME
------------------------------
STDPDB

SQL> @?/rdbms/admin/noncdb_to_pdb.sql

SQL> alter pluggable database open;

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ONEPDB                         READ WRITE NO
         4 TWOPDB                         READ WRITE NO
         5 STDPDB                         READ WRITE NO
Similar to method 1, plugging violations due to missing options are ignorable.
Unlike method 1, in method 2 the undo tablespace within the PDB will be offline.
SQL> select tablespace_name,status from dba_tablespaces order by 2,1;

TABLESPACE_NAME                STATUS
------------------------------ ---------
UNDOTBS1                       OFFLINE
This is due to the fact that both the non-CDB and the CDB where it's plugged in have the same name for the undo tablespace, and undo tablespaces are not local to a PDB but common to the entire CDB. Unfortunately it's not possible to drop this within the PDB (2067414.1). However, querying the data files within the PDB shows the CDB's undo tablespace's data file.
SQL> select name from v$datafile;

NAME
----------------------------------------------------------------------------------------------------
/opt/app/oracle/oradata/CGCDB/datafile/o1_mf_undotbs1_cvchzywd_.dbf
/opt/app/oracle/oradata/CGCDB/3BEEE6493046715CE0536300A8C0B4F0/datafile/o1_mf_system_cx0l5krb_.dbf
/opt/app/oracle/oradata/CGCDB/3BEEE6493046715CE0536300A8C0B4F0/datafile/o1_mf_sysaux_cx0l5krf_.dbf
Useful Metalink Notes
"OPTION WARNING Database option mismatch: PDB installed version NULL" in PDB_PLUG_IN_VIOLATIONS [ID 2020172.1]
Unplug an Enterprise Edition PDB and Plug into a Standard Edition CDB says not compatible when using DBMS_PDB.CHECK_PLUG_COMPATIBILITY [ID 1631260.1]

Related Post
Move a PDB Between Servers

Thursday, September 1, 2016

Move a PDB Between Servers

This post lists the steps for moving a PDB from its current CDB to another CDB on a different server. The destination CDB and the PDB are patched to the same level.
Destination CDB patches
SQL> select con_id,patch_id,action,status,description from cdb_registry_sqlpatch where con_id = 1 order by action_time;

    CON_ID   PATCH_ID ACTION          STATUS          DESCRIPTION
---------- ---------- --------------- --------------- ----------------------------------------------------------------------------------------------------
         1   21555660 APPLY           SUCCESS         Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)
         1   21359755 APPLY           SUCCESS         Database Patch Set Update : 12.1.0.2.5 (21359755)
         1   21555660 ROLLBACK        SUCCESS         Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)
         1   22674709 APPLY           SUCCESS         Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)
         1   22291127 APPLY           SUCCESS         Database Patch Set Update : 12.1.0.2.160419 (22291127)
PDB patches
SQL> select con_id,patch_id,action,status,description from cdb_registry_sqlpatch where con_id = 5 order by action_time;

    CON_ID   PATCH_ID ACTION          STATUS          DESCRIPTION
---------- ---------- --------------- --------------- ----------------------------------------------------------------------------------------------------
         5   21555660 APPLY           SUCCESS         Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)
         5   21359755 APPLY           SUCCESS         Database Patch Set Update : 12.1.0.2.5 (21359755)
         5   21555660 ROLLBACK        SUCCESS         Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)
         5   22674709 APPLY           SUCCESS         Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)
         5   22291127 APPLY           SUCCESS         Database Patch Set Update : 12.1.0.2.160419 (22291127)
As seen from the above outputs, both the destination CDB and the PDB have gone through the same set of patches and are currently at the same patch level.
The source CDB has three PDBs and the PDB called "PDBTHREE" will be moved.
SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ONEPDB                         READ WRITE NO
         4 TWOPDB                         READ WRITE NO
         5 PDBTHREE                       READ WRITE NO
Close the PDB and unplug it from the source CDB, specifying the metadata XML file.
SQL> alter pluggable database pdbthree close immediate;
Pluggable database altered.

SQL> alter pluggable database pdbthree unplug into '/home/oracle/backup/pdbthree_desc.xml';
Pluggable database altered.

SQL> select pdb_name, status from cdb_pdbs;

PDB_NAME   STATUS
---------- ---------
PDBTHREE   UNPLUGGED
PDB$SEED   NORMAL
ONEPDB     NORMAL
TWOPDB     NORMAL

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ONEPDB                         READ WRITE NO
         4 TWOPDB                         READ WRITE NO
         5 PDBTHREE                       MOUNTED
Drop the PDB with the keep datafiles option (the default).
SQL> drop pluggable database pdbthree keep datafiles;
Pluggable database dropped.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ONEPDB                         READ WRITE NO
         4 TWOPDB                         READ WRITE NO
Copy the files over to the destination server. The location of the files on the destination is not the same as the source location.
[oracle@hpc5 datafile]$ pwd
/opt/app/oracle/oradata/APCDB/3AE6245AE81713AFE0536300A8C02D00/datafile

[oracle@hpc5 datafile]$ scp -C * hpc1:/home/oracle/backup/pdbthree/
oracle@192.168.0.66's password:
o1_mf_pdbthree_cvy3w4f7_.dbf                                                                                                                                100%   10MB  10.0MB/s   00:00
o1_mf_sysaux_cvxvnvnw_.dbf                                                                                                                                  100%  580MB  24.2MB/s   00:24
o1_mf_system_cvxvnvnl_.dbf                                                                                                                                  100%  260MB  18.6MB/s   00:14
On the destination CDB run the compatibility check. In this case the output says that the PDB is not compatible.
SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  2    compatible CONSTANT VARCHAR2(3) :=
  3      CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(pdb_descr_file => '/home/oracle/backup/pdbthree_desc.xml', pdb_name => 'PDBTHREE')
  4        WHEN TRUE THEN 'YES'
  5        ELSE 'NO'
  6      END;
  7  BEGIN
  8    DBMS_OUTPUT.PUT_LINE(compatible);
  9  END;
 10  /
NO
Checking the plugin violation view shows the following (the last two lines have been shortened)
SQL> select name,cause,status,type,action from PDB_PLUG_IN_VIOLATIONS;

NAME       CAUSE           STATUS    TYPE      ACTION
---------- --------------- --------- --------- --------------------------------------------------
PDBTHREE   SQL patch error PENDING   ERROR     Call datapatch to reinstall
PDBTHREE   SQL patch error PENDING   ERROR     Call datapatch to reinstall
PDBTHREE   SQL Patch       PENDING   ERROR     Call datapatch to install in the PDB or the CDB
PDBTHREE   SQL Patch       PENDING   ERROR     Call datapatch to install in the PDB or the CDB

SQL> select status,message from PDB_PLUG_IN_VIOLATIONS;

STATUS    MESSAGE
--------- ------------------------------------------------------------------------------------------------------------------------------------------------------
PENDING    (PSU bundle patch 160419 (Database Patch Set Update : 12.1.0.2.160419 (22291127)): APPLY SUCCESS):  with status  in the PDB.
PENDING    (SQL patch ID/UID 22674709/20057886 (Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)): APPLY SUCCESS):  with status  in the PDB.
PENDING   ... Update : 12.1.0.2.160419 (22291127)): Installed in the CDB but not in the PDB.
PENDING   ... Oracle JavaVM Component (Apr2016)): Installed in the CDB but not in the PDB.
Since it has been confirmed that both the destination CDB and the PDB being plugged in are patched to the same level, this violation can be ignored.
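Had the patch levels genuinely differed, the suggested action in the violations would apply: run datapatch from the destination Oracle home with the PDB open, so the SQL portion of the patches is applied inside it. A minimal sketch, assuming the standard datapatch location:
$ cd $ORACLE_HOME/OPatch
$ ./datapatch -verbose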



The destination uses OMF but different paths for the PDBs and the root container files. This path is also different from the OMF path at the source CDB. Log in to the CDB, set the db_create_file_dest used for PDBs (if a separate OMF path is used for PDBs) and create the PDB with the clone and move clauses. source_file_name_convert specifies the file mapping, where '/opt/app/oracle/oradata/APCDB/3AE6245AE81713AFE0536300A8C02D00/datafile' is the PDB data file path at the source and '/home/oracle/backup/pdbthree' is the location the data files were copied to.
SQL> alter system set db_create_file_dest='/ssdoracle/oradata/pdbs' scope=memory;
System altered.

SQL> create pluggable database pdbthree as clone using '/home/oracle/backup/pdbthree_desc.xml' 
source_file_name_convert=('/opt/app/oracle/oradata/APCDB/3AE6245AE81713AFE0536300A8C02D00/datafile','/home/oracle/backup/pdbthree') move;
Pluggable database created.

SQL> select pdb_name, status from cdb_pdbs;

PDB_NAME        STATUS
--------------- ---------
PDBTHREE        NEW
PDB$SEED        NORMAL
ONEPDB          NORMAL
TWOPDB          NORMAL

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ONEPDB                         READ WRITE NO
         4 TWOPDB                         READ WRITE NO
         5 PDBTHREE                       MOUNTED

SQL> select con_id,name from v$datafile where con_id=5;

    CON_ID NAME
---------- ----------------------------------------------------------------------------------------------------
         5 /ssdoracle/oradata/pdbs/APCDB/3AE70A7E4DC630F7E0534200A8C0F281/datafile/o1_mf_system_cvxzo517_.dbf
         5 /ssdoracle/oradata/pdbs/APCDB/3AE70A7E4DC630F7E0534200A8C0F281/datafile/o1_mf_sysaux_cvxzo529_.dbf
         5 /ssdoracle/oradata/pdbs/APCDB/3AE70A7E4DC630F7E0534200A8C0F281/datafile/o1_mf_pdbthree_cvxzo52b_.dbf
Finally open the PDB
SQL> alter pluggable database pdbthree open;
Pluggable database altered.

SQL>  select pdb_name, status from cdb_pdbs;

PDB_NAME        STATUS
--------------- ---------
PDBTHREE        NORMAL
PDB$SEED        NORMAL
ONEPDB          NORMAL
TWOPDB          NORMAL

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ONEPDB                         READ WRITE NO
         4 TWOPDB                         READ WRITE NO
         5 PDBTHREE                       READ WRITE NO
The CDB doesn't list any more plugin violations
SQL> select name,cause,status,type,action from PDB_PLUG_IN_VIOLATIONS;
no rows selected
Verify that the patch levels are reflected in the plugged-in PDB
SQL> alter session set container=pdbthree;
Session altered.

SQL> select con_id,patch_id,action,status,description from cdb_registry_sqlpatch where con_id = 5 order by action_time;

    CON_ID   PATCH_ID ACTION     STATUS          DESCRIPTION
---------- ---------- ---------- --------------- ----------------------------------------------------------------------------------------------------
         5   21555660 APPLY      SUCCESS         Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)
         5   21359755 APPLY      SUCCESS         Database Patch Set Update : 12.1.0.2.5 (21359755)
         5   21555660 ROLLBACK   SUCCESS         Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)
         5   22674709 APPLY      SUCCESS         Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)
         5   22291127 APPLY      SUCCESS         Database Patch Set Update : 12.1.0.2.160419 (22291127)
This concludes the steps for moving a PDB between servers.

Related Posts
Remote Cloning of a PDB
Plugging a SE2 non-CDB as an EE PDB Using File Copying and Remote Link

Sunday, August 14, 2016

Resetlogs Fails with ORA-00349

During a database restore on a server separate from the source DB, the following error occurred during open resetlogs.
RMAN> alter database open resetlogs;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of sql statement command at 08/10/2016 18:48:18
ORA-00349: failure obtaining block size for '/data/oradata/apcdb/redo01.log'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 9
The source database was on a non-OMF setup and the DB was restored on OMF paths. As such, the original redo directory paths do not exist on the restore server. The typical solution is to clear the unarchived log file groups, but that option failed in this case.
SQL> alter database clear unarchived logfile group 1 ;
alter database clear unarchived logfile group 1
*
ERROR at line 1:
ORA-00349: failure obtaining block size for '/data/oradata/apcdb/redo01.log'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 9
However it is possible to drop and add log file groups
SQL> alter database drop logfile group 1;
Database altered.

SQL> alter database add logfile group 1;
Database altered.

SQL>  alter database drop logfile group 3;
Database altered.

SQL> alter database add logfile group 3;
Database altered.
Yet it was not possible to drop and re-add one log group, as it was considered the current log
SQL> alter database drop logfile group 2;
alter database drop logfile group 2
*
ERROR at line 1:
ORA-01623: log 2 is current log for instance apcdb (thread 1) - cannot drop
ORA-00312: online log 2 thread 1: '/data/oradata/apcdb/redo02.log'

SQL> select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 UNUSED
         3 UNUSED
         2 CLEARING_CURRENT


The solution in this case is to back up the control file to trace and edit the log file locations.
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/tmp/control.sql' resetlogs ;
In the trace file the control file create statement was as follows
CREATE CONTROLFILE REUSE DATABASE "APCDB" RESETLOGS  ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 1024
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 (
    '/opt/app/oracle/oradata/APCDB/onlinelog/o1_mf_1_ctptbqxp_.log',
    '/opt/app/oracle/flash_recovery_area/APCDB/onlinelog/o1_mf_1_ctptbr4t_.log'
  ) SIZE 100M BLOCKSIZE 512,
  GROUP 2 '/data/oradata/apcdb/redo02.log'  SIZE 50M BLOCKSIZE 512,
  GROUP 3 (
    '/opt/app/oracle/oradata/APCDB/onlinelog/o1_mf_3_ctptc5mn_.log',
    '/opt/app/oracle/flash_recovery_area/APCDB/onlinelog/o1_mf_3_ctptc5tb_.log'
  ) SIZE 100M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
  '/opt/app/oracle/oradata/APCDB/datafile/o1_mf_system_ctpskmcr_.dbf',
  '/opt/app/oracle/oradata/APCDB/datafile/o1_mf_sysaux_ctpskmcd_.dbf',
  '/opt/app/oracle/oradata/APCDB/datafile/o1_mf_undotbs1_ctpskmgg_.dbf',
  '/opt/app/oracle/oradata/APCDB/datafile/o1_mf_system_ctpsnlq7_.dbf',
  '/opt/app/oracle/oradata/APCDB/datafile/o1_mf_users_ctpskmhd_.dbf',
  '/opt/app/oracle/oradata/APCDB/datafile/o1_mf_sysaux_ctpsnlp4_.dbf',
  '/opt/app/oracle/oradata/APCDB/datafile/o1_mf_system_ctpsm0jd_.dbf',
  '/opt/app/oracle/oradata/APCDB/datafile/o1_mf_sysaux_ctpsm0hp_.dbf',
  '/opt/app/oracle/oradata/APCDB/datafile/o1_mf_system_ctpsmsmb_.dbf',
  '/opt/app/oracle/oradata/APCDB/datafile/o1_mf_sysaux_ctpsmsm4_.dbf',
  '/opt/app/oracle/oradata/APCDB/datafile/o1_mf_test_ctpsm0l1_.dbf',
  '/opt/app/oracle/oradata/APCDB/datafile/o1_mf_test_ctpsmsnt_.dbf',
  '/opt/app/oracle/oradata/APCDB/datafile/o1_mf_newtest_ctpsm0m8_.dbf'
CHARACTER SET AL32UTF8
;
As seen in the trace, log file groups 1 and 3 use the new OMF paths but group 2 still refers to the source DB path where the backups were taken. Two options are available to deal with group 2. One is to replace the group 2 entry with an OMF path, in this case replacing
GROUP 2 '/data/oradata/apcdb/redo02.log'  SIZE 50M BLOCKSIZE 512,
with
GROUP 2 (
    '/opt/app/oracle/oradata/APCDB/onlinelog/o1_mf_2_captbqxp_.log',
    '/opt/app/oracle/flash_recovery_area/APCDB/onlinelog/o1_mf_2_captbr4t_.log'
  ),
The second option is to omit group 2 completely and add it back after the control file is created, before resetlogs is run again. Once the control file create section is edited, start the database in nomount and run the script
SQL> startup nomount;
SQL> @/tmp/control.sql
This will leave the DB in mount mode. If group 2 was omitted from the control file create statement, add it now.
SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME     CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- --------- ------------ --------- ----------
         3          1          0  104857600        512          2 YES CURRENT                      0                      0                    0
         1          1          0  104857600        512          2 YES UNUSED                       0                      0                    0

SQL> alter database add logfile group 2;
Database altered.

SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME     CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- --------- ------------ --------- ----------
         1          1          0  104857600        512          2 YES UNUSED                       0                      0                    0
         3          1          0  104857600        512          2 YES CURRENT                      0                      0                    0
         2          1          0  104857600        512          2 YES UNUSED                       0                      0                    0
Run a recovery using backup control file until cancel.
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL
ORA-00279: change 4285165 generated at 08/10/2016 15:52:55 needed for thread 1
ORA-00289: suggestion :
/opt/app/oracle/flash_recovery_area/APCDB/archivelog/2016_08_10/o1_mf_1_206_%u_.
arc
ORA-00280: change 4285165 for thread 1 is in sequence #206

Specify log: {=suggested | filename | AUTO | CANCEL}
cancel
Media recovery cancelled.
Finally run the open resetlogs
SQL> alter database open resetlogs;
Database altered.
Useful metalink note
ALTER DATABASE OPEN RESETLOGS fails with ORA-00392 [ID 1352133.1]

Monday, August 1, 2016

GUID Directory Not Created for PDBs During Full Restore in 12.1.0.2

Each database (non-CDB, CDB, PDB) on 12c has a globally unique identifier (GUID) associated with it. This GUID will not change throughout the life of the container/non-CDB. The 12c Admin guide says "the PDB GUID is primarily used to generate names for directories that store the PDB's files, including both Oracle Managed Files directories and non-Oracle Managed Files directories". But it seems that during an RMAN restore the PDB-related data files are not restored into a directory structure with the GUID. There is a variant of this issue where "RMAN restore does not create GUID directory for pre-12c database" (16874123.8), which is fixed in 12.1.0.2 according to 1683802.1. The version used for testing is 12.1.0.2, so it is not clear whether what's observed is a new bug or a different manifestation of bug 16874123.
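For reference, the GUID of each container can be queried directly; a quick sketch (output omitted here), whose value is what appears in the OMF directory names below:
SQL> select con_id, name, guid from v$pdbs order by con_id;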
The source database has 3 PDBs (not counting the seed PDB) and data files are contained within a GUID directory.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDBONE                         READ WRITE NO
         4 PDBTWO                         MOUNTED
         5 PDBTHREE                       MOUNTED

SQL> select con_id,name from v$datafile order by 1;

    CON_ID NAME
---------- ----------------------------------------------------------------------------------------------------
         1 /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_system_bocndoww_.dbf
         1 /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_rootbs_bq3ngj79_.dbf
         1 /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_users_bocng186_.dbf
         1 /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_undotbs1_bocnf2j0_.dbf
         1 /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_sysaux_bocndyz0_.dbf
         2 /opt/app/oracle/oradata/CDB12C/161DBD44EAAA7AC0E0535500A8C01F27/datafile/o1_mf_system_bocnds90_.dbf
         2 /opt/app/oracle/oradata/CDB12C/161DBD44EAAA7AC0E0535500A8C01F27/datafile/o1_mf_sysaux_bocnf0mr_.dbf
         3 /opt/app/oracle/oradata/CDB12C/18037D5BA62802CEE0535500A8C0EC8F/datafile/o1_mf_sysaux_bqc9rzvz_.dbf
         3 /opt/app/oracle/oradata/CDB12C/18037D5BA62802CEE0535500A8C0EC8F/datafile/o1_mf_system_bqc9rzvt_.dbf
         3 /opt/app/oracle/oradata/CDB12C/18037D5BA62802CEE0535500A8C0EC8F/datafile/o1_mf_test_bqc9yss9_.dbf
         4 /opt/app/oracle/oradata/CDB12C/1803968083DD030DE0535500A8C08A81/datafile/o1_mf_sysaux_bqcb6642_.dbf
         4 /opt/app/oracle/oradata/CDB12C/1803968083DD030DE0535500A8C08A81/datafile/o1_mf_system_bqcb663x_.dbf
         5 /opt/app/oracle/oradata/CDB12C/3643E46698E543F4E0535D00A8C0BDF3/datafile/o1_mf_system_cq2d54cz_.dbf
         5 /opt/app/oracle/oradata/CDB12C/3643E46698E543F4E0535D00A8C0BDF3/datafile/o1_mf_sysaux_cq2d54dx_.dbf
A full backup of the database is taken and restored on another server identical to the source DB (OS, Oracle version including patch level, etc.). The db_create_file_dest parameter is the same in both the source and the restored DB's pfiles.
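The backup commands themselves are not shown here; the full backup was along the following lines (formats and paths are illustrative, not the exact commands used):
RMAN> backup database format '/home/oracle/backup/bkp%U';
RMAN> backup archivelog all format '/home/oracle/backup/arch%U';
RMAN> backup current controlfile format '/home/oracle/backup/control.ctl';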
The restore output is shown below, which indicates that the PDB-related files are being restored into a directory structure with the GUID.
RMAN> restore controlfile from '/home/oracle/backup/control.ctl';

Starting restore at 28-JUL-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=243 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/opt/app/oracle/oradata/CDB12C/controlfile/o1_mf_csn2c773_.ctl
output file name=/opt/app/oracle/fast_recovery_area/CDB12C/controlfile/o1_mf_csn2c7hf_.ctl
Finished restore at 28-JUL-16

RMAN> alter database mount;

Statement processed
released channel: ORA_DISK_1

RMAN> run {
2> restore database;
3> recover database;
4> }

Starting restore at 28-JUL-16
Starting implicit crosscheck backup at 28-JUL-16
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=243 device type=DISK
Crosschecked 8 objects
Finished implicit crosscheck backup at 28-JUL-16

Starting implicit crosscheck copy at 28-JUL-16
using channel ORA_DISK_1
Crosschecked 2 objects
Finished implicit crosscheck copy at 28-JUL-16

searching for all files in the recovery area
cataloging files...
no files cataloged

using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_system_bocndoww_.dbf
channel ORA_DISK_1: restoring datafile 00003 to /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_sysaux_bocndyz0_.dbf
channel ORA_DISK_1: restoring datafile 00005 to /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_undotbs1_bocnf2j0_.dbf
channel ORA_DISK_1: restoring datafile 00006 to /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_users_bocng186_.dbf
channel ORA_DISK_1: restoring datafile 00022 to /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_rootbs_bq3ngj79_.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/backup/bkp5brboipc_1_1
channel ORA_DISK_1: piece handle=/home/oracle/backup/bkp5brboipc_1_1 tag=TAG20160727T140115
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:56
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00025 to /opt/app/oracle/oradata/CDB12C/18037D5BA62802CEE0535500A8C0EC8F/datafile/o1_mf_system_bqc9rzvt_.dbf
channel ORA_DISK_1: restoring datafile 00026 to /opt/app/oracle/oradata/CDB12C/18037D5BA62802CEE0535500A8C0EC8F/datafile/o1_mf_sysaux_bqc9rzvz_.dbf
channel ORA_DISK_1: restoring datafile 00027 to /opt/app/oracle/oradata/CDB12C/18037D5BA62802CEE0535500A8C0EC8F/datafile/o1_mf_test_bqc9yss9_.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/backup/bkp5crboir3_1_1
channel ORA_DISK_1: piece handle=/home/oracle/backup/bkp5crboir3_1_1 tag=TAG20160727T140115
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:18
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00028 to /opt/app/oracle/oradata/CDB12C/1803968083DD030DE0535500A8C08A81/datafile/o1_mf_system_bqcb663x_.dbf
channel ORA_DISK_1: restoring datafile 00029 to /opt/app/oracle/oradata/CDB12C/1803968083DD030DE0535500A8C08A81/datafile/o1_mf_sysaux_bqcb6642_.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/backup/bkp5drboirj_1_1
channel ORA_DISK_1: piece handle=/home/oracle/backup/bkp5drboirj_1_1 tag=TAG20160727T140115
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00039 to /opt/app/oracle/oradata/CDB12C/3643E46698E543F4E0535D00A8C0BDF3/datafile/o1_mf_system_cq2d54cz_.dbf
channel ORA_DISK_1: restoring datafile 00040 to /opt/app/oracle/oradata/CDB12C/3643E46698E543F4E0535D00A8C0BDF3/datafile/o1_mf_sysaux_cq2d54dx_.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/backup/bkp5erboisf_1_1
channel ORA_DISK_1: piece handle=/home/oracle/backup/bkp5erboisf_1_1 tag=TAG20160727T140115
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00002 to /opt/app/oracle/oradata/CDB12C/161DBD44EAAA7AC0E0535500A8C01F27/datafile/o1_mf_system_bocnds90_.dbf
channel ORA_DISK_1: restoring datafile 00004 to /opt/app/oracle/oradata/CDB12C/161DBD44EAAA7AC0E0535500A8C01F27/datafile/o1_mf_sysaux_bocnf0mr_.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/backup/bkp5frboisv_1_1
channel ORA_DISK_1: piece handle=/home/oracle/backup/bkp5frboisv_1_1 tag=TAG20160727T140115
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
Finished restore at 28-JUL-16

Starting recover at 28-JUL-16
using channel ORA_DISK_1

starting media recovery

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=221
channel ORA_DISK_1: reading from backup piece /home/oracle/backup/arch5grboitr_1_1
channel ORA_DISK_1: piece handle=/home/oracle/backup/arch5grboitr_1_1 tag=TAG20160727T140338
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/opt/app/oracle/fast_recovery_area/CDB12C/archivelog/2016_07_28/o1_mf_1_221_csn2jcf4_.arc thread=1 sequence=221
channel default: deleting archived log(s)
archived log file name=/opt/app/oracle/fast_recovery_area/CDB12C/archivelog/2016_07_28/o1_mf_1_221_csn2jcf4_.arc RECID=518 STAMP=918398107
unable to find archived log
archived log thread=1 sequence=222
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 07/28/2016 14:35:18
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 222 and starting SCN of 3544373
Running report schema shows that all the PDB data files are created at the same level as the CDB data files, except for the temp files used by the PDBs, which are listed with a directory path containing the GUID.
RMAN> report schema;

RMAN-06139: WARNING: control file is not current for REPORT SCHEMA
Report of database schema for database with db_unique_name CDB12C

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    810      SYSTEM               ***     /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_system_csn2cvqv_.dbf
2    260      PDB$SEED:SYSTEM      ***     /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_system_csn2hg6q_.dbf
3    980      SYSAUX               ***     /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_sysaux_csn2cvqo_.dbf
4    595      PDB$SEED:SYSAUX      ***     /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_sysaux_csn2hg6g_.dbf
5    835      UNDOTBS1             ***     /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_undotbs1_csn2cvqr_.dbf
6    5        USERS                ***     /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_users_csn2cvr5_.dbf
22   10       ROOTBS               ***     /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_rootbs_csn2cvr2_.dbf
25   270      PDBONE:SYSTEM        ***     /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_system_csn2fpvd_.dbf
26   615      PDBONE:SYSAUX        ***     /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_sysaux_csn2fpv4_.dbf
27   10       PDBONE:TEST          ***     /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_test_csn2fpvh_.dbf
28   270      PDBTWO:SYSTEM        ***     /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_system_csn2g9g0_.dbf
29   605      PDBTWO:SYSAUX        ***     /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_sysaux_csn2g6mg_.dbf
39   260      PDBTHREE:SYSTEM      ***     /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_system_csn2gz2w_.dbf
40   605      PDBTHREE:SYSAUX      ***     /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_sysaux_csn2gz2c_.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_temp_bocnf32d_.tmp
2    20       PDB$SEED:TEMP        32767       /opt/app/oracle/oradata/CDB12C/161DBD44EAAA7AC0E0535500A8C01F27/datafile/o1_mf_temp_bocnf33b_.tmp
3    20       PDBONE:TEMP          32767       /opt/app/oracle/oradata/CDB12C/18037D5BA62802CEE0535500A8C0EC8F/datafile/o1_mf_temp_bqc9rzw2_.dbf
4    20       PDBTWO:TEMP          32767       /opt/app/oracle/oradata/CDB12C/1803968083DD030DE0535500A8C08A81/datafile/o1_mf_temp_bqcb6645_.dbf
5    20       PDBTHREE:TEMP        32767       /opt/app/oracle/oradata/CDB12C/3643E46698E543F4E0535D00A8C0BDF3/datafile/o1_mf_temp_cq2d54f1_.dbf
However, neither these temp files nor the GUID directories exist on the filesystem.
$ ls -l /opt/app/oracle/oradata/CDB12C/161DBD44EAAA7AC0E0535500A8C01F27/datafile/
ls: cannot access /opt/app/oracle/oradata/CDB12C/161DBD44EAAA7AC0E0535500A8C01F27/datafile/: No such file or directory

$ ls -l /opt/app/oracle/oradata/CDB12C/
total 8
drwxr-x--- 2 oracle oinstall 4096 Jul 28 14:32 controlfile
drwxr-x--- 2 oracle oinstall 4096 Jul 28 14:34 datafile


Opening the database in resetlogs mode will dynamically create the GUID directories and the temp files inside them
RMAN> alter database open resetlogs;

Statement processed

$ ls -l /opt/app/oracle/oradata/CDB12C/
total 28
drwxr-x--- 3 oracle oinstall 4096 Jul 28 14:37 161DBD44EAAA7AC0E0535500A8C01F27
drwxr-x--- 3 oracle oinstall 4096 Jul 28 14:37 18037D5BA62802CEE0535500A8C0EC8F
drwxr-x--- 3 oracle oinstall 4096 Jul 28 14:37 1803968083DD030DE0535500A8C08A81
drwxr-x--- 3 oracle oinstall 4096 Jul 28 14:37 3643E46698E543F4E0535D00A8C0BDF3
drwxr-x--- 2 oracle oinstall 4096 Jul 28 14:32 controlfile
drwxr-x--- 2 oracle oinstall 4096 Jul 28 14:37 datafile
drwxr-x--- 2 oracle oinstall 4096 Jul 28 14:37 onlinelog

$ ls -l /opt/app/oracle/oradata/CDB12C/161DBD44EAAA7AC0E0535500A8C01F27/datafile/
total 1024
-rw-r----- 1 oracle oinstall 20979712 Jul 28 14:37 o1_mf_temp_csn2n97z_.tmp
However the PDBs' data files are all in the same OMF path as the root container's files
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDBONE                         READ WRITE NO
         4 PDBTWO                         MOUNTED
         5 PDBTHREE                       MOUNTED
SQL>  select con_id,name from v$datafile order by 1;

    CON_ID NAME
---------- ---------------------------------------------------------------------
         1 /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_system_csn2cvqv_.dbf
         1 /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_rootbs_csn2cvr2_.dbf
         1 /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_users_csn2cvr5_.dbf
         1 /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_undotbs1_csn2cvqr_.dbf
         1 /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_sysaux_csn2cvqo_.dbf
         2 /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_system_csn2hg6q_.dbf
         2 /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_sysaux_csn2hg6g_.dbf
         3 /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_sysaux_csn2fpv4_.dbf
         3 /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_system_csn2fpvd_.dbf
         3 /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_test_csn2fpvh_.dbf
         4 /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_sysaux_csn2g6mg_.dbf
         4 /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_system_csn2g9g0_.dbf
         5 /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_system_csn2gz2w_.dbf
         5 /opt/app/oracle/oradata/CDB12C/datafile/o1_mf_sysaux_csn2gz2c_.dbf
If a new tablespace or data file is added to one of the PDBs, it will be created in a directory path containing the GUID.
SQL> alter session set container=pdbone;

SQL>  show con_name

CON_NAME
------------------------------
PDBONE

SQL> select name from v$datafile;

NAME
----------------------------------------------------------------------------------------------------
/opt/app/oracle/oradata/CDB12C/datafile/o1_mf_undotbs1_csn2cvqr_.dbf
/opt/app/oracle/oradata/CDB12C/datafile/o1_mf_system_csn2fpvd_.dbf
/opt/app/oracle/oradata/CDB12C/datafile/o1_mf_sysaux_csn2fpv4_.dbf
/opt/app/oracle/oradata/CDB12C/datafile/o1_mf_test_csn2fpvh_.dbf

SQL> create tablespace newtbs DATAFILE SIZE 10M;

Tablespace created.

SQL>  select name from v$datafile;

NAME
----------------------------------------------------------------------------------------------------
/opt/app/oracle/oradata/CDB12C/datafile/o1_mf_undotbs1_csn2cvqr_.dbf
/opt/app/oracle/oradata/CDB12C/datafile/o1_mf_system_csn2fpvd_.dbf
/opt/app/oracle/oradata/CDB12C/datafile/o1_mf_sysaux_csn2fpv4_.dbf
/opt/app/oracle/oradata/CDB12C/datafile/o1_mf_test_csn2fpvh_.dbf
/opt/app/oracle/oradata/CDB12C/18037D5BA62802CEE0535500A8C0EC8F/datafile/o1_mf_newtbs_csn2s0yz_.dbf
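If the data files that were restored into the root-level OMF path need to be relocated under the GUID directory, an online data file move from within the PDB should achieve it (12c online move; with db_create_file_dest set, omitting the TO clause should let Oracle pick the OMF/GUID location). A sketch using one of the files listed above:
SQL> alter session set container=pdbone;
SQL> alter database move datafile '/opt/app/oracle/oradata/CDB12C/datafile/o1_mf_test_csn2fpvh_.dbf';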
The restore was also tested after pre-creating the directory paths with the GUID, but there was no difference in the outcome. It was also tested with the latest PSU (July 2016) applied to the source DB's home and the new server's Oracle home, using a new backup taken from the source DB after the PSU was applied; there was no difference in that case either. Finally, it was tested with patch 20464614 applied (mentioned in 1576755.1 in relation to OMF/ASM and PDBs), again with no difference in the final outcome.
A service request has been raised.

Useful metalink notes
Bug 16874123 - RMAN restore does not create GUID directory for pre-12c database [ID 16874123.8]
Step by Step Examples of Migrating non-CDBs and PDBs Using ASM for File Storage [ID 1576755.1]

Update 29 March 2017
This is fixed in 12.2. The PDB datafiles get restored into GUID directories.

Thursday, July 14, 2016

Removing a Failed Node From the Cluster

This post lists the steps for removing a failed node from a cluster. The steps differ from those in the previous node deletion posts (11gR1, 11gR2 and 12c) in that one node has suffered a catastrophic failure and is not available for any command or script execution. Therefore all the activities involved in removing the failed node are executed from a surviving node.
The environment used in this case is a two node RAC with role separation (11.2.0.4). Under normal operation it has the following resources and status (formatted output).
Resource Name                       Type                      Target             State              Host
-------------                       ------                    -------            --------           ----------
ora.CLUSTER_DG.dg                   ora.diskgroup.type        ONLINE             ONLINE             rhel6m1
ora.CLUSTER_DG.dg                   ora.diskgroup.type        ONLINE             ONLINE             rhel6m2
ora.DATA.dg                         ora.diskgroup.type        ONLINE             ONLINE             rhel6m1
ora.DATA.dg                         ora.diskgroup.type        ONLINE             ONLINE             rhel6m2
ora.FLASH.dg                        ora.diskgroup.type        ONLINE             ONLINE             rhel6m1
ora.FLASH.dg                        ora.diskgroup.type        ONLINE             ONLINE             rhel6m2
ora.MYLISTENER.lsnr                 ora.listener.type         ONLINE             ONLINE             rhel6m1
ora.MYLISTENER.lsnr                 ora.listener.type         ONLINE             ONLINE             rhel6m2
ora.MYLISTENER_SCAN1.lsnr           ora.scan_listener.type    ONLINE             ONLINE             rhel6m2
ora.asm                             ora.asm.type              ONLINE             ONLINE             rhel6m1
ora.asm                             ora.asm.type              ONLINE             ONLINE             rhel6m2
ora.cvu                             ora.cvu.type              ONLINE             ONLINE             rhel6m2
ora.gsd                             ora.gsd.type              OFFLINE            OFFLINE
ora.gsd                             ora.gsd.type              OFFLINE            OFFLINE
ora.net1.network                    ora.network.type          ONLINE             ONLINE             rhel6m1
ora.net1.network                    ora.network.type          ONLINE             ONLINE             rhel6m2
ora.oc4j                            ora.oc4j.type             ONLINE             ONLINE             rhel6m2
ora.ons                             ora.ons.type              ONLINE             ONLINE             rhel6m1
ora.ons                             ora.ons.type              ONLINE             ONLINE             rhel6m2
ora.registry.acfs                   ora.registry.acfs.type    ONLINE             ONLINE             rhel6m1
ora.registry.acfs                   ora.registry.acfs.type    ONLINE             ONLINE             rhel6m2
ora.rhel6m1.vip                     ora.cluster_vip_net1.type ONLINE             ONLINE             rhel6m1
ora.rhel6m2.vip                     ora.cluster_vip_net1.type ONLINE             ONLINE             rhel6m2
ora.scan1.vip                       ora.scan_vip.type         ONLINE             ONLINE             rhel6m2
ora.std11g2.db                      ora.database.type         ONLINE             ONLINE             rhel6m1
ora.std11g2.db                      ora.database.type         ONLINE             ONLINE             rhel6m2
ora.std11g2.myservice.svc           ora.service.type          ONLINE             ONLINE             rhel6m1
ora.std11g2.myservice.svc           ora.service.type          ONLINE             ONLINE             rhel6m2
ora.std11g2.abx.domain.net.svc      ora.service.type          ONLINE             ONLINE             rhel6m2
ora.std11g2.abx.domain.net.svc      ora.service.type          ONLINE             ONLINE             rhel6m1
After node 2 (rhel6m2 in this case) suffers a catastrophic failure, the resources and their status are as below. There are offline and failed-over (VIP) resources from rhel6m2.
Resource Name                       Type                      Target             State              Host
-------------                       ------                    -------            --------           ----------
ora.CLUSTER_DG.dg                   ora.diskgroup.type        ONLINE             ONLINE             rhel6m1
ora.DATA.dg                         ora.diskgroup.type        ONLINE             ONLINE             rhel6m1
ora.FLASH.dg                        ora.diskgroup.type        ONLINE             ONLINE             rhel6m1
ora.MYLISTENER.lsnr                 ora.listener.type         ONLINE             ONLINE             rhel6m1
ora.MYLISTENER_SCAN1.lsnr           ora.scan_listener.type    ONLINE             ONLINE             rhel6m1
ora.asm                             ora.asm.type              ONLINE             ONLINE             rhel6m1
ora.cvu                             ora.cvu.type              ONLINE             ONLINE             rhel6m1
ora.gsd                             ora.gsd.type              OFFLINE            OFFLINE
ora.net1.network                    ora.network.type          ONLINE             ONLINE             rhel6m1
ora.oc4j                            ora.oc4j.type             ONLINE             ONLINE             rhel6m1
ora.ons                             ora.ons.type              ONLINE             ONLINE             rhel6m1
ora.registry.acfs                   ora.registry.acfs.type    ONLINE             ONLINE             rhel6m1
ora.rhel6m1.vip                     ora.cluster_vip_net1.type ONLINE             ONLINE             rhel6m1
ora.rhel6m2.vip                     ora.cluster_vip_net1.type ONLINE             INTERMEDIATE       rhel6m1
ora.scan1.vip                       ora.scan_vip.type         ONLINE             ONLINE             rhel6m1
ora.std11g2.db                      ora.database.type         ONLINE             ONLINE             rhel6m1
ora.std11g2.db                      ora.database.type         ONLINE             OFFLINE
ora.std11g2.myservice.svc           ora.service.type          ONLINE             ONLINE             rhel6m1
ora.std11g2.myservice.svc           ora.service.type          ONLINE             OFFLINE
ora.std11g2.abx.domain.net.svc      ora.service.type          ONLINE             OFFLINE
ora.std11g2.abx.domain.net.svc      ora.service.type          ONLINE             ONLINE             rhel6m1
Removal of the failed node's resources begins at the database resource level. There are two services running, and both have the DB instance on the failed node as a preferred instance (output is condensed)
srvctl config service -d std11g2
Service name: myservice
Service is enabled
Server pool: std11g2_myservice
Cardinality: 2
...
Preferred instances: std11g21,std11g22
Available instances:
Service name: abx.domain.net
Service is enabled
Server pool: std11g2_abx.domain.net
Cardinality: 2
...
Preferred instances: std11g21,std11g22
Available instances:
Modify the service configuration so that only the surviving instances are set as preferred instances.
$ srvctl modify service -s myservice -d std11g2 -n -i std11g21 -f
$ srvctl modify service -s abx.domain.net -d std11g2 -n -i std11g21 -f

$ srvctl config service -d std11g2
Service name: myservice
Service is enabled
Server pool: std11g2_myservice
Cardinality: 1
..
Preferred instances: std11g21
Available instances:
Service name: abx.domain.net
Service is enabled
Server pool: std11g2_abx.domain.net
Cardinality: 1
..
Preferred instances: std11g21
Available instances:

$ srvctl status service -d std11g2
Service myservice is running on instance(s) std11g21
Service abx.domain.net is running on instance(s) std11g21
Remove the database instance on the failed node
srvctl config database -d std11g2
Database unique name: std11g2
Database name: std11g2
...
Database instances: std11g21,std11g22
Disk Groups: DATA,FLASH
Mount point paths:
Services: myservice,abx.domain.net
Type: RAC
Database is administrator managed
This is done using DBCA's instance management option. If the listener has a non-default name and port, accessing the DB from DBCA will fail.
To fix this, create a default listener (named LISTENER, on port 1521). Also, if VNCR is used, remove the failed node from the registration list. Then proceed to instance deletion by selecting the inactive instance on the failed node.
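A minimal sketch of adding and starting such a default listener with srvctl (assumed name and port; it is removed again once the instance removal is complete):
$ srvctl add listener -l LISTENER -p 1521
$ srvctl start listener -l LISTENER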
As node 2 is not available, a warning will be issued; click continue and proceed. During the execution various other warnings will appear, such as being unable to update /etc/oratab; all of these can be ignored.
However DBCA didn't run to the end. At 67% (observed through repeated runs on this 11.2.0.4 environment) a dialog box appeared with no message, just an OK button. Clicking it doesn't end the DBCA session but returns to the beginning; exit DBCA by clicking cancel afterwards.
However this doesn't appear to be a failure of DBCA to remove the instance. In fact the instance is removed, as a subsequent instance management operation lists only the instance on the surviving node.
Querying the database also shows that the undo tablespace and redo logs related to instance 2 (std11g22 in this case) have been removed, and only the surviving instance's undo tablespace and redo logs remain.
SQL> select name from v$tablespace;

NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
EXAMPLE
TEST

7 rows selected.

SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- --------- ------------ ---------
         1          1       1598   52428800        512          2 NO  CURRENT               68471125 07-JUL-16   2.8147E+14
         2          1       1597   52428800        512          2 YES INACTIVE              68467762 07-JUL-16     68471125 07-JUL-16

srvctl config database -d std11g2
Database unique name: std11g2
Database name: std11g2
...
Database instances: std11g21
Disk Groups: DATA,FLASH
Mount point paths:
Services: myservice,abx.domain.net
Type: RAC
Database is administrator managed
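As an aside, if DBCA cannot be used at all, the clusterware registration of the failed instance alone could be removed with srvctl; this does not clean up the redo thread or the undo tablespace, which would then have to be handled manually. A sketch under that assumption (assuming the failed instance used redo thread 2):
$ srvctl remove instance -d std11g2 -i std11g22 -f

SQL> alter database disable thread 2;
SQL> -- then drop the thread 2 redo log groups and the std11g22 undo tablespace manually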
Once the database resources are removed, the next step is to remove the Oracle database home entry for the failed node from the inventory.



As the node is unavailable, there's no un-installation involved. Run the inventory update command listing only the surviving nodes. The inventory content for the Oracle home before the failed node is removed:
<HOME NAME="OraDb11g_home2" LOC="/opt/app/oracle/product/11.2.0/dbhome_4" TYPE="O" IDX="4">
   <NODE_LIST>
      <NODE NAME="rhel6m1"/>
      <NODE NAME="rhel6m2"/>
   </NODE_LIST>
</HOME>
After the inventory update
 ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rhel6m1}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /opt/app/oraInventory
'UpdateNodeList' was successful.

<HOME NAME="OraDb11g_home2" LOC="/opt/app/oracle/product/11.2.0/dbhome_4" TYPE="O" IDX="4">
   <NODE_LIST>
      <NODE NAME="rhel6m1"/>
   </NODE_LIST>
</HOME>
The next step is to remove the cluster resources and the node itself. If any of the nodes are pinned, unpin them (a sketch follows the output below). In this case both nodes are unpinned.
olsnodes -s -t
rhel6m1 Active  Unpinned
rhel6m2 Inactive        Unpinned
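Had either node been pinned, it could have been unpinned with crsctl (run as root from the grid home); a sketch for the failed node:
# crsctl unpin css -n rhel6m2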
Stop and remove the VIP resource of the failed node
# srvctl stop  vip -i rhel6m2-vip -f
# srvctl remove vip -i rhel6m2-vip -f
Remove the failed node from the cluster configuration
#  crsctl delete node -n rhel6m2
CRS-4661: Node rhel6m2 successfully deleted.
Finally remove the grid home for the failed node from the inventory. Before inventory update
<HOME NAME="Ora11g_gridinfrahome2" LOC="/opt/app/11.2.0/grid4" TYPE="O" IDX="3" CRS="true">
   <NODE_LIST>
      <NODE NAME="rhel6m1"/>
      <NODE NAME="rhel6m2"/>
   </NODE_LIST>
</HOME>
After inventory update
./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rhel6m1}" CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /opt/app/oraInventory
'UpdateNodeList' was successful.

<HOME NAME="Ora11g_gridinfrahome2" LOC="/opt/app/11.2.0/grid4" TYPE="O" IDX="3" CRS="true">
   <NODE_LIST>
      <NODE NAME="rhel6m1"/>
   </NODE_LIST>
</HOME>
Validate the node removal with cluvfy
cluvfy stage -post  nodedel -n rhel6m2

Performing post-checks for node removal
Checking CRS integrity...
Clusterware version consistency passed
CRS integrity check passed
Node removal check passed
Post-check for node removal was successful.
Remove the default listener if one was created during the instance removal step (a sketch follows the resource listing). The final status of the resources is as below.
Resource Name                       Type                      Target             State              Host
-------------                       ------                    -------            --------           ----------
ora.CLUSTER_DG.dg                   ora.diskgroup.type        ONLINE             ONLINE             rhel6m1
ora.DATA.dg                         ora.diskgroup.type        ONLINE             ONLINE             rhel6m1
ora.FLASH.dg                        ora.diskgroup.type        ONLINE             ONLINE             rhel6m1
ora.MYLISTENER.lsnr                 ora.listener.type         ONLINE             ONLINE             rhel6m1
ora.MYLISTENER_SCAN1.lsnr           ora.scan_listener.type    ONLINE             ONLINE             rhel6m1
ora.asm                             ora.asm.type              ONLINE             ONLINE             rhel6m1
ora.cvu                             ora.cvu.type              ONLINE             ONLINE             rhel6m1
ora.gsd                             ora.gsd.type              OFFLINE            OFFLINE
ora.net1.network                    ora.network.type          ONLINE             ONLINE             rhel6m1
ora.oc4j                            ora.oc4j.type             ONLINE             ONLINE             rhel6m1
ora.ons                             ora.ons.type              ONLINE             ONLINE             rhel6m1
ora.registry.acfs                   ora.registry.acfs.type    ONLINE             ONLINE             rhel6m1
ora.rhel6m1.vip                     ora.cluster_vip_net1.type ONLINE             ONLINE             rhel6m1
ora.scan1.vip                       ora.scan_vip.type         ONLINE             ONLINE             rhel6m1
ora.std11g2.db                      ora.database.type         ONLINE             ONLINE             rhel6m1
ora.std11g2.myservice.svc           ora.service.type          ONLINE             ONLINE             rhel6m1
ora.std11g2.abx.domain.net.svc      ora.service.type          ONLINE             ONLINE             rhel6m1
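If a default listener was added earlier for the DBCA instance removal, it can be stopped and removed along these lines (assumed listener name):
$ srvctl stop listener -l LISTENER
$ srvctl remove listener -l LISTENER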
Useful metalink notes
How to remove/delete a node from Grid Infrastructure Clusterware when the node has failed [ID 1262925.1]
Steps to Remove Node from Cluster When the Node Crashes Due to OS/Hardware Failure and cannot boot up [ID 466975.1]
RAC on Windows: How to Remove a Node from a Cluster When the Node Crashes Due to OS/Hardware Failure and Cannot Boot [ID 832054.1]

Related Post
Deleting a Node From 12cR1 RAC
Deleting a Node From 11gR2 RAC
Deleting a 11gR1 RAC Node