Wednesday, December 15, 2021

Loading Data to ATP from Object Storage

There are several ways to load data to ATP from object storage. This post shows examples of a few of those options.

Copying data from object storage to a table in ATP
1. The copy_data procedure in the dbms_cloud package is used to achieve this. Before the data can be copied over, the required policies, permissions and credentials should be created. The steps for creating the necessary permissions and credentials are the same as the ones needed to export/import to/from object storage. Implement steps 1 to 8 in the earlier post. Even though it is not directly referenced in the call, access to data_pump_dir is expected. Without it the copy_data procedure results in an error as below. So make sure to grant read, write access on data_pump_dir to the user schema.
begin
DBMS_CLOUD.COPY_DATA ('TEST','ATP_TABLES','https://swiftobjectstorage.uk-london-1.oraclecloud.com/v1/#namespace#/tabledata/test2.csv','ASANGA',format => json_object('type' VALUE 'CSV','skipheaders' value '1'));
end;
Error report -
ORA-20000: ORA-06564: object DATA_PUMP_DIR does not exist
ORA-06512: at "C##CLOUD$SERVICE.DBMS_CLOUD", line 1285
ORA-06512: at "C##CLOUD$SERVICE.DBMS_CLOUD", line 4290
ORA-06512: at "C##CLOUD$SERVICE.DBMS_CLOUD", line 4311
ORA-06512: at line 2
20000. 00000 -  "%s"
*Cause:    The stored procedure 'raise_application_error'
           was called which causes this error to be generated.
*Action:   Correct the problem as described in the error message or contact
           the application administrator or DBA for more information.
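The missing grant can be added by the ADMIN user, for example:
grant read, write on directory data_pump_dir to asanga;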
2. A table called test already exists in the ATP database with three columns.
desc test
Name Type         
---- ------------ 
A    NUMBER       
B    VARCHAR2(10) 
C    NUMBER
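For reference, a table matching this description could have been created with:
create table test (a number, b varchar2(10), c number);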
3. A CSV file exists in the object storage bucket with the data that needs to be copied into the table. The first few lines of this file are shown below.
"A","B","C"
998,"abc 998",20958
999,"abc 999",20979
1000,"abc 1000",21000
1,"abc 1",21
2,"abc 2",42
3,"abc 3",63
4. Execute the copy_data procedure.
begin
DBMS_CLOUD.COPY_DATA (table_name => 'TEST',
                    credential_name => 'ATP_TABLES',
                    file_uri_list => 'https://swiftobjectstorage.uk-london-1.oraclecloud.com/v1/#namespace#/tabledata/test2.csv',
                    schema_name => 'ASANGA',
                    format => json_object('type' VALUE 'CSV','skipheaders' value '1'));
end;
/
5. Check that the data is available in the table. If for whatever reason the copy fails, the error message names a log table, as below, which can be queried to find the reason for the failure.
ORA-20003: Reject limit reached, query table "ASANGA"."COPY$11_LOG" for error details
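Whether the copy succeeded can be checked with a simple count, and on failure the log table named in the error message holds the details (COPY$11_LOG in this run; the number changes from run to run):
select count(*) from test;
select * from copy$11_log;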

Creating an external table in ATP
1. Another way to load data is to create an external table. It is assumed that all necessary permissions and credentials are in place.
2. Use the create_external_table procedure to create the external table. The external data file used for this external table resides in object storage.
BEGIN  
   DBMS_CLOUD.CREATE_EXTERNAL_TABLE(   
      table_name =>'test_ex_table',   
      credential_name =>'ATP_TABLES',   
      file_uri_list => 'https://swiftobjectstorage.uk-london-1.oraclecloud.com/v1/#namespace#/tabledata/test2.csv',
      format => json_object('type' value 'csv', 'skipheaders' value '1'),   
      field_list => 'A,B,C',   
      column_list => 'A number,   
                     B varchar2(20),
                     C number');
   END;
/
3. This will create an external table in ATP.
desc test_ex_table
Name Type         
---- ------------ 
A    NUMBER       
B    VARCHAR2(20) 
C    NUMBER
4. The external table can be validated with the validate_external_table procedure.
exec DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE (table_name =>'test_ex_table', schema_name => 'ASANGA');
If there are issues in the data file used for the external table, the procedure will give an output similar to the one below. The validation log table named in the error can be referred to for the reason for the failure.
Error starting at line : 53 in command -
BEGIN DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE (table_name =>'test_ex_table', 	schema_name => 'ASANGA'); END;
Error report -
ORA-20003: Reject limit reached, query table "ASANGA"."VALIDATE$14_LOG" for error details
ORA-06512: at "C##CLOUD$SERVICE.DBMS_CLOUD", line 1259
5. The external table can be queried like any other table.
select * from test_ex_table;
Once all verification steps are run, the data from the external table can be loaded into an ATP "internal" table with a create table as select statement.
create table test as select * from test_ex_table;

Creating an external table with a dump file
1. The only difference between this method and the previous one is the source of the data. In the previous case it was a CSV file, whereas here it is a dump file. The dump file must be created using the ORACLE_DATAPUMP access driver. In this example the following table's data is exported to a dump file using the access driver as shown below.
SQL> desc DATETEST
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 VERSION                                            VARCHAR2(20)
 INDATE                                             DATE

SQL> CREATE TABLE datetest_ex
    ORGANIZATION EXTERNAL
    (
      TYPE ORACLE_DATAPUMP
      DEFAULT DIRECTORY dumpdir
      LOCATION ('datetest.dmp')
    )
    AS SELECT * FROM datetest;
The above SQL will generate a dump file in the database directory dumpdir. Upload this to the object storage bucket.
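One way to do the upload, assuming the dbms_cloud package is installed in the source database and a credential already exists (the credential name UPLOAD_CRED below is purely for illustration), is with put_object:
exec dbms_cloud.put_object('UPLOAD_CRED','https://swiftobjectstorage.uk-london-1.oraclecloud.com/v1/#namespace#/tabledata/datetest.dmp','DUMPDIR','datetest.dmp');
Alternatively the file could be uploaded via the OCI console or CLI.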

2. Create the external table using the dump file. For dump files the type is specified as datapump.
BEGIN
   DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name =>'test_dmp_table',
    credential_name =>'ATP_TABLES',
    file_uri_list =>'https://swiftobjectstorage.uk-london-1.oraclecloud.com/v1/#namespace#/tabledata/datetest.dmp',
    format => json_object('type' value 'datapump', 'rejectlimit' value '1'),
    column_list => 'VERSION VARCHAR2(20), INDATE date' );
END;
/
3. The validation of the external table can be done the same way as before with the validate_external_table procedure, as shown below.
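For example:
exec DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE (table_name =>'test_dmp_table', schema_name => 'ASANGA');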
4. Query and move the data to an internal table as needed.
select * from test_dmp_table;
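Moving the data into an internal table follows the same create table as select pattern as before (the table name datetest_copy is arbitrary):
create table datetest_copy as select * from test_dmp_table;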

Creating an external partitioned table in ATP
A variant of the external table is the external partitioned table. In this example two CSV files contain the data for the table, each covering a specific range of values. The file test_1000.csv has values from 1 to 1000 in column A, while test_2000.csv has values from 1001 to 2000. Range partitions are created to match this split.
1. The create command for external partitioned table is shown below.
BEGIN  
   DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE(
      table_name =>'test_ex_part_table',  
      credential_name =>'ATP_TABLES',
      format => json_object('type' value 'csv', 'skipheaders' value '1'), 
      column_list => 'A number, B varchar2(20), C number',
      partitioning_clause => 'partition by range (A)
                                (
                                partition p1 values less than (1000) location (''https://swiftobjectstorage.uk-london-1.oraclecloud.com/v1/#namespace#/tabledata/test_1000.csv''),
                                partition p2 values less than (2000) location (''https://swiftobjectstorage.uk-london-1.oraclecloud.com/v1/#namespace#/tabledata/test_2000.csv'')
                                )'
     );
   END;
/
2. Validate the external partitioned table with
exec DBMS_CLOUD.VALIDATE_EXTERNAL_PART_TABLE (table_name => 'test_ex_part_table', schema_name => 'ASANGA');
Similar to the external table, if validation encounters any errors, a validation log table will be named in the error message.
3. Query the external partitioned table. As with normal partitioned tables, partition pruning will happen depending on the predicates used.
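For example, a query with a predicate on the partition key should only need to read the file behind partition p1:
select * from test_ex_part_table where a < 500;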

Creating a hybrid partitioned table in ATP
A hybrid partitioned table has data sources that are both internal and external. For example, a few partitions could be created from external data files (similar to external tables) while another partition may store its data internally in the table itself. The external data sources are specified with the keywords "external location". Internal partitions do not have any reference to a location.
1. The hybrid partitioned table below uses the same external data files as the previous case and has an additional internal partition for values between 2001 and 3000.
BEGIN  
   DBMS_CLOUD.CREATE_HYBRID_PART_TABLE(
      table_name =>'test_hyb_part_table',  
      credential_name =>'ATP_TABLES',
      format => json_object('type' value 'csv', 'skipheaders' value '1'), 
      column_list => 'A number, B varchar2(20), C number',
      partitioning_clause => 'partition by range (A)
                                (
                                partition p1 values less than (1000) external location (''https://swiftobjectstorage.uk-london-1.oraclecloud.com/v1/#namespace#/tabledata/test_1000.csv''),
                                 partition p2 values less than (2000) external location (''https://swiftobjectstorage.uk-london-1.oraclecloud.com/v1/#namespace#/tabledata/test_2000.csv''),
                                 partition p3 values less than (3000)
                                 )'
     );
   END;
/
2. Validation of the hybrid partitioned table can be done with
exec DBMS_CLOUD.VALIDATE_HYBRID_PART_TABLE (table_name => 'test_hyb_part_table', schema_name => 'ASANGA');
3. Similar to the external partitioned table, partition pruning based on predicates will happen when the table is queried, as in the sketch below.
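For example, rows for the internal partition can be added with normal DML (the row below is made up for illustration), and a predicate on column A prunes to the relevant partitions:
insert into test_hyb_part_table values (2500, 'abc 2500', 52500);
commit;
select * from test_hyb_part_table where a > 2000;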

Wednesday, December 8, 2021

DBMS_CLOUD Package in DBCS VM DB

The dbms_cloud package comes built in to autonomous databases. Starting with 19.9, the dbms_cloud package can be installed in other databases as well. This has been documented in MOS note 2748362.1. There were no issues encountered while setting up the package in a DBCS VM DB following the note. The downloaded SSL wallet files were stored in commonstore.
define sslwalletdir=/opt/oracle/dcs/commonstore/wallets/ssl
What was noticeable is that, unlike in autonomous databases, certain functionality is restricted.
Files can be uploaded to a storage bucket from the DBCS VM DB.
SQL> exec dbms_cloud.put_object('BACKUP_CRED','https://swiftobjectstorage.uk-london-1.oraclecloud.com/v1/#namespace#/tabledata/exptest.dmp','DATA_PUMP_DIR','exp.dmp');

PL/SQL procedure successfully completed.
The files in a bucket can be listed as well.
SQL> select object_name, bytes from dbms_cloud.list_objects('BACKUP_CRED','https://swiftobjectstorage.uk-london-1.oraclecloud.com/v1/#namespace#/tabledata');

OBJECT_NAME BYTES
----------- ---------
exptest.dmp 2932736
The package allows getting files from the storage bucket.
SQL> exec dbms_cloud.get_object('BACKUP_CRED','https://swiftobjectstorage.uk-london-1.oraclecloud.com/v1/#namespace#/tabledata/exptest.dmp','DATA_PUMP_DIR','gotfrombucketexp.dmp');

PL/SQL procedure successfully completed.

ls /u01/app/oracle/admin/ORCL/dpdump/BBAE40CB362D1F2CE0535406F40A0C73/
gotfrombucketexp.dmp

However, unlike in ATP, importing from and exporting to the storage bucket fails.
Importing
impdp asanga@uatpdb credential=BACKUP_CRED dumpfile='https://swiftobjectstorage.uk-london-1.oraclecloud.com/v1/#namespace#/tabledata/exptest.dmp' directory=data_pump_dir logfile=impnew.log logtime=all metrics=yes

ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "https://swiftobjectstorage.uk-london-1.oraclecloud.com/v1/#namespace#/tabledata/exptest.dmp" for read
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
Exporting
expdp  asanga@uatpdb  dumpfile='DEFAULT_CREDENTIAL:https://swiftobjectstorage.uk-london-1.oraclecloud.com/v1/#namespace#/tabledata/expnew.dmp' directory=data_pump_dir logfile=newexp.log exclude=STATISTICS logtime=all metrics=yes

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-39088: file name cannot contain a path specification
After an SR, Oracle support stated that this feature is only available for autonomous databases and not for other databases, even though the dbms_cloud package now allows creation of credentials and the 19c versions of expdp/impdp accept credentials.

Another difference is in the creation of external tables. It is possible to create external tables using CSV files.
SQL> BEGIN
       DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
          table_name =>'test_ex_table',
          credential_name =>'BACKUP_CRED',
          file_uri_list => 'https://swiftobjectstorage.uk-london-1.oraclecloud.com/v1/#namespace#/tabledata/test2.csv',
          format => json_object('type' value 'csv', 'skipheaders' value '1'),
          field_list => 'A,B,C',
          column_list => 'A number,
                          B varchar2(20),
                          C number');
     END;
     /

PL/SQL procedure successfully completed.

SQL> select * from test_ex_table;

         A B                             C
---------- -------------------- ----------
       998 abc 998                   20958
       999 abc 999                   20979
      1000 abc 1000                  21000
         1 abc 1                        21
         2 abc 2                        42
         3 abc 3                        63
However, querying an external table created using a datapump dump file fails.
SQL> BEGIN
       DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
          table_name =>'test_dmp_table',
          credential_name =>'BACKUP_CRED',
          file_uri_list =>'https://swiftobjectstorage.uk-london-1.oraclecloud.com/v1/#namespace#/tabledata/datetest.dmp',
          format => json_object('type' value 'datapump', 'rejectlimit' value '1'),
          column_list => 'VERSION VARCHAR2(20), INDATE date' );
     END;
     /

PL/SQL procedure successfully completed.

SQL> select * from test_dmp_table;
select * from test_dmp_table
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-11010: unable to open at least one dump file for fetch
The same dump file and bucket work for ATP.

Wednesday, December 1, 2021

Enabling MFA for IDCS User Accounts in OCI

OCI has two types of user accounts: IAM native and IDCS (also referred to as federated user accounts). For IAM accounts, users have to enable MFA for their own account; an admin cannot enable MFA for another account. For federated accounts, either the tenancy admin or a user in the IDCS_Administrators group must first set up MFA. Afterwards users can enroll themselves for MFA. This post shows the steps for setting up MFA on IDCS.
The steps begin with the assumption that the user (tenancy admin or a user in the IDCS_Administrators group) has already logged in to the IDCS console.
The first step is to configure the factors used for MFA. This includes deciding on the hashing algorithm used for passcode generation, how long a passcode is valid, etc. During the initial setup, mobile app MFA is set up (it's best practice to have two different types of MFA; email based MFA will be set up afterwards).
On the IDCS console select the top left menu, expand Security and select the Factors menu item. Select the mobile app tab and configure the mobile app MFA related properties.

Next select the MFA menu item under the Security menu and check mobile app passcode. Both mobile app passcode and notification are selected by default. Click save to confirm the changes.

The third step is to enforce MFA. This is done via sign-on policies. There's a default sign-on policy already created. It is possible to edit the existing one or add a new one.

For this post a new policy is created for MFA. Click add to begin the wizard. The first item is the policy name. Click next to proceed to creating a sign-on rule.

A sign-on rule is created such that if a login is done using username and password, then additional authentication factors are prompted. Since currently only mobile app passcode and notification are set up, these options are selected as the additional factors. The MFA prompt is set for every time a login is done and enrollment is set to optional. This is useful until the MFA setup is completed and verified.

The saved sign-on rule will be listed as below.

The next step is to add an app to the sign-on policy. For this select the automatically created SAML app which has OCI-V2-App"tenancy name" as its name. The Oracle doc describes this as

In order for MFA to be prompted this app must be selected; without it no MFA will be prompted. In the apps tab click assign, search for the above app (applicable to the current tenancy) and select it.

At this stage the sign-on rule is in an inactive state. Select the menu next to the rule and activate it.

Next, log in to the OCI console as a federated user. Even though enrollment was set to optional, the console does prompt the user to enable MFA (it is always possible to skip this and enable MFA later via the IDCS console, shown later in the post). To enable MFA, click the enable secure verification button.

This will present a QR code that is compatible with the Oracle mobile authenticator (OMA) app. If not already done, download the OMA app to the phone and scan the QR code to add the account (i.e. enroll in MFA). This QR code will not work with Google Authenticator. If Google Authenticator is desired, then select use another authenticator app.

As mentioned earlier, it is possible to enroll in MFA later on via the IDCS console. The screenshots below show these steps. Log in to the IDCS console as the user that needs MFA enabled. Select My Profile and then the security tab. Click the enable button for 2-step verification.

Since only the mobile app is enabled for MFA, the 2-step verification list shows only the mobile app. Click the mobile app button and scan the QR code as mentioned before.

Once done, the device will be listed under mobile app.

The next time the IDCS (federated) user logs in, MFA is prompted. Either the mobile app notification (select allow on the mobile phone) or entering the passcode (alternative method) can be used.

As mentioned in the beginning, it is considered best practice to have two MFA methods. If only mobile app based MFA is configured, then loss of the phone means being unable to access the account and having to go through the recovery option. The steps below show how to add email based MFA after adding the mobile app based MFA (it is possible to set up both these options at the same time as well; no need to do one after the other).
In the IDCS console expand security and then select factors. In the email tab configure MFA related properties.

Select MFA and check the email checkbox. Since the mobile app is already configured, those options will already be checked.

Next, edit the sign-on policy created earlier. Select the policy menu and click edit.

Select the sign-on rule inside the policy and select edit.

In the rule select email as an additional factor for authentication. This will already have mobile app selected.

To enroll in email based MFA, log in to the IDCS console as the user enabling MFA (My profile -> security) and select configure on the email option.

This will send a passcode to the email address in the account. Enter the passcode to verify the email address. Once verified, email will be added to MFA.

The next time the user logs in, email is also presented as an alternative login method. When selected, a passcode is sent to the email address.


Update 26 September 2023
With the recent changes to IDCS and IAM domains, the above MFA setup will not work. To get MFA working again, enable the "Security Policy for OCI Console" policy. This is provided by Oracle and will be in a disabled state.

Inside are two rules, one for admins and one for all users. If needed, tweak them; if not, leave them as they are.

This policy has the "OCI Console" as the app. No need to change it; leave it as it is. The "MFA Policy" created earlier can be deactivated now. All the previously registered MFA methods (mobile app, email) will work as before without re-registering.