How to Extract and Load Data From an Oracle Database
This guide describes how to move schemas and data from a source Oracle database to a destination database, using the native exp/imp utilities for the schema definitions and the small tables, and FastReader together with SQL*Loader for the large tables.
Step-by-Step Guide
Step 1: Export the schemas using the exp utility with the ROWS=N parameter.
During this stage all the non-data objects, such as PL/SQL, sequences, views, table definitions and object privileges, are exported; there will be one exp file per exported schema. FastReader can also export the schema definitions, such as tables, constraints, sequences and triggers, and generate the scripts to create them on the destination database (these generated definitions are separate from the exp schema export). The data from the large tables (the fact tables and the large dimensions) should be exported using FastReader.
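For the exp part of this step, a schema-only export of a single schema might look like the line below; the credentials, the connect string SRCDB and the schema name SALES are placeholders, and one such export is run per schema:

    exp system/manager@SRCDB OWNER=SALES ROWS=N FILE=sales_ddl.dmp LOG=sales_ddl.log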
Step 2: Export the data from the small code tables, using either FastReader or the native exp utility.
Native exp should be used with the TABLES parameter, specifying only the small tables to be extracted. Again, there will be one exp file per exported schema.
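A sketch of the native exp call for a couple of small code tables; the table names are hypothetical:

    exp system/manager@SRCDB TABLES=sales.country_codes,sales.order_status FILE=sales_codes.dmp LOG=sales_codes.log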
Step 3: Set up the destination database with the same global roles, users, user privileges, system triggers (on_logon) and tablespaces, i.e. all the settings which are not exported by exp.
If a different tablespace/storage configuration is required on the destination, this should be taken into account in the later stages.
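A minimal sketch of this setup, assuming a hypothetical sales user, a sales_data tablespace and a reporting role (all names, sizes and paths are placeholders):

    CREATE TABLESPACE sales_data DATAFILE '/u01/oradata/DESTDB/sales_data01.dbf' SIZE 10000M;
    CREATE ROLE sales_reporting;
    CREATE USER sales IDENTIFIED BY change_me DEFAULT TABLESPACE sales_data TEMPORARY TABLESPACE temp;
    GRANT CONNECT, RESOURCE TO sales;
    GRANT sales_reporting TO sales;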
Step 4: Import the schemas using imp (schemas and small tables).
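A sketch of the corresponding imp call for one of the dump files produced in Step 1, reusing the placeholder names from above:

    imp system/manager@DESTDB FILE=sales_ddl.dmp FROMUSER=SALES TOUSER=SALES IGNORE=Y LOG=sales_ddl_imp.log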
Step 5: Create global object privileges and object synonyms as they were on the original instance.
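For example, with hypothetical object and grantee names:

    GRANT SELECT ON sales.country_codes TO sales_reporting;
    CREATE PUBLIC SYNONYM country_codes FOR sales.country_codes;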
Step 6: Load the data from the tables which were extracted using FastReader into the target (destination) database.
FastReader utilizes the existing high-speed loader of each supported database vendor (for instance, when the data is to be loaded into Oracle, sqlldr is used), and it automatically creates the control files, reflecting which tables, columns and data are to be loaded, together with the scripts for actually executing the load. Before the load, all the constraints and triggers on the large tables which can prevent the “direct load” mode or hurt load performance should be disabled; constraints on the target database can also be disabled and enabled from the FastReader GUI (Configurator). If materialized views are used with the “refresh on commit” option, these should be disabled and completely refreshed at a later phase. For better control of the process the indexes may be dropped and recreated after the load; if the indexes remain, then during the direct load Oracle will build them using an efficient method, similar in performance to creating an index on a full table. It is possible to load several tables and/or partitions simultaneously by running several SQL*Loader sessions at once; this can improve load performance significantly on a multi-processor machine with good I/O. A sketch of the preparatory statements and the loader invocation follows.
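A minimal sketch, assuming a large table sales.fact_orders with a foreign key and a trigger, and a control file fact_orders.ctl generated by FastReader (all names are placeholders). The constraint and trigger are disabled first, then SQL*Loader is run in direct-path mode; several such sessions can be started at once for different tables or partitions:

    -- in SQL*Plus on the destination database
    ALTER TABLE sales.fact_orders DISABLE CONSTRAINT fk_orders_customer;
    ALTER TRIGGER sales.trg_orders_audit DISABLE;

    -- from the shell, one session per table or partition being loaded
    sqlldr userid=sales/change_me@DESTDB control=fact_orders.ctl direct=true log=fact_orders.log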
Step 7: Recreate the indexes (if they were dropped) on the large loaded tables.
This is a major, computationally intensive step, comparable to the unload/load phase. Care should be invested in improving its performance by increasing the sort area or PGA memory sizes, by performing several index builds simultaneously within the available constraints (for instance, you cannot effectively build two indexes on the same table at the same time), and possibly by creating partitioned indexes in an UNUSABLE state and then rebuilding several partitions simultaneously. A sketch of these options follows.
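An illustrative sketch of these techniques, assuming sales.fact_orders is range-partitioned and that manually sized work areas are acceptable for the building session; all index and partition names are hypothetical:

    -- give the building session a large sort area
    ALTER SESSION SET workarea_size_policy = MANUAL;
    ALTER SESSION SET sort_area_size = 536870912;

    -- parallel, NOLOGGING build of a large index
    CREATE INDEX sales.ix_orders_customer ON sales.fact_orders (customer_id) PARALLEL 8 NOLOGGING;

    -- partitioned variant: create UNUSABLE, then rebuild partitions (possibly from several sessions)
    CREATE INDEX sales.ix_orders_date ON sales.fact_orders (order_date) LOCAL UNUSABLE;
    ALTER INDEX sales.ix_orders_date REBUILD PARTITION p2004_q1 PARALLEL 4 NOLOGGING;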
Step 8: Enable the constraints and triggers on the large tables, using the NOVALIDATE clause on the constraints for a performance improvement.
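Continuing the placeholder names from the load step:

    ALTER TABLE sales.fact_orders ENABLE NOVALIDATE CONSTRAINT fk_orders_customer;
    ALTER TRIGGER sales.trg_orders_audit ENABLE;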
Step 9: Recreate and/or rebuild the materialized views using a complete refresh.
Alternatively, materialized views can be moved with their data intact and then have their status changed to TRUSTED, or be created on pre-built tables, but this approach is more complex to set up and execute correctly.
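A complete refresh of a single, hypothetical materialized view from SQL*Plus might look like:

    EXEC DBMS_MVIEW.REFRESH('SALES.MV_ORDERS_BY_REGION', method => 'C');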
Step 10: Recreate (if not already present) the indexes on the materialized views.