Filtering Data and Metadata During a Job
The master table is created in the schema of the user running the Data Pump job. Therefore, that user must have sufficient tablespace quota for its creation. The name of the master table is
the same as the name of the job that created it. Therefore, you cannot explicitly give a Data Pump job the
same name as a preexisting table or view.
For all operations, the information in the master table is used to restart a job.
The master table is either retained or dropped, depending on the circumstances, as follows:
Upon successful completion of a job, the master table is dropped.
If a job is stopped using the STOP_JOB interactive command, the master table is retained for use in restarting the job.
If a job is killed using the KILL_JOB interactive command, the master table is dropped and the job cannot be restarted.
If a job terminates unexpectedly, the master table is retained. You can delete it if you do not intend to restart the job.
See Also:
JOB_NAME for more information about how job names are formed.
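For illustration, a job name can be supplied explicitly on the command line. In the following sketch, the user hr, the directory object dpump_dir1, and the file name hr.dmp are hypothetical placeholders; the job, and therefore its master table, is named HR_EXPORT:

    > expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp JOB_NAME=hr_export

For the duration of the job, the master table HR_EXPORT would reside in the hr schema.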
See Also:
Filtering During Export Operations
Filtering During Import Operations
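As a brief illustration of such filtering (using the same hypothetical user and directory object as above), the EXCLUDE parameter omits an entire class of objects from an export:

    > expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp EXCLUDE=INDEX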
See Also:
REMAP_DATAFILE
REMAP_SCHEMA
REMAP_TABLESPACE
TRANSFORM
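For example, REMAP_SCHEMA moves objects from one schema to another during import. The credentials and file name below are hypothetical placeholders:

    > impdp system/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp REMAP_SCHEMA=hr:scott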
Note:
The ability to adjust the degree of parallelism is available only in the Enterprise Edition of Oracle Database. The value of PARALLEL is restricted to 1 in the Standard Edition of Oracle Database 10g.
When a worker process is assigned the task of loading or unloading a very large table or partition, it may
choose to use the external tables access method to make maximum use of parallel execution. In such a
case, the worker process becomes a parallel execution coordinator. The actual loading and unloading work
is divided among some number of parallel I/O execution processes (sometimes called slaves) allocated
from the instance-wide pool of parallel I/O execution processes.
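A minimal sketch of setting the degree of parallelism at job start (names are placeholders; the %U substitution variable generates distinct dump file names for the parallel streams):

    > expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=par%U.dmp PARALLEL=4

The degree can also be changed on a running job with the PARALLEL command in interactive-command mode.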
Job status can be displayed on request in interactive-command mode. The information displayed can
include the job description and state, a description of the current operation or item being processed, files
being written, and a cumulative status.
See Also:
STATUS for information about the STATUS command in interactive Export.
STATUS for information about the STATUS command in interactive Import.
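For example, you can attach to a running job and request its status; the job name below assumes the hypothetical HR_EXPORT job shown earlier:

    > expdp hr/hr ATTACH=hr_export

    Export> STATUS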
A log file can also be optionally written during the execution of a job. The log file summarizes the
progress of the job, lists any errors that were encountered along the way, and records the completion status
of the job.
See Also:
LOGFILE for information about how to set the file specification for a log file for Export
LOGFILE for information about how to set the file specification for a log file for Import
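A minimal sketch, with placeholder names, of directing the log to a specific file in a directory object:

    > expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp LOGFILE=hr_export.log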
An alternative way to determine job status or to get other information about Data Pump jobs is to query the DBA_DATAPUMP_JOBS, USER_DATAPUMP_JOBS, or DBA_DATAPUMP_SESSIONS views.
See Oracle Database SQL Reference for descriptions of these views.
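For example, the following query, run from SQL*Plus with DBA privileges (the column selection is illustrative), lists the Data Pump jobs in the database:

    SQL> SELECT owner_name, job_name, operation, job_mode, state
         FROM dba_datapump_jobs;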
Note:
The usefulness of the estimate value for export operations depends on the type of estimation requested when the operation was initiated, and it is updated as required if exceeded by the actual transfer amount. The estimate value for import operations is exact.
The V$SESSION_LONGOPS columns that are relevant to a Data Pump job are as follows:
USERNAME - job owner
OPNAME - job name
TARGET_DESC - job operation
SOFAR - megabytes (MB) transferred thus far during the job
TOTALWORK - estimated number of megabytes (MB) in the job
UNITS - 'MB'
MESSAGE - a formatted status message of the form:
'<job_name>: <operation_name> : nnn out of mmm MB done'
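Putting these columns together, a monitoring query might look like the following; the job name SYS_EXPORT_SCHEMA_01 is only an example (it follows the default system-generated naming pattern):

    SQL> SELECT username, opname, target_desc, sofar, totalwork, units, message
         FROM v$session_longops
         WHERE opname = 'SYS_EXPORT_SCHEMA_01';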
File Allocation
There are three types of files managed by Data Pump jobs:
Dump files to contain the data and metadata that is being moved
Log files to record the messages associated with an operation
SQL files to record the output of a SQLFILE operation. A SQLFILE operation is invoked using the Data Pump Import SQLFILE parameter; rather than being executed, all of the SQL DDL that Import would otherwise execute (based on the other parameters) is written to a SQL file. See SQLFILE for more information, and the example following this list.
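As noted in the last item above, a SQLFILE operation writes DDL rather than executing it. A hypothetical invocation (placeholder names throughout):

    > impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=expfull.sql

No data is loaded; expfull.sql simply records the DDL that an import of expfull.dmp would have executed.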
An understanding of how Data Pump allocates and handles these files will help you to use Export and
Import to their fullest advantage.
For import operations, all dump files must be specified at the time the job is defined.
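For example (file names hypothetical), an import reading from a multi-file dump set must name every file when the job is defined:

    > impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=exp01.dmp,exp02.dmp,exp03.dmp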
Log files and SQL files will overwrite previously existing files. Dump files will never overwrite previously
existing files. Instead, an error will be generated.