
The master table is created in the schema of the current user performing the export or import operation.

Therefore, that user must have sufficient tablespace quota for its creation. The name of the master table is
the same as the name of the job that created it. Therefore, you cannot explicitly give a Data Pump job the
same name as a preexisting table or view.

For all operations, the information in the master table is used to restart a job.

The master table is either retained or dropped, depending on the circumstances, as follows:

 Upon successful job completion, the master table is dropped.
 If a job is stopped using the STOP_JOB interactive command, the master table is retained for use in restarting the job (see the example following this list).
 If a job is killed using the KILL_JOB interactive command, the master table is dropped and the job cannot be restarted.
 If a job terminates unexpectedly, the master table is retained. You can delete it if you do not intend to restart the job.
 If a job stops before it starts running (that is, it is in the Defining state), the master table is dropped.
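
For example, a minimal sketch of stopping and restarting a named job (the user hr, the directory object dpump_dir1, and the job and file names are illustrative placeholders, not taken from this document):

   expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_exp.dmp JOB_NAME=hr_exp_job

Pressing Ctrl+C switches the client from logging mode to interactive-command mode, where the job can be stopped while its master table is retained:

   Export> STOP_JOB=IMMEDIATE

Later, a client reattaches to the job by name and restarts it:

   expdp hr ATTACH=hr_exp_job
   Export> START_JOB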

See Also:
JOB_NAME for more information about how job names are formed.

Filtering Data and Metadata During a Job


Within the master table, specific objects are assigned attributes such as name or owning schema. Objects
also belong to a class of objects (such as TABLE, INDEX, or DIRECTORY). The class of an object is called
its object type. You can use the EXCLUDE and INCLUDE parameters to restrict the types of objects that are
exported and imported. Filtering can be based upon the name of the object or upon the name of the schema
that owns the object. You can also specify data-specific filters to restrict the rows that are exported and
imported.
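
For example, a minimal sketch, assuming the sample hr schema and a directory object named dpump_dir1 (placeholders, not taken from this document). On most operating systems the quotation marks must be escaped, so name and data filters are commonly placed in a parameter file instead:

   expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_meta.dmp SCHEMAS=hr EXCLUDE=INDEX
   impdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_meta.dmp INCLUDE=TABLE:"IN ('EMPLOYEES','DEPARTMENTS')"
   expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_data.dmp TABLES=employees QUERY=employees:"WHERE department_id = 10"

The first command excludes all indexes from the export, the second imports only the two named tables, and the third uses a data-specific filter to export only the rows that satisfy the QUERY clause.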

See Also:
 Filtering During Export Operations
 Filtering During Import Operations

Transforming Metadata During a Job


When you are moving data from one database to another, it is often useful to perform transformations on
the metadata for remapping storage between tablespaces or redefining the owner of a particular set of
objects. This is done using the following Data Pump Import
parameters: REMAP_DATAFILE, REMAP_SCHEMA, REMAP_TABLESPACE, and TRANSFORM.
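
For example, a minimal sketch (the schema, tablespace, directory, and file names are placeholders, not taken from this document) that imports the hr objects into a different schema and tablespace:

   impdp system DIRECTORY=dpump_dir1 DUMPFILE=hr_exp.dmp REMAP_SCHEMA=hr:hr_test REMAP_TABLESPACE=example:users

Adding TRANSFORM=SEGMENT_ATTRIBUTES:n instead would drop the storage and tablespace clauses from the imported DDL altogether, so that the objects take the defaults of the target schema.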

See Also:
 REMAP_DATAFILE
 REMAP_SCHEMA
 REMAP_TABLESPACE
 TRANSFORM

Maximizing Job Performance


To improve throughput of a job, you can use the PARALLEL parameter to set a degree of parallelism that
takes maximum advantage of current conditions. For example, to limit the effect of a job on a production
system, the database administrator (DBA) might wish to restrict the parallelism. The degree of parallelism
can be reset at any time during a job. For example, PARALLEL could be set to 2 during production hours to
restrict a particular job to only two degrees of parallelism, and during nonproduction hours it could be reset
to 8. The parallelism setting is enforced by the master process, which allocates work to be executed to
worker processes that perform the data and metadata processing within an operation. These worker
processes operate in parallel. In general, the degree of parallelism should be set to no more than twice the
number of CPUs on an instance.

Note:
The ability to adjust the degree of parallelism is available only in the Enterprise Edition of Oracle Database.
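
For example, a minimal sketch (names are placeholders, not taken from this document) that starts an export with two worker processes and later raises the degree of parallelism during nonproduction hours; the %U substitution variable lets Data Pump create as many dump files as the parallelism requires:

   expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_%U.dmp PARALLEL=2 JOB_NAME=hr_exp_job

   expdp hr ATTACH=hr_exp_job
   Export> PARALLEL=8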

Loading and Unloading of Data


The worker processes are the ones that actually unload and load metadata and table data in parallel.
Worker processes are created as needed until the number of worker processes is equal to the value supplied
for the PARALLEL command-line parameter. The number of active worker processes can be reset
throughout the life of a job.

Note:
The value of PARALLEL is restricted to 1 in the Standard Edition of Oracle Database 10g.

When a worker process is assigned the task of loading or unloading a very large table or partition, it may
choose to use the external tables access method to make maximum use of parallel execution. In such a
case, the worker process becomes a parallel execution coordinator. The actual loading and unloading work
is divided among some number of parallel I/O execution processes (sometimes called slaves) allocated
from the instance-wide pool of parallel I/O execution processes.

Monitoring Job Status


The Data Pump Export and Import utilities can be attached to a job in either interactive-command mode or
logging mode. In logging mode, real-time detailed status about the job is automatically displayed during
job execution. The information displayed can include the job and parameter descriptions, an estimate of the
amount of data to be exported, a description of the current operation or item being processed, files used
during the job, any errors encountered, and the final job state (Stopped or Completed).
See Also:
 STATUS for information about changing the frequency of the status display in command-line Export
 STATUS for information about changing the frequency of the status display in command-line Import

Job status can be displayed on request in interactive-command mode. The information displayed can
include the job description and state, a description of the current operation or item being processed, files
being written, and a cumulative status.

See Also:
 STATUS for information about the STATUS command in interactive Export.
 STATUS for information about the STATUS command in interactive Import
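
For example, a hedged sketch of requesting status from interactive-command mode (the job name is a placeholder, not taken from this document):

   expdp hr ATTACH=hr_exp_job
   Export> STATUS
   Export> STATUS=60

The first STATUS command displays the cumulative job status once; supplying an integer additionally sets how frequently, in seconds, status is refreshed in logging mode.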

A log file can also be optionally written during the execution of a job. The log file summarizes the
progress of the job, lists any errors that were encountered along the way, and records the completion status
of the job.

See Also:
 LOGFILE for information about how to set the file specification for a log file for Export
 LOGFILE for information about how to set the file specification for a log file for Import
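
For example, a minimal sketch (the directory object and file names are placeholders, not taken from this document):

   expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_exp.dmp LOGFILE=hr_exp.log

The log file can also be written to a different directory object by prefixing the file name, for example LOGFILE=dpump_log_dir:hr_exp.log.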

An alternative way to determine job status or to get other information about Data Pump jobs is to query the DBA_DATAPUMP_JOBS, USER_DATAPUMP_JOBS, or DBA_DATAPUMP_SESSIONS views.
See Oracle Database SQL Reference for descriptions of these views.
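
For example, a minimal sketch of such a query, run as a user with the appropriate privileges:

   -- List every Data Pump job, its mode, current state, and degree of parallelism
   SELECT owner_name, job_name, operation, job_mode, state, degree, attached_sessions
   FROM   dba_datapump_jobs;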

Monitoring the Progress of Executing Jobs


Data Pump operations that transfer table data (export and import) maintain an entry in
the V$SESSION_LONGOPS dynamic performance view indicating the job progress (in megabytes of table
data transferred). The entry contains the estimated transfer size and is periodically updated to reflect the
actual amount of data transferred.

Note:
The usefulness of the estimate value for export operations depends on the type of estimation requested when the operation was initiated, and it is updated as required if exceeded by the actual transfer amount. The estimate value for import operations is exact.

The V$SESSION_LONGOPS columns that are relevant to a Data Pump job are as follows:
 USERNAME - job owner
 OPNAME - job name
 TARGET_DESC - job operation
 SOFAR - megabytes (MB) transferred thus far during the job
 TOTALWORK - estimated number of megabytes (MB) in the job
 UNITS - 'MB'
 MESSAGE - a formatted status message of the form:
      '<job_name>: <operation_name> : nnn out of mmm MB done'
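
For example, a minimal sketch of monitoring progress with these columns (the job name in the WHERE clause is a placeholder, not taken from this document):

   -- Show how many megabytes have been transferred out of the estimated total
   SELECT username, opname, target_desc, sofar, totalwork, units, message
   FROM   v$session_longops
   WHERE  opname = 'HR_EXP_JOB';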

File Allocation
There are three types of files managed by Data Pump jobs:

 Dump files to contain the data and metadata that is being moved
 Log files to record the messages associated with an operation
 SQL files to record the output of a SQLFILE operation. A SQLFILE operation is invoked using
the Data Pump Import SQLFILE parameter and writes to a SQL file all of the SQL DDL that Import
would otherwise execute, based on the other parameters (see the example below). See SQLFILE for more
information.

An understanding of how Data Pump allocates and handles these files will help you to use Export and
Import to their fullest advantage.
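
For example, a minimal sketch of a SQLFILE operation (the directory object and file names are placeholders, not taken from this document); the DDL is written to the SQL file and nothing is actually imported:

   impdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_exp.dmp SQLFILE=hr_ddl.sql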

Specifying Files and Adding Additional Dump Files


For export operations, you can specify dump files at the time the job is defined, as well as at a later time
during the operation. For example, if you discover that space is running low during an export operation,
you can add additional dump files by using the Data Pump Export ADD_FILE command in interactive
mode.

For import operations, all dump files must be specified at the time the job is defined.

Log files and SQL files will overwrite previously existing files. Dump files will never overwrite previously
existing files. Instead, an error will be generated.
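
For example, a minimal sketch of adding dump files from interactive-command mode during an export (press Ctrl+C to reach the Export> prompt; the file and directory names are placeholders, not taken from this document):

   Export> ADD_FILE=hr_exp02.dmp
   Export> ADD_FILE=dpump_dir2:hr_exp03.dmp

The second form places the new dump file in a different directory object.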

Default Locations for Dump, Log, and SQL Files
