Oracle8i Admin Guide
Administrator’s Guide
Release 8.1.5
February 1999
Part No. A67772-01
Administrator’s Guide, Release 8.1.5
Contributing Authors: Alex Tsukerman, Andre Kruglikov, Ann Rhee, Ashwini Surpur, Bhaskar
Himatsingka, Harvey Eneman, Jags Srinivasan, Lois Price, Robert Jenkins, Sophia Yeung, Vinay Srihari,
Wei Huang, Jonathan Klein, Mike Hartstein, Bill Lee, Diana Lorentz, Lance Ashdown, Phil Locke, Ekrem
Soylemez, Connie Dialaris, Steven Wertheimer, Val Kane, Mary Rhodes, Archna Kalra, Nina Lewis
Graphic Designer: Valarie Moore
The Programs are not intended for use in any nuclear, aviation, mass transit, medical, or other
inherently dangerous applications. It shall be the licensee’s responsibility to take all appropriate
fail-safe, backup, redundancy and other measures to ensure the safe use of such applications if the
Programs are used for such purposes, and Oracle disclaims liability for any damages caused by such
use of the Programs.
The Programs (which include both the software and documentation) contain proprietary information of
Oracle Corporation; they are provided under a license agreement containing restrictions on use and
disclosure and are also protected by copyright, patent, and other intellectual and industrial property
laws. Reverse engineering, disassembly, or decompilation of the Programs is prohibited.
The information contained in this document is subject to change without notice. If you find any
problems in the documentation, please report them to us in writing. Oracle Corporation does not
warrant that this document is error free. Except as may be expressly permitted in your license agreement
for these Programs, no part of these Programs may be reproduced or transmitted in any form or by any
means, electronic or mechanical, for any purpose, without the express written permission of Oracle
Corporation.
If the Programs are delivered to the U.S. Government or anyone licensing or using the Programs on
behalf of the U.S. Government, the following notice is applicable:
Restricted Rights Notice Programs delivered subject to the DOD FAR Supplement are "commercial
computer software" and use, duplication, and disclosure of the Programs including documentation, shall
be subject to the licensing restrictions set forth in the applicable Oracle license agreement. Otherwise,
Programs delivered subject to the Federal Acquisition Regulations are "restricted computer software"
and use, duplication, and disclosure of the Programs shall be subject to the restrictions in FAR 52.227-19,
Commercial Computer Software - Restricted Rights (June, 1987). Oracle Corporation, 500 Oracle
Parkway, Redwood City, CA 94065.
Oracle is a registered trademark, and Net8, Oracle Call Interface, Oracle7, Oracle8, Oracle8i, Oracle
Designer, Oracle Enterprise Manager, Oracle Forms, Oracle Parallel Server, Oracle Server Manager,
Oracle SQL*Loader, LogMiner, PL/SQL, Pro*C, SQL*Net, SQL*Plus, and Trusted Oracle are
trademarks or registered trademarks of Oracle Corporation. All other company or product names
mentioned are used for identification purposes only and may be trademarks of their respective owners.
Contents
Preface........................................................................................................................................................ xxiii
Using ORAPWD ......................................................................................................................... 1-10
Setting REMOTE_LOGIN_PASSWORDFILE........................................................................ 1-11
Adding Users to a Password File ............................................................................................. 1-12
Connecting with Administrator Privileges............................................................................. 1-14
Maintaining a Password File..................................................................................................... 1-15
Database Administrator Utilities................................................................................................... 1-17
SQL*Loader ................................................................................................................................. 1-17
Export and Import ...................................................................................................................... 1-17
Priorities of a Database Administrator ......................................................................................... 1-17
Step 1: Install the Oracle Software............................................................................................ 1-18
Step 2: Evaluate the Database Server Hardware.................................................................... 1-18
Step 3: Plan the Database........................................................................................................... 1-18
Step 4: Create and Open the Database..................................................................................... 1-19
Step 5: Implement the Database Design .................................................................................. 1-20
Step 6: Back Up the Database.................................................................................................... 1-20
Step 7: Enroll System Users ....................................................................................................... 1-20
Step 8: Tune Database Performance......................................................................................... 1-20
Identifying Oracle Software Releases .......................................................................................... 1-21
Release Number Format ............................................................................................................ 1-21
Versions of Other Oracle Software........................................................................................... 1-22
Checking Your Current Release Number ............................................................................... 1-22
DB_BLOCK_SIZE ....................................................................................................................... 2-11
DB_BLOCK_BUFFERS............................................................................................................... 2-11
PROCESSES................................................................................................................................. 2-12
ROLLBACK_SEGMENTS ......................................................................................................... 2-12
License Parameters ..................................................................................................................... 2-12
LICENSE_MAX_SESSIONS and LICENSE_SESSIONS_WARNING................................ 2-13
LICENSE_MAX_USERS ............................................................................................................ 2-13
Considerations After Creating a Database .................................................................................. 2-14
Initial Tuning Guidelines ............................................................................................................... 2-14
Allocating Rollback Segments .................................................................................................. 2-14
Choosing the Number of DB_BLOCK_LRU_LATCHES...................................................... 2-15
Distributing I/O.......................................................................................................................... 2-15
4 Managing Oracle Processes
Setting Up Server Processes.............................................................................................................. 4-2
When to Connect to a Dedicated Server Process ..................................................................... 4-2
Configuring Oracle for Multi-Threaded Server Architecture .................................................... 4-3
MTS_DISPATCHERS: Setting the Initial Number of Dispatchers (Required) .................... 4-5
Modifying Server Processes.............................................................................................................. 4-6
Changing the Minimum Number of Shared Server Processes .............................................. 4-6
Adding and Removing Dispatcher Processes .......................................................................... 4-7
Tracking Oracle Processes ................................................................................................................. 4-7
Monitoring the Processes of an Oracle Instance....................................................................... 4-8
Trace Files, the ALERT File, and Background Processes ...................................................... 4-10
Starting the Checkpoint Process ............................................................................................... 4-12
Managing Processes for the Parallel Query Option................................................................... 4-12
Managing the Query Servers .................................................................................................... 4-13
Variations in the Number of Query Server Processes ........................................................... 4-13
Managing Processes for External Procedures.............................................................................. 4-14
Terminating Sessions ....................................................................................................................... 4-15
Identifying Which Session to Terminate ................................................................................. 4-16
Terminating an Active Session ................................................................................................. 4-16
Terminating an Inactive Session............................................................................................... 4-17
Dropping Control Files...................................................................................................................... 5-9
Performing Manual Archiving ................................................................................................. 7-10
Specifying the Archive Destination .............................................................................................. 7-11
Specifying Archive Destinations .............................................................................................. 7-11
Understanding Archive Destination States ............................................................................ 7-13
Specifying the Mode of Log Transmission .................................................................................. 7-14
Normal Transmission Mode ..................................................................................................... 7-15
Standby Transmission Mode..................................................................................................... 7-15
Managing Archive Destination Failure ........................................................................................ 7-16
Specifying the Minimum Number of Successful Destinations ............................................ 7-17
Re-Archiving to a Failed Destination....................................................................................... 7-19
Tuning Archive Performance .......................................................................................................... 7-20
Specifying Multiple ARCn Processes....................................................................................... 7-20
Setting Archive Buffer Parameters........................................................................................... 7-22
Displaying Archived Redo Log Information............................................................................... 7-23
Using LogMiner to Analyze Online and Archived Redo Logs................................................ 7-25
How Can You Use LogMiner? .................................................................................................. 7-26
Restrictions................................................................................................................................... 7-26
Creating a Dictionary File.......................................................................................................... 7-27
Specifying Redo Logs for Analysis .......................................................................................... 7-29
Using LogMiner .......................................................................................................................... 7-30
Using LogMiner: Scenarios ....................................................................................................... 7-32
Viewing Job Queue Information ................................................................................................... 8-15
9 Managing Tablespaces
Guidelines for Managing Tablespaces ........................................................................................... 9-2
Using Multiple Tablespaces ........................................................................................................ 9-2
Specifying Tablespace Storage Parameters............................................................................... 9-3
Assigning Tablespace Quotas to Users ..................................................................................... 9-3
Creating Tablespaces.......................................................................................................................... 9-3
Creating Locally Managed Tablespaces.................................................................................... 9-5
Creating a Temporary Tablespace ............................................................................................. 9-6
Managing Tablespace Allocation..................................................................................................... 9-8
Altering Storage Settings for Tablespaces................................................................................. 9-8
Coalescing Free Space .................................................................................................................. 9-8
Altering Tablespace Availability ................................................................................................... 9-10
Bringing Tablespaces Online .................................................................................................... 9-10
Taking Tablespaces Offline ....................................................................................................... 9-10
Making a Tablespace Read-Only................................................................................................... 9-12
Prerequisites ................................................................................................................................ 9-13
Making a Read-Only Tablespace Writeable ........................................................................... 9-14
Creating a Read-Only Tablespace on a WORM Device........................................................ 9-14
Dropping Tablespaces...................................................................................................................... 9-14
Using the DBMS_SPACE_ADMIN Package ............................................................................... 9-16
Scenario 1 ..................................................................................................................................... 9-16
Scenario 2 ..................................................................................................................................... 9-17
Scenario 3 ..................................................................................................................................... 9-17
Scenario 4 ..................................................................................................................................... 9-17
Transporting Tablespaces Between Databases............................................................................ 9-18
Introduction to Transportable Tablespaces ............................................................................ 9-18
Current Limitations.................................................................................................................... 9-20
Step 1: Pick a Self-contained Set of Tablespaces .................................................................... 9-20
Step 2: Generate a Transportable Tablespace Set................................................................... 9-22
Step 3: Transport the Tablespace Set ....................................................................................... 9-23
Step 4: Plug In the Tablespace Set ............................................................................................ 9-23
Object Behaviors ......................................................................................................................... 9-24
Transporting and Attaching Partitions for Data Warehousing: Example.......................... 9-27
Publishing Structured Data on CDs......................................................................................... 9-29
Mounting the Same Tablespace Read-only on Multiple Databases .................................... 9-29
Archiving Historical Data via Transportable Tablespaces ................................................... 9-30
Using Transportable Tablespaces to Perform TSPITR .......................................................... 9-30
Viewing Information About Tablespaces .................................................................................... 9-31
10 Managing Datafiles
Guidelines for Managing Datafiles............................................................................................... 10-2
Determine the Number of Datafiles......................................................................................... 10-2
Set the Size of Datafiles .............................................................................................................. 10-4
Place Datafiles Appropriately ................................................................................................... 10-4
Store Datafiles Separate From Redo Log Files........................................................................ 10-4
Creating and Adding Datafiles to a Tablespace .......................................................................... 10-5
Changing a Datafile’s Size............................................................................................................... 10-5
Enabling and Disabling Automatic Extension for a Datafile ............................................... 10-5
Manually Resizing a Datafile .................................................................................................... 10-6
Altering Datafile Availability ......................................................................................................... 10-7
Bringing Datafiles Online in ARCHIVELOG Mode .............................................................. 10-8
Taking Datafiles Offline in NOARCHIVELOG Mode .......................................................... 10-8
Renaming and Relocating Datafiles .............................................................................................. 10-9
Renaming and Relocating Datafiles for a Single Tablespace ............................................... 10-9
Renaming and Relocating Datafiles for Multiple Tablespaces .......................................... 10-10
Verifying Data Blocks in Datafiles .............................................................................................. 10-12
Viewing Information About Datafiles ........................................................................................ 10-13
12 Guidelines for Managing Schema Objects
Managing Space in Data Blocks .................................................................................................... 12-2
The PCTFREE Parameter........................................................................................................... 12-2
The PCTUSED Parameter.......................................................................................................... 12-4
Selecting Associated PCTUSED and PCTFREE Values ........................................................ 12-6
Setting Storage Parameters ............................................................................................................. 12-7
Storage Parameters You Can Specify....................................................................................... 12-7
Setting INITRANS and MAXTRANS ...................................................................................... 12-9
Setting Default Storage Parameters for Segments in a Tablespace ................................... 12-10
Setting Storage Parameters for Data Segments .................................................................... 12-10
Setting Storage Parameters for Index Segments .................................................................. 12-10
Setting Storage Parameters for LOB Segments .................................................................... 12-11
Changing Values for Storage Parameters ............................................................................. 12-11
Understanding Precedence in Storage Parameters.............................................................. 12-11
Deallocating Space ......................................................................................................................... 12-13
Viewing the High Water Mark ............................................................................................... 12-13
Issuing Space Deallocation Statements ................................................................................. 12-13
Understanding Space Use of Datatypes ..................................................................................... 12-17
Summary of Oracle Datatypes................................................................................................ 12-19
Merging Partitions .................................................................................................................... 13-18
Exchanging Table Partitions.................................................................................................... 13-18
Rebuilding Index Partitions .................................................................................................... 13-20
Moving the Time Window in a Historical Table.................................................................. 13-20
Quiescing Applications During a Multi-Step Maintenance Operation ............................ 13-21
14 Managing Tables
Guidelines for Managing Tables ................................................................................................... 14-2
Design Tables Before Creating Them ...................................................................................... 14-2
Specify How Data Block Space Is to Be Used ......................................................................... 14-3
Specify Transaction Entry Parameters..................................................................................... 14-3
Specify the Location of Each Table........................................................................................... 14-3
Parallelize Table Creation.......................................................................................................... 14-4
Consider Creating UNRECOVERABLE Tables ..................................................................... 14-4
Estimate Table Size and Set Storage Parameters.................................................................... 14-5
Plan for Large Tables.................................................................................................................. 14-5
Table Restrictions........................................................................................................................ 14-6
Creating Tables .................................................................................................................................. 14-9
Altering Tables ................................................................................................................................ 14-10
Manually Allocating Storage for a Table.................................................................................... 14-11
Dropping Tables.............................................................................................................................. 14-12
Dropping Columns................................................................................................................... 14-13
Index-Organized Tables................................................................................................................. 14-13
What Are Index-Organized Tables?....................................................................................... 14-14
Creating Index-Organized Tables .......................................................................................... 14-16
Maintaining Index-Organized Tables.................................................................................... 14-19
Analyzing Index-Organized Tables ....................................................................................... 14-21
Using the ORDER BY Clause with Index-Organized Tables ............................................. 14-22
Converting Index-Organized Tables to Regular Tables...................................................... 14-22
Dropping Views.......................................................................................................................... 15-9
Managing Sequences ....................................................................................................................... 15-9
Creating Sequences .................................................................................................................. 15-10
Altering Sequences ................................................................................................................... 15-10
Initialization Parameters Affecting Sequences..................................................................... 15-11
Dropping Sequences ................................................................................................................ 15-11
Managing Synonyms ..................................................................................................................... 15-11
Creating Synonyms .................................................................................................................. 15-12
Dropping Synonyms ................................................................................................................ 15-12
16 Managing Indexes
Guidelines for Managing Indexes................................................................................................. 16-2
Create Indexes After Inserting Table Data.............................................................................. 16-3
Limit the Number of Indexes per Table .................................................................................. 16-3
Specify Transaction Entry Parameters..................................................................................... 16-4
Specify Index Block Space Use ................................................................................................. 16-4
Specify the Tablespace for Each Index .................................................................................... 16-4
Parallelize Index Creation ......................................................................................................... 16-5
Consider Creating Indexes with NOLOGGING.................................................................... 16-5
Estimate Index Size and Set Storage Parameters ................................................................... 16-5
Considerations Before Disabling or Dropping Constraints ................................................. 16-7
Creating Indexes ............................................................................................................................... 16-7
Creating an Index Associated with a Constraint ................................................................... 16-8
Creating an Index Explicitly ..................................................................................................... 16-8
Creating an Index Online .......................................................................................................... 16-9
Creating a Function-Based Index ............................................................................................. 16-9
Re-creating an Existing Index ................................................................................................. 16-12
Creating a Key-Compressed Index ........................................................................................ 16-12
Altering Indexes.............................................................................................................................. 16-13
Monitoring Space Use of Indexes................................................................................................ 16-14
Dropping Indexes ........................................................................................................................... 16-15
17 Managing Clusters
Guidelines for Managing Clusters................................................................................................ 17-2
Choose Appropriate Tables for the Cluster ............................................................................ 17-4
Choose Appropriate Columns for the Cluster Key ............................................................... 17-4
Specify Data Block Space Use ................................................................................................... 17-5
Specify the Space Required by an Average Cluster Key and Its Associated Rows .......... 17-5
Specify the Location of Each Cluster and Cluster Index Rows............................................ 17-5
Estimate Cluster Size and Set Storage Parameters................................................................. 17-6
Creating Clusters............................................................................................................................... 17-6
Creating Clustered Tables ......................................................................................................... 17-7
Creating Cluster Indexes ........................................................................................................... 17-7
Altering Clusters ............................................................................................................................... 17-8
Altering Cluster Tables and Cluster Indexes.......................................................................... 17-9
Dropping Clusters .......................................................................................................................... 17-10
Dropping Clustered Tables ..................................................................................................... 17-10
Dropping Cluster Indexes ....................................................................................................... 17-11
Implications when Skipping Corrupt Blocks ......................................................................... 19-5
Step 4: Repair Corruptions and Rebuild Lost Data ................................................................... 19-6
Recover Data Using the dump_orphan_keys Procedures.................................................... 19-6
Repair Freelists Using the rebuild_freelists Procedure......................................................... 19-6
Limitations and Restrictions........................................................................................................... 19-6
DBMS_REPAIR Procedures............................................................................................................ 19-7
check_object................................................................................................................................. 19-7
fix_corrupt_blocks ...................................................................................................................... 19-8
dump_orphan_keys ................................................................................................................... 19-9
rebuild_freelists ........................................................................................................................ 19-10
skip_corrupt_blocks ................................................................................................................. 19-11
admin_tables ............................................................................................................................. 19-11
DBMS_REPAIR Exceptions .......................................................................................................... 19-12
Manually Recompiling Procedures and Functions.............................................................. 20-25
Manually Recompiling Packages ........................................................................................... 20-25
Managing Object Name Resolution............................................................................................ 20-25
Changing Storage Parameters for the Data Dictionary ........................................................... 20-26
Structures in the Data Dictionary ........................................................................................... 20-27
Errors that Require Changing Data Dictionary Storage ..................................................... 20-29
Displaying Information About Schema Objects ...................................................................... 20-29
Oracle Dictionary Storage Packages ...................................................................................... 20-30
Example 1: Displaying Schema Objects By Type ................................................................. 20-31
Example 2: Displaying Column Information........................................................................ 20-31
Example 3: Displaying Dependencies of Views and Synonyms........................................ 20-32
Example 4: Displaying General Segment Information........................................................ 20-32
Example 5: Displaying General Extent Information............................................................ 20-32
Example 6: Displaying the Free Space (Extents) of a Database ......................................... 20-33
Example 7: Displaying Segments that Cannot Allocate Additional Extents ................... 20-33
Explicitly Assigning a Transaction to a Rollback Segment .................................................... 21-13
Dropping Rollback Segments ...................................................................................................... 21-13
Monitoring Rollback Segment Information.............................................................................. 21-14
Displaying Rollback Segment Information........................................................................... 21-14
Viewing Licensing Limits and Current Values ...................................................................... 23-6
User Authentication.......................................................................................................................... 23-7
Database Authentication ........................................................................................................... 23-8
External Authentication............................................................................................................. 23-8
Enterprise Authentication ....................................................................................................... 23-10
Oracle Users ..................................................................................................................................... 23-11
Creating Users ........................................................................................................................... 23-11
Altering Users............................................................................................................................ 23-15
Dropping Users ......................................................................................................................... 23-16
Managing Resources with Profiles .............................................................................................. 23-17
Creating Profiles ....................................................................................................................... 23-18
Assigning Profiles ..................................................................................................................... 23-18
Altering Profiles ........................................................................................................................ 23-19
Using Composite Limits .......................................................................................................... 23-19
Dropping Profiles ..................................................................................................................... 23-21
Enabling and Disabling Resource Limits .............................................................................. 23-21
Listing Information About Database Users and Profiles ........................................................ 23-22
Listing Information about Users and Profiles: Examples ................................................... 23-23
Examples ........................................................................................................................................... 23-26
Revoking Object Privileges and Roles ................................................................................... 24-12
Effects of Revoking Privileges ................................................................................................ 24-14
Granting to and Revoking from the User Group PUBLIC ................................................. 24-15
Granting Roles Using the Operating System or Network ..................................................... 24-16
Using Operating System Role Identification ........................................................................ 24-17
Using Operating System Role Management ........................................................................ 24-18
Granting and Revoking Roles When OS_ROLES=TRUE ................................................... 24-18
Enabling and Disabling Roles When OS_ROLES=TRUE ................................................... 24-19
Using Network Connections with Operating System Role Management ....................... 24-19
Listing Privilege and Role Information ..................................................................................... 24-19
Listing Privilege and Role Information: Examples.............................................................. 24-20
Index
Send Us Your Comments
Administrator’s Guide, Release 8.1.5
Oracle Corporation welcomes your comments and suggestions on the quality and usefulness of this
publication. Your input is an important part of the information used for revision.
■ Did you find any errors?
■ Is the information clearly presented?
■ Do you need more information? If so, where?
■ Are the examples correct? Do you need more examples?
■ What features did you like most about this manual?
If you find any errors or have any other suggestions for improvement, please indicate the chapter,
section, and page number (if available). Please send your comments to:
Server Technologies Documentation Manager
Oracle Corporation
500 Oracle Parkway
Redwood Shores, CA 94065
Fax: (650) 506-7228
or e-mail comments to the Information Development department at the following e-mail address:
[email protected]
Preface
This guide is for people who administer the operation of an Oracle database system.
These people, referred to as "database administrators" (DBAs), are assumed to be
responsible for ensuring the smooth operation of an Oracle database system and for
monitoring its use. The responsibilities of database administrators are described in
Chapter 1.
Audience
Readers of this guide are assumed to be familiar with relational database concepts.
They are also assumed to be familiar with the operating system environment under
which they are running Oracle.
Structure
This guide contains the following parts and chapters.
Part I: Basic Database Administration
Part III: Database Storage
Chapter 18, "Managing Hash Clusters": Consult this chapter for general guidelines to follow when altering or dropping hash clusters.

Chapter 19, "Detecting and Repairing Data Block Corruption": This chapter describes how to use the procedures in the DBMS_REPAIR package to detect and correct data block corruption.

Chapter 20, "General Management of Schema Objects": This chapter covers more specific aspects of schema management than those identified in Chapter 12. Consult this chapter for information about table analysis, truncation of tables and clusters, database triggers, integrity constraints, and object dependencies. You will also find a number of specific examples.

Chapter 21, "Managing Rollback Segments": Consult this chapter for guidelines to follow when working with rollback segments.
Conventions
This section explains the conventions used in this manual including the following:
■ text
■ syntax diagrams and notation
■ code examples
Text
This section explains the conventions used within the text.
UPPERCASE Characters
Uppercase text is used to call attention to command keywords, object names,
parameters, filenames, and so on.
For example, "If you create a private rollback segment, the name must be included
in the ROLLBACK_SEGMENTS parameter of the parameter file."
Italicized Characters
Italicized words within text are book titles or emphasized words.
Keywords
Keywords are words that have special meanings in the SQL language. In the syntax
diagrams in this manual, keywords appear in uppercase. You must use keywords in
your SQL statements exactly as they appear in the syntax diagram, except that they
can be either uppercase or lowercase. For example, you must use the CREATE
keyword to begin your CREATE TABLE statements just as it appears in the
CREATE TABLE syntax diagram.
Parameters
Parameters act as place holders in syntax diagrams. They appear in lowercase.
Parameters are usually names of database objects, Oracle datatype names, or
expressions. When you see a parameter in a syntax diagram, substitute an object or
expression of the appropriate type in your SQL statement. For example, to write a
CREATE TABLE statement, use the name of the table you want to create, such as
EMP, in place of the table parameter in the syntax diagram. (Note that parameter
names appear in italics in the text.)
This list shows parameters that appear in the syntax diagrams in this manual and
examples of the values you might substitute for them in your statements:
Code Examples
SQL and SQL*Plus commands and statements are separated from the text of
paragraphs in a monospaced font as follows:
INSERT INTO emp (empno, ename) VALUES (1000, 'JFEE');
ALTER TABLESPACE users ADD DATAFILE 'users2.ora' SIZE 50K;
Part I
Basic Database Administration
1
The Oracle Database Administrator
This chapter describes the responsibilities of the person who administers the Oracle
server, the database administrator.
The following topics are included:
■ Types of Oracle Users
■ Database Administrator Security and Privileges
■ Database Administrator Authentication
■ Password File Administration
■ Database Administrator Utilities
■ Priorities of a Database Administrator
■ Identifying Oracle Software Releases
Database Administrators
Because an Oracle database system can be quite large and have many users,
someone or some group of people must manage this system. The database
administrator (DBA) is this manager. Every database requires at least one person to
perform administrative duties.
A database administrator’s responsibilities can include the following tasks:
■ installing and upgrading the Oracle server and application tools
■ allocating system storage and planning future storage requirements for the
database system
■ creating primary database storage structures (tablespaces) after application
developers have designed an application
■ creating primary objects (tables, views, indexes) once application developers
have designed an application
■ modifying the database structure, as necessary, from information given by
application developers
■ enrolling users and maintaining system security
■ ensuring compliance with your Oracle license agreement
■ controlling and monitoring user access to the database
■ monitoring and optimizing the performance of the database
Security Officers
In some cases, a database might also have one or more security officers. A security
officer is primarily concerned with enrolling users, controlling and monitoring user
access to the database, and maintaining system security. You might not be
responsible for these duties if your site has a separate security officer.
Application Developers
An application developer designs and implements database applications. An
application developer’s responsibilities include the following tasks:
■ designing and developing the database application
■ designing the database structure for an application
■ estimating storage requirements for an application
■ specifying modifications of the database structure for an application
■ relaying the above information to a database administrator
■ tuning the application during development
■ establishing an application’s security measures during development
Application Administrators
An Oracle site might also have one or more application administrators. An
application administrator is responsible for the administration needs of a particular
application.
Database Users
Database users interact with the database via applications or utilities. A typical
user’s responsibilities include the following tasks:
■ entering, modifying, and deleting data, where permitted
Network Administrators
At some sites there may be one or more network administrators. Network
administrators may be responsible for administering Oracle networking products,
such as Net8.
See Also: "Network Administration" in Oracle8i Distributed Database Systems
You will probably want to create at least one additional administrator username to
use when performing daily administrative tasks.
SYS
When any database is created, the user SYS, identified by the password
CHANGE_ON_INSTALL, is automatically created and granted the DBA role.
All of the base tables and views for the database’s data dictionary are stored in the
schema SYS. These base tables and views are critical for the operation of Oracle. To
maintain the integrity of the data dictionary, tables in the SYS schema are
manipulated only by Oracle; they should never be modified by any user or database
administrator, and no one should create any tables in the schema of the user SYS.
(However, you can change the storage parameters of the data dictionary settings if
necessary.)
Most database users should never be able to connect using the SYS account. You can
connect to the database using this account but should do so only when instructed
by Oracle personnel or documentation.
SYSTEM
When a database is created, the user SYSTEM, identified by the password
MANAGER, is also automatically created and granted all system privileges for the
database.
The SYSTEM username is used to create additional tables and views that display
administrative information, as well as internal tables and views used by Oracle tools.
Never create tables of interest to individual users in the SYSTEM schema.
[Figure: decision tree for choosing an administrator authentication method, showing when to use operating system authentication (CONNECT / AS SYSDBA) and when to use a password file]
OSOPER and OSDBA can have different names and functionality, depending on
your operating system.
The OSOPER and OSDBA roles can only be granted to a user through the operating
system. They cannot be granted through a GRANT statement, nor can they be
revoked or dropped. When a user logs on with administrator privileges and
REMOTE_LOGIN_PASSWORDFILE is set to NONE, Oracle communicates with the
operating system and attempts to enable first OSDBA and then, if unsuccessful,
OSOPER. If both attempts fail, the connection fails. How you grant these privileges
through the operating system is operating system specific.
If you are performing remote database administration, you should consult your
Net8 documentation to determine if you are using a secure connection. Most
popular connection protocols, such as TCP/IP and DECnet, are not secure,
regardless of which version of Net8 you are using.
See Also: For information about OS authentication of database administrators, see
your operating system-specific Oracle documentation.
The privilege SYSDBA permits the user to perform the same operations as
OSDBA. Likewise, the privilege SYSOPER permits the user to perform the same
operations as OSOPER.
4. Privileged users should now be able to connect to the database by using a
command similar to the one shown below.
CONNECT scott/[email protected] AS SYSDBA
See Also: See your operating system-specific Oracle documentation for information
on using the installer utility to install the password file.
Using ORAPWD
When you invoke the password file creation utility without supplying any
parameters, you receive a message indicating the proper use of the command as
shown in the following sample output:
orapwd
Usage: orapwd file=<fname> password=<password> entries=<users>
where
file - name of password file (mand),
password - password for SYS and INTERNAL (mand),
entries - maximum number of distinct DBAs and OPERs (opt),
There are no spaces around the equal-to (=) character.
For example, the following command creates a password file named ACCT.PWD
that allows up to 30 privileged users with different passwords. The file is initially
created with the password SECRET for users connecting as SYSOPER or SYSDBA:
ORAPWD FILE=acct.pwd PASSWORD=secret ENTRIES=30
FILE
This parameter sets the name of the password file being created. You must specify
the full pathname for the file. The contents of this file are encrypted, and the file is
not user-readable. This parameter is mandatory.
The types of file names allowed for the password file are operating system specific.
Some platforms require the password file to be a specific format and located in a
specific directory. Other platforms allow the use of environment variables to specify
the name and location of the password file. See your operating system-specific
Oracle documentation for the names and locations allowed on your platform.
If you are running multiple instances of Oracle using the Oracle Parallel Server, the
environment variable for each instance should point to the same password file.
PASSWORD
This parameter sets the password for SYSOPER and SYSDBA. If you issue the
ALTER USER command to change the password after connecting to the database,
both the password stored in the data dictionary and the password stored in the
password file are updated. The INTERNAL user is supported for backwards
compatibility only. This parameter is mandatory.
ENTRIES
This parameter sets the maximum number of entries allowed in the password file.
This corresponds to the maximum number of distinct users allowed to connect to
the database as SYSDBA or SYSOPER. Entries can be reused as users are added to
and removed from the password file. This parameter is required if you ever want
this password file to be EXCLUSIVE.
WARNING: If you ever need to exceed this limit, you must create
a new password file. It is safest to select a number larger than you
think you will ever need.
See Also: Consult your operating system-specific Oracle documentation for the
exact name of the password file or for the name of the environment variable used to
specify this name for your operating system.
NONE
Setting this parameter to NONE causes Oracle to behave as if the password file does
not exist. That is, no privileged connections are allowed over non-secure
connections. NONE is the default value for this parameter.
EXCLUSIVE
An EXCLUSIVE password file can be used with only one database. Only an
EXCLUSIVE file can contain the names of users other than SYSOPER and SYSDBA.
Using an EXCLUSIVE password file allows you to grant SYSDBA and SYSOPER
system privileges to individual users and have them connect as themselves.
SHARED
A SHARED password file can be used by multiple databases. However, the only
users recognized by a SHARED password file are SYSDBA and SYSOPER; you
cannot add users to a SHARED password file. All users needing SYSDBA or
SYSOPER system privileges must connect using the same name, SYS, and
password. This option is useful if you have a single DBA administering multiple
databases.
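For example, to allow individual administrators to authenticate through the password file, the parameter might be set in the initialization parameter file as follows. This is a sketch; the name and location of the parameter file are platform-specific.

```
# Enable password-file authentication for this database only;
# EXCLUSIVE allows users other than SYS to be added to the file.
REMOTE_LOGIN_PASSWORDFILE = EXCLUSIVE
```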
4. Start up the instance and create the database if necessary, or mount and open an
existing database.
5. Create users as necessary. Grant SYSOPER or SYSDBA privileges to yourself
and other users as appropriate.
6. These users are now added to the password file and can connect to the database
as SYSOPER or SYSDBA with a username and password (instead of using SYS).
The use of a password file does not prevent OS authenticated users from
connecting if they meet the criteria for OS authentication.
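Continuing step 5 above, the grants might look like the following sketch. SCOTT is an illustrative user name, and you must already be connected AS SYSDBA to issue these statements.

```sql
-- Grant administrative privileges to an individual user;
-- the user is added to the password file automatically.
GRANT SYSDBA TO scott;
GRANT SYSOPER TO scott;
```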
Use the REVOKE command to revoke the SYSDBA or SYSOPER system privilege
from a user, as shown in the following example:
REVOKE SYSDBA FROM scott;
Because SYSDBA and SYSOPER are the most powerful database privileges, the
ADMIN OPTION is not used. Only users currently connected as SYSDBA (or
INTERNAL) can grant SYSDBA or SYSOPER system privileges to another user. This
is also true of REVOKE. These privileges cannot be granted to roles, since roles are
only available after database startup. Do not confuse the SYSDBA and SYSOPER
database privileges with operating system roles, which are a completely
independent feature.
See Also: For more information about system privileges, see Chapter 24, "Managing
User Privileges and Roles".
USERNAME
The name of the user that is recognized by the password file.
SYSDBA
If the value of this column is TRUE, the user can log on with SYSDBA system
privileges.
SYSOPER
If the value of this column is TRUE, the user can log on with SYSOPER system
privileges.
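These columns are displayed by the V$PWFILE_USERS view; a query such as the following lists the current members of the password file:

```sql
-- Which users hold SYSDBA or SYSOPER through the password file?
SELECT username, sysdba, sysoper
  FROM v$pwfile_users;
```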
He receives an error that SCOTT_TEST does not exist. That is because SCOTT now
references the SYS schema by default, whereas the table was created in the SCOTT
schema.
SQL*Loader
SQL*Loader is used by both database administrators and users of Oracle. It loads
data from standard operating system files (files in text or C data format) into Oracle
database tables.
See Also: Oracle8i Utilities
An initial database might have been created during the installation procedure for
Oracle. If so, all you need to do is start an instance and mount and open the initial
database.
To determine if your operating system creates an initial database during the
installation of Oracle, check your installation or user’s guide. If no database is
created during installation or you want to create an additional database, see
Chapter 2 of this book for this procedure.
See Also: See Chapter 3 for database and instance startup and shutdown
procedures.
See Also: Oracle8i Tuning for information about tuning your database and
applications.
[Figure: the release number 8.1.5.1, whose digits are, from left to right, the version number (8), the maintenance release number (1), the patch release number (5), and the port-specific patch release number (1)]
Version Number
The version number, such as 8, is the most general identifier. A version is a major
new edition of the software, which usually contains significant new functionality.
8.2.0: the second maintenance release (the third release in all) of Oracle8i
8.2.2: the second patch release after the second maintenance release
This chapter lists the steps necessary to create an Oracle database, and includes the
following topics:
■ Considerations Before Creating a Database
■ Creating an Oracle Database
■ Parameters
■ Considerations After Creating a Database
■ Initial Tuning Guidelines
For information about the online and archive redo logs, and database backup and
recovery see Chapter 6, "Managing the Online Redo Log" and Chapter 7, "Managing
Archived Redo Logs".
Creation Prerequisites
To create a new database, you must have the following:
■ the operating system privileges associated with a fully operational database
administrator
■ sufficient memory to start the Oracle instance
■ sufficient disk storage space for the planned database on the computer that
executes Oracle
■ Dropping a Database
Step 1: Back up any existing databases. Oracle Corporation strongly recommends that
you make complete backups of all existing databases before creating a new
database, in case database creation accidentally affects some existing files. Backup
should include parameter files, datafiles, redo log files, and control files.
Step 2: Create parameter files. The instance (System Global Area and background
processes) for any Oracle database is started using a parameter file.
Each database on your system should have at least one customized parameter file
that corresponds only to that database. Do not use the same file for several
databases.
To create a parameter file for the database you are about to make, use your
operating system to make a copy of the parameter file that Oracle provided on the
distribution media. Give this copy a new filename. You can then edit and customize
this new file for the new database.
See Also: For more information about copying the parameter file, see your
operating system-specific Oracle documentation.
Step 3: Edit new parameter files. To create a new database, inspect and edit the
following parameters of the new parameter file:
Parameter Described
DB_NAME on page 2-9
DB_DOMAIN on page 2-9
CONTROL_FILES on page 2-10
DB_BLOCK_SIZE on page 2-11
DB_BLOCK_BUFFERS on page 2-11
PROCESSES on page 2-12
ROLLBACK_SEGMENTS on page 2-12
Step 4: Check the instance identifier for your system. If you have other databases, check
the Oracle instance identifier. The Oracle instance identifier should match the name
of the database (the value of DB_NAME) to avoid confusion with other Oracle
instances that are running concurrently on your system.
See your operating system-specific Oracle documentation for more information.
Step 5: Start SQL*Plus and connect to Oracle as SYSDBA. Connect to the database as
SYSDBA.
$ SQLPLUS /nolog
connect username/password as sysdba
Step 6: Start an instance. You can start an instance without mounting a database;
typically, you do so only during database creation. Use the STARTUP command
with the NOMOUNT option:
STARTUP NOMOUNT;
At this point, there is no database. Only an SGA and background processes are
started in preparation for the creation of a new database.
Step 7: Create the database. To create the new database, use the SQL CREATE
DATABASE statement, optionally setting parameters within the statement to name
the database, establish maximum numbers of files, name the files and set their sizes,
and so on.
When you execute a CREATE DATABASE statement, Oracle performs the following
operations:
■ creates the datafiles for the database
■ creates the control files for the database
■ creates the redo log files for the database
■ creates the SYSTEM tablespace and the SYSTEM rollback segment
■ creates the data dictionary
■ creates the users SYS and SYSTEM
■ specifies the character set that stores data in the database
■ mounts and opens the database for use
WARNING: Make sure that the datafile and redo log file names
that you specify do not conflict with files of another database.
See Also: You can also create a database with a locally managed SYSTEM
tablespace; for more information, see "Creating a Database with a Locally Managed
SYSTEM Tablespace" on page 9-5.
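For illustration, a minimal CREATE DATABASE statement might look like the following sketch. Every file name, size, limit, and the character set shown here is an assumption; adapt them to your own parameter file settings and platform, and note that the database name must match the DB_NAME parameter.

```sql
-- Sketch only: illustrative names, sizes, and character set
CREATE DATABASE test
    LOGFILE GROUP 1 ('test_log1a.rdo') SIZE 500K,
            GROUP 2 ('test_log2a.rdo') SIZE 500K
    MAXLOGFILES 5
    DATAFILE 'test_system.dbf' SIZE 10M
    MAXDATAFILES 100
    CHARACTER SET WE8ISO8859P1;
```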
Step 8: Back up the database. You should make a full backup of the database to ensure
that you have a complete set of files from which to recover if a media failure occurs.
See Also: The Oracle8i Backup and Recovery Guide.
For more information about parameter files see "Using Parameter Files" on
page 3-13.
For information about the CREATE DATABASE statement, character sets, and
database creation see the Oracle8i SQL Reference.
■ The new database does not overwrite any existing control files specified in the
parameter file.
Note: You can set several limits during database creation. Some of
these limits are also subject to superseding limits of the operating
system and can affect each other. For example, if you set
MAXDATAFILES, Oracle allocates enough space in the control file
to store MAXDATAFILES filenames, even if the database has only
one datafile initially; because the maximum control file size is
limited and operating system-dependent, you might not be able to
set all CREATE DATABASE parameters at their theoretical
maximums.
See Also: For more information about setting limits during database creation, see
the Oracle8i SQL Reference.
See your operating system-specific Oracle documentation for information about
operating system limits.
Dropping a Database
To drop a database, remove its datafiles, redo log files, and all other associated files
(control files, parameter files, archived log files).
To view the names of the database’s datafiles and redo log files, query the data
dictionary views V$DATAFILE and V$LOGFILE.
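For example, queries along these lines return the file names (NAME and MEMBER are the file-name columns of V$DATAFILE and V$LOGFILE, respectively):

```sql
-- Datafiles belonging to the database
SELECT name FROM v$datafile;
-- Online redo log file members
SELECT member FROM v$logfile;
```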
See Also: For more information about these views, see the Oracle8i Reference.
Parameters
As described in Step 3 of the section "Creating an Oracle Database", Oracle suggests
you alter a minimum set of parameters. These parameters are described in the
following sections:
■ DB_NAME and DB_DOMAIN
■ CONTROL_FILES
■ DB_BLOCK_SIZE
■ PROCESSES
■ ROLLBACK_SEGMENTS
■ License Parameters
■ DB_BLOCK_BUFFERS
■ LICENSE_MAX_SESSIONS and LICENSE_SESSIONS_WARNING
■ LICENSE_MAX_USERS
DB_NAME must be set to a text string of no more than eight characters. During
database creation, the name provided for DB_NAME is recorded in the datafiles,
redo log files, and control file of the database. If during database instance startup
the value of the DB_NAME parameter (of the parameter file) and the database
name in the control file are not the same, the database does not start.
DB_DOMAIN is a text string that specifies the network domain where the database
is created; this is typically the name of the organization that owns the database. If
the database you are about to create will ever be part of a distributed database
system, pay special attention to this initialization parameter before database
creation.
See Also: For more information about distributed databases, see Oracle8i Distributed
Database Systems.
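In the parameter file, these two parameters might be set as follows. The values are illustrative; together they form the global database name, here test.us.example.com.

```
# Global database name = DB_NAME.DB_DOMAIN
DB_NAME = test
DB_DOMAIN = us.example.com
```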
CONTROL_FILES
Include the CONTROL_FILES parameter in your new parameter file and set its
value to a list of control filenames to use for the new database. If you want Oracle to
create new operating system files when creating your database’s control files, make
sure that the filenames listed in the CONTROL_FILES parameter do not match any
filenames that currently exist on your system. If you want Oracle to reuse or
overwrite existing files when creating your database’s control files, make sure that
the filenames listed in the CONTROL_FILES parameter match the filenames that
currently exist.
If no filenames are listed for the CONTROL_FILES parameter, Oracle uses a default
filename.
Oracle Corporation strongly recommends you use at least two control files stored
on separate physical disk drives for each database. Therefore, when specifying the
CONTROL_FILES parameter of the new parameter file, follow these guidelines:
■ List at least two filenames for the CONTROL_FILES parameter.
■ Place each control file on a separate physical disk drive by fully specifying
filenames that refer to different disk drives.
When you execute the CREATE DATABASE statement (in Step 7), the control files
listed in the CONTROL_FILES parameter of the parameter file will be created.
See Also: The default filename for the CONTROL_FILES parameter is operating
system-dependent. See your operating system-specific Oracle documentation for
details.
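Following these guidelines, the parameter file entry might look like this sketch (the filenames and disk paths are hypothetical):

```
# Two control files on separate physical disk drives
CONTROL_FILES = (/disk1/oracle/dbs/ctrl1.ora, /disk2/oracle/dbs/ctrl2.ora)
```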
DB_BLOCK_SIZE
The default data block size for every Oracle server is operating system-specific. The
Oracle data block size is typically either 2K or 4K. Generally, the default data block
size is adequate. In some cases, however, a larger data block size provides greater
efficiency in disk and memory I/O (access and storage of data). Such cases include:
■ Oracle is on a large computer system with a large amount of memory and fast
disk drives. For example, databases controlled by mainframe computers with
vast hardware resources typically use a data block size of 4K or greater.
■ The operating system that runs Oracle uses a small operating system block size.
For example, if the operating system block size is 1K and the data block size
matches this, Oracle may be performing an excessive amount of disk I/O
during normal operation. For best performance in this case, a database block
should consist of multiple operating system blocks.
Each database’s block size is set during database creation by the initialization
parameter DB_BLOCK_SIZE. The block size cannot be changed after database creation
except by re-creating the database. If a database’s block size is different from the
operating system block size, make the database block size a multiple of the operating
system’s block size.
For example, if your operating system’s block size is 2K (2048 bytes), the following
setting for the DB_BLOCK_SIZE initialization parameter would be valid:
DB_BLOCK_SIZE=4096
DB_BLOCK_SIZE also determines the size of the database buffers in the buffer
cache of the System Global Area (SGA).
See Also: For details about your default block size, see your operating system-
specific Oracle documentation.
DB_BLOCK_BUFFERS
This parameter determines the number of buffers in the buffer cache in the System
Global Area (SGA). The number of buffers affects the performance of the cache.
Larger cache sizes reduce the number of disk writes of modified data. However, a
large cache may take up too much memory and induce memory paging or
swapping.
Estimate the number of data blocks that your application accesses most frequently,
including tables, indexes, and rollback segments. This estimate is a rough
approximation of the minimum number of buffers the cache should have. Typically,
1000 to 2000 is a practical minimum for the number of buffers.
See Also: For more information about tuning the buffer cache, see Oracle8i Tuning.
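As a sketch, assuming a 4K block size, a setting near the practical minimum reserves roughly 8M of memory for the buffer cache (DB_BLOCK_BUFFERS multiplied by DB_BLOCK_SIZE):

```
DB_BLOCK_SIZE    = 4096
DB_BLOCK_BUFFERS = 2000    # 2000 buffers x 4096 bytes = 8,192,000 bytes of cache
```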
PROCESSES
This parameter determines the maximum number of operating system processes
that can be connected to Oracle concurrently. The value of this parameter must
include 5 for the background processes and 1 for each user process. For example, if
you plan to have 50 concurrent users, set this parameter to at least 55.
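For the example above, the parameter file entry would be:

```
PROCESSES = 55    # 50 user processes + 5 background processes
```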
ROLLBACK_SEGMENTS
This parameter is a list of the rollback segments an Oracle instance acquires at
database startup. List your rollback segments as the value of this parameter.
See Also: For more information about how many rollback segments you need, see
Oracle8i Tuning.
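For example, if you have created rollback segments named rb1 and rb2 (hypothetical names), the entry might read:

```
ROLLBACK_SEGMENTS = (rb1, rb2)
```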
License Parameters
Oracle helps you ensure that your site complies with its Oracle license agreement. If
your site is licensed by concurrent usage, you can track and limit the number of
sessions concurrently connected to an instance. If your site is licensed by named
users, you can limit the number of named users created in a database. To use this
facility, you need to know which type of licensing agreement your site has and what
the maximum number of sessions or named users is. Your site might use either type
of licensing (session licensing or named user licensing), but not both.
See Also: For more information about managing licensing, see "Session and User
Licensing" on page 23-2.
In addition to setting a maximum number of sessions, you can set a warning limit
on the number of concurrent sessions. Once this limit is reached, additional users
can continue to connect (up to the maximum limit), but Oracle sends a warning for
each connecting user. To set the warning limit for an instance, set the parameter
LICENSE_SESSIONS_WARNING. Set the warning limit to a value lower than
LICENSE_MAX_SESSIONS.
For instances running with the Parallel Server, each instance can have its own
concurrent usage limit and warning limit. However, the sum of the instances’ limits
must not exceed the site’s session license.
See Also: For more information about setting these limits when using the Parallel
Server, see Oracle8i Parallel Server Concepts and Administration.
LICENSE_MAX_USERS
You can set a limit on the number of users created in the database. Once this limit is
reached, you cannot create more users.
For instances running with the Parallel Server, all instances connected to the same
database should have the same named user limit.
See Also: For more information about setting this limit when using the Parallel
Server see Oracle8i Parallel Server Concepts and Administration.
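As an illustrative sketch, a site licensed for 100 concurrent sessions might set its warning limit at 90, while a site licensed by named users would instead set LICENSE_MAX_USERS. Remember that a site uses one type of licensing, not both; the limits shown are hypothetical:

```
# Concurrent-usage licensing
LICENSE_MAX_SESSIONS     = 100
LICENSE_SESSIONS_WARNING = 90

# Named-user licensing (alternative; do not combine with the settings above)
# LICENSE_MAX_USERS      = 200
```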
transactions on your Oracle server. These guidelines are appropriate for most application
mixes.
To create rollback segments, use the CREATE ROLLBACK SEGMENT statement.
See Also: For information about the CREATE ROLLBACK SEGMENT statement,
see the Oracle8i SQL Reference.
Distributing I/O
Proper distribution of I/O can improve database performance dramatically. I/O can
be distributed during installation of Oracle. Distributing I/O during installation can
reduce the need to distribute I/O later when Oracle is running.
There are several ways to distribute I/O when you install Oracle:
■ redo log file placement
■ datafile placement
■ separation of tables and indexes
This chapter describes the procedures for starting and stopping an Oracle database,
and includes the following topics:
■ Starting Up a Database
■ Altering Database Availability
■ Shutting Down a Database
■ Suspending and Resuming a Database
■ Using Parameter Files
Starting Up a Database
This section includes the following topics:
■ Preparing to Start an Instance
■ Starting an Instance: Scenarios
To start up a database or an instance from the command line, use SQL*Plus to
connect to Oracle with administrator privileges and then issue the STARTUP
command. You can also use Recovery Manager to execute STARTUP and
SHUTDOWN commands. If you are using the Enterprise Manager GUI and prefer
not to use the command line, refer to the Oracle Enterprise Manager Administrator’s
Guide for instructions.
You can start an instance and database in a variety of ways:
■ start the instance without mounting a database
■ start the instance and mount the database, but leave it closed
■ start the instance, and mount and open the database in:
– unrestricted mode (accessible to all users)
– restricted mode (accessible to database administrators only)
In addition, you can force the instance to start, or start the instance and have
complete media recovery begin immediately. If your operating system supports the
Oracle Parallel Server, you may start an instance and mount the database in either
exclusive or shared mode.
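The startup variations listed above correspond to options of the STARTUP command in SQL*Plus. For example:

```sql
STARTUP NOMOUNT;         -- start the instance without mounting a database
STARTUP MOUNT;           -- start the instance and mount the database, but leave it closed
STARTUP;                 -- start the instance, then mount and open the database
STARTUP RESTRICT;        -- open the database in restricted mode
STARTUP FORCE;           -- force the instance to start, aborting a running instance first
STARTUP OPEN RECOVER;    -- start and begin complete media recovery immediately
```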
See Also: For more information about starting a database in an OPS environment,
see Oracle8i Parallel Server Concepts and Administration.
For more information on SQL*Plus command syntax, see SQL*Plus User’s Guide and
Reference.
For more information about Recovery Manager commands, see the Oracle8i Backup
and Recovery Guide.
If you do not specify the PFILE option, Oracle uses the standard parameter file.
If you do not specify a database name, Oracle uses the value of DB_NAME in
the parameter file used to start the instance.
See Also: The use of filenames is specific to your operating system. See your
operating system-specific Oracle documentation.
For information about the DB_NAME parameter, see Oracle8i Reference.
Start an instance (and, optionally, mount and open the database) in restricted mode
by using the STARTUP command with the RESTRICT option:
STARTUP RESTRICT;
Later, use the ALTER SYSTEM statement to disable the RESTRICTED SESSION
feature.
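For example:

```sql
ALTER SYSTEM DISABLE RESTRICTED SESSION;
```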
See Also: For more information on the ALTER SYSTEM statement, see the Oracle8i
SQL Reference.
See Also: For a list of operations that require the database to be mounted and
closed (and procedures to start an instance and mount a database in one step), see
"Starting an Instance and Mounting a Database" on page 3-4.
After executing this statement, any valid Oracle user with the CREATE SESSION
system privilege can connect to the database.
Note: You cannot use the RESETLOGS clause with a READ ONLY
clause.
See Also: For more information about the ALTER DATABASE statement, see the
Oracle8i SQL Reference.
For more conceptual details about opening a database in read-only mode, see
Oracle8i Concepts.
[Figure: a banking transaction timeline (check account balances, insert new funds,
remove funds from the old account, commit, log out) annotated with the points at
which the database can go down relative to the transaction.]
■ The next startup of the database will not require any instance recovery
procedures.
To shut down a database in normal situations, use the SHUTDOWN command with
the NORMAL option:
SHUTDOWN NORMAL;
After you submit a SHUTDOWN TRANSACTIONAL statement, no client can start
a new transaction on this instance. Clients that attempt to start a new transaction
are disconnected. After all transactions have completed, any client still connected
to the instance is disconnected. At this point, the instance shuts down just as it
would for a SHUTDOWN IMMEDIATE statement.
A transactional shutdown prevents clients from losing work, and at the same time,
does not require all users to log off.
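The shutdown modes discussed here are issued with the SHUTDOWN command. For example:

```sql
SHUTDOWN TRANSACTIONAL;    -- wait for active transactions to complete, then shut down
SHUTDOWN IMMEDIATE;        -- roll back active transactions and shut down without waiting
```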
See Also: For more information about the ALTER SYSTEM SUSPEND/RESUME
and ALTER TABLESPACE statements, see the Oracle8i SQL Reference.
Note: If you are using Oracle Enterprise Manager, see the Oracle
Enterprise Manager Administrator’s Guide information about using
stored configurations as an alternative to the initialization
parameter file.
You can edit parameter values in a parameter file with a basic text editor; however,
editing methods are operating system-specific. For detailed information about
initialization parameters, see the Oracle8i Reference.
Oracle treats string literals defined for National Language Support (NLS)
parameters in the file as if they are in the database character set.
See Also: For more information about the initialization parameter file, see your
operating system-specific Oracle documentation.
This chapter describes how to manage the processes of an Oracle instance, and
includes the following topics:
■ Setting Up Server Processes
■ Configuring Oracle for Multi-Threaded Server Architecture
■ Modifying Server Processes
■ Tracking Oracle Processes
■ Managing Processes for the Parallel Query Option
■ Managing Processes for External Procedures
■ Terminating Sessions
[Figure: dedicated server configuration. On each client workstation, a user process
runs the application code; on the database server, a dedicated server process runs
the Oracle server code and communicates with the user process through the
program interface.]
[Figure: multi-threaded server configuration. User processes on client workstations
connect to dispatcher processes on the database server; dispatchers place requests
on a common request queue, shared server processes running Oracle server code
service the requests, and results are returned through per-dispatcher response
queues.]
See Also: For more information about starting and managing the network listener
process, see Oracle8i Distributed Database Systems and the Oracle Net8 Administrator’s
Guide.
For example, assume that your system typically has 900 users concurrently
connected via TCP/IP and 600 users connected via SPX, and supports 255
connections per process. In this case, the MTS_DISPATCHERS parameter should be
set as follows:
MTS_DISPATCHERS = "(PROTOCOL=TCP) (DISPATCHERS=4)"
MTS_DISPATCHERS = "(PROTOCOL=SPX) (DISPATCHERS=3)"
Examples
Example 1 To force the IP address used for the dispatchers, enter the following:
MTS_DISPATCHERS="(ADDRESS=(PARTIAL=TRUE)(PROTOCOL=TCP)\
(HOST=144.25.16.201))(DISPATCHERS=2)"
This will start two dispatchers that listen on HOST=144.25.16.201, which must be
the address of a network interface accessible to the dispatchers.
Example 2 To force the exact location of dispatchers, add the PORT as follows:
MTS_DISPATCHERS="(ADDRESS=(PARTIAL=TRUE)(PROTOCOL=TCP)\
(HOST=144.25.16.201)(PORT=5000))(DISPATCHERS=1)"
MTS_DISPATCHERS="(ADDRESS=(PARTIAL=TRUE)(PROTOCOL=TCP)\
(HOST=144.25.16.201)(PORT=5001))(DISPATCHERS=1)"
The following statement sets the number of shared server processes to two:
ALTER SYSTEM SET MTS_SERVERS = 2;
See Also: For more information about tuning the multi-threaded server, see Oracle8i
Tuning.
Monitoring Locks
Table 4–2 describes two methods of monitoring locking information for ongoing
transactions within an instance:
■ ORA_TEST_DBWR
■ ORA_TEST_LGWR
■ ORA_TEST_SMON
■ ORA_TEST_PMON
■ ORA_TEST_RECO
■ ORA_TEST_LCK0
■ ORA_TEST_ARCH
■ ORA_TEST_D000
■ ORA_TEST_S000
■ ORA_TEST_S001
See Also: For more information about views and dynamic performance tables see
the Oracle8i Reference.
For more information about the instance identifier and the format of the Oracle
process names, see your operating system-specific Oracle documentation.
■ the values of all initialization parameters at the time the database and instance
start
Oracle uses the ALERT file to keep a log of these special operations as an alternative
to displaying such information on an operator’s console (although many systems
display information on the console). If an operation is successful, a "completed"
message is written in the ALERT file, along with a timestamp.
For the multi-threaded server, each session using a dispatcher is routed to a shared
server process, and trace information is written to the server’s trace file only if the
session has enabled tracing (or if an error is encountered). Therefore, to trace a
specific session that connects using a dispatcher, you might have to explore several
shared servers' trace files. Because the SQL trace facility for server processes
can cause significant system overhead, enable this feature only when collecting
statistics.
See Also: For information about the names of trace files, see your operating system-
specific Oracle documentation.
For complete information about the ALTER SESSION command, see the Oracle8i
SQL Reference.
See Also: For more information about the parallel query option, see Oracle8i Tuning.
In this example, and all callouts for external procedures, the entry name
extproc_connection_data cannot be changed; it must be entered exactly as it
appears here. The key you specify—in this case extproc_key—must match the
KEY you specify in the listener.ora file. Additionally, the SID name you specify—in
this case extproc_agent—must match the SID_NAME entry in the listener.ora file.
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL=ipc)
(KEY=extproc_key)
)
)
...
SID_LIST_EXTERNAL_PROCEDURE_LISTENER =
(SID_LIST =
(SID_DESC = (SID_NAME=extproc_agent)
(ORACLE_HOME=/oracle)
(PROGRAM=extproc)
)
)
In this example, the PROGRAM must be extproc, and cannot be changed; it must be
entered exactly as it appears in this example. The SID_NAME must match the SID
name in the tnsnames.ora file. The ORACLE_HOME must be set to the directory
where your Oracle software is installed. The extproc executable must reside in
$ORACLE_HOME/bin.
See Also: For more information about external procedures, see the PL/SQL User’s
Guide and Reference.
Terminating Sessions
In some situations, you might want to terminate current user sessions. For example,
you might want to perform an administrative operation and need to terminate all
non-administrative sessions.
This section describes the various aspects of terminating sessions, and includes the
following topics:
If, after receiving the ORA-00028 message, a user submits additional statements
before reconnecting to the database, Oracle returns the following message:
ORA-01012: not logged on
[Example query output, showing one of two selected rows: session ID 12, serial
number 63, status INACTIVE, server type DEDICATED.]
This chapter explains how to create and maintain the control files for your database,
and includes the following topics:
■ Guidelines for Control Files
■ Creating Control Files
■ Troubleshooting After Creating Control Files
■ Dropping Control Files
The only disadvantage of having multiple control files is that all operations that
update the control files (such as adding a datafile or checkpointing the database)
can take slightly longer. However, this difference is usually insignificant (especially
for operating systems that can perform multiple, concurrent writes) and does not
justify using only a single control file.
The control file of an Oracle database is created at the same time as the database. By
default, at least one copy of the control file must be created during database
creation. On some operating systems, Oracle creates multiple copies. You should
create two or more copies of the control file during database creation. You might
also need to create control files later, if you lose control files or want to change
particular settings in the control files.
This section describes ways to create control files, and includes the following topics:
■ Creating Initial Control Files
■ Creating Additional Control File Copies, and Renaming and Relocating Control
Files
■ New Control Files
■ Creating New Control Files
Creating Additional Control File Copies, and Renaming and Relocating Control Files
You add a new control file by copying an existing file to a new location and adding
the file’s name to the list of control files.
Similarly, you rename an existing control file by copying the file to its new name or
location, and changing the file’s name in the control file list.
In both cases, to guarantee that control files do not change during the procedure,
shut down the instance before copying the control file.
To Multiplex or Move Additional Copies of the Current Control Files
1. Shut down the database.
2. Copy an existing control file to a different location, using operating system
commands.
3. Edit the CONTROL_FILES parameter in the database’s parameter file to add
the new control file’s name, or to change the existing control filename.
4. Restart the database.
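The four steps above can be sketched as follows, assuming hypothetical file locations:

```sql
-- 1. Shut down the database
SHUTDOWN NORMAL;
-- 2. Copy the control file with an operating system command, for example:
--      cp /disk1/oracle/dbs/ctrl1.ora /disk2/oracle/dbs/ctrl2.ora
-- 3. Edit the parameter file so that CONTROL_FILES lists both copies:
--      CONTROL_FILES = (/disk1/oracle/dbs/ctrl1.ora, /disk2/oracle/dbs/ctrl2.ora)
-- 4. Restart the database
STARTUP;
```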
NORESETLOGS
DATAFILE ’datafile1’ SIZE 3M, ’datafile2’ SIZE 5M
MAXLOGFILES 50
MAXLOGMEMBERS 3
MAXDATAFILES 200
MAXINSTANCES 6
ARCHIVELOG;
See Also: For more information about the CREATE CONTROLFILE statement, see
the Oracle8i SQL Reference.
5. Create a new control file for the database using the CREATE CONTROLFILE
statement.
When creating the new control file, select the RESETLOGS option if you have
lost any online redo log groups in addition to the control files. In this case, you
will need to recover from the loss of the redo logs (Step 8). You must also
specify the RESETLOGS option if you have renamed the database. Otherwise,
select the NORESETLOGS option.
6. Store a backup of the new control file on an offline storage device.
7. Edit the parameter files of the database.
Edit the parameter files of the database to indicate all of the control files created
in Step 5 and Step 6 (not including the backup control file) in the
CONTROL_FILES parameter.
8. Recover the database if necessary.
If you are creating the control file as part of recovery, recover the database. If the
new control file was created using the NORESETLOGS option (Step 5), you can
recover the database with complete, closed database recovery.
If the new control file was created using the RESETLOGS option, you must
specify USING BACKUP CONTROL FILE. If you have lost online or archived
redo logs or datafiles, use the procedures for recovering those files.
9. Open the database.
Open the database using one of the following methods:
■ If you did not perform recovery, open the database normally.
■ If you performed complete, closed database recovery in Step 8, start up the
database.
■ If you specified RESETLOGS when creating the control file, use the ALTER
DATABASE statement, indicating RESETLOGS.
The database is now open and available for use.
See Also: See the Oracle8i Backup and Recovery Guide for more information about:
■ listing database files
■ backing up all datafiles and online redo log files of the database
■ recovering online or archived redo log files
■ closed database recovery
Oracle includes an explanatory message in the ALERT file to let you know what it
found.
This chapter explains how to manage the online redo log and includes the following
topics:
■ What Is the Online Redo Log?
■ Planning the Online Redo Log
■ Creating Online Redo Log Groups and Members
■ Renaming and Relocating Online Redo Log Members
■ Dropping Online Redo Log Groups and Members
■ Forcing Log Switches
■ Verifying Blocks in Redo Log Files
■ Clearing an Online Redo Log File
■ Listing Information about the Online Redo Log
See Also: For more information about managing the online redo logs of the
instances when using Oracle Parallel Server, see Oracle8i Parallel Server Concepts and
Administration.
To learn how checkpoints and the redo log impact instance recovery, see Oracle8i
Tuning.
Note: Oracle does not recommend backing up the online redo log.
Redo Threads
Each database instance has its own online redo log groups. These online redo log
groups, multiplexed or not, are called an instance’s thread of online redo. In typical
configurations, only one database instance accesses an Oracle database, so only one
thread is present. When running the Oracle Parallel Server, however, two or more
instances concurrently access a single database; each instance has its own thread.
This chapter describes how to configure and manage the online redo log when the
Oracle Parallel Server is not used. Hence, the thread number can be assumed to be 1
in all discussions and examples of commands.
See Also: For complete information about configuring the online redo log with the
Oracle Parallel Server, see Oracle8i Parallel Server Concepts and Administration.
file, and a system change number (SCN) is assigned to identify the redo records for
each committed transaction. Only once all redo records associated with a given
transaction are safely on disk in the online logs is the user process notified that the
transaction has been committed.
Redo records can also be written to an online redo log file before the corresponding
transaction is committed. If the redo log buffer fills, or another transaction commits,
LGWR flushes all of the redo log entries in the redo log buffer to an online redo log
file, even though some redo records may not be committed. If necessary, Oracle can
roll back these changes.
LGWR
Note: Oracle recommends that you multiplex your redo log files;
the loss of the log file data can be catastrophic if recovery is
required.
Figure 6–2 Multiplexed Online Redo Log Files
[Figure: LGWR writes to Group 1 (members A_LOG1 on disk A and B_LOG1 on
disk B, log sequences 1, 3, 5, ...) and then to Group 2 (members A_LOG2 and
B_LOG2, log sequences 2, 4, 6, ...).]
The corresponding online redo log files are called groups. Each online redo log file in
a group is called a member. In Figure 6–2, A_LOG1 and B_LOG1 are both members
of Group 1; A_LOG2 and B_LOG2 are both members of Group 2, and so forth. Each
member in a group must be exactly the same size.
Notice that each member of a group is concurrently active, that is, concurrently
written to by LGWR, as indicated by the identical log sequence numbers assigned
by LGWR. In Figure 6–2, LGWR first writes to A_LOG1 in conjunction with
B_LOG1, then to A_LOG2 in conjunction with B_LOG2, and so on. LGWR never
writes concurrently to members of different groups (for example, to A_LOG1 and
B_LOG2).
If LGWR can successfully write to at least one member in a group, then writing
proceeds as normal; LGWR simply writes to the available members of the group
and ignores the unavailable members.
If LGWR cannot access the next group at a log switch because the group needs to
be archived, then database operation temporarily halts until the group becomes
available, or until the group is archived.
If all members of the next group are inaccessible to LGWR at a log switch because
of media failure, then Oracle returns an error and the database instance shuts
down. In this case, you may need to perform media recovery on the database from
the loss of an online redo log file. If the database checkpoint has moved beyond the
lost redo log (which is not the current log in this example), media recovery is not
necessary, since Oracle has saved the data recorded in the redo log to the datafiles.
Simply drop the inaccessible redo log group. If Oracle did not archive the bad log,
use ALTER DATABASE CLEAR UNARCHIVED LOG to disable archiving before
the log can be dropped.
If all members of a group suddenly become inaccessible to LGWR while it is
writing to them, then Oracle returns an error and the database instance
immediately shuts down. In this case, you may need to perform media recovery. If
the media containing the log is not actually lost (for example, if the drive for the
log was inadvertently turned off), media recovery may not be needed. In this case,
you only need to turn the drive back on and let Oracle perform instance recovery.
Figure 6–3 Legal and Illegal Multiplexed Online Redo Log Configuration
[Figure: in the legal configuration, the two members of each group (A_LOG1 and
B_LOG1, A_LOG2 and B_LOG2, A_LOG3 and B_LOG3) reside on separate disks,
disk A and disk B. In the illegal configuration, multiple members of the same
group share a disk.]
the fewest groups possible without hampering LGWR’s writing redo log
information.
In some cases, a database instance may require only two groups. In other situations,
a database instance may require additional groups to guarantee that a recycled
group is always available to LGWR. During testing, the easiest way to determine if
the current online redo log configuration is satisfactory is to examine the contents of
the LGWR trace file and the database’s alert log. If messages indicate that LGWR
frequently has to wait for a group because a checkpoint has not completed or a
group has not been archived, add groups.
Consider the parameters that can limit the number of online redo log files before
setting up or altering the configuration of an instance’s online redo log. The
following parameters limit the number of online redo log files that you can add to a
database:
■ The MAXLOGFILES parameter used in the CREATE DATABASE statement
determines the maximum number of groups of online redo log files per
database; group values can range from 1 to MAXLOGFILES. The only way to
override this upper limit is to re-create the database or its control file; thus, it is
important to consider this limit before creating a database. If MAXLOGFILES is not
specified for the CREATE DATABASE statement, Oracle uses an operating system
default value.
■ The LOG_FILES initialization parameter (in the parameter file) can temporarily
decrease the maximum number of groups of online redo log files for the
duration of the current instance. Nevertheless, LOG_FILES cannot override
MAXLOGFILES to increase the limit. If LOG_FILES is not set in the database’s
parameter file, Oracle uses an operating system-specific default value.
■ The MAXLOGMEMBERS parameter used in the CREATE DATABASE
statement determines the maximum number of members per group. As with
MAXLOGFILES, the only way to override this upper limit is to re-create the
database or control file; thus, it is important to consider this limit before creating a
database. If no MAXLOGMEMBERS parameter is specified for the CREATE
DATABASE statement, Oracle uses an operating system default value.
See Also: For the default and legal values of the MAXLOGFILES and
MAXLOGMEMBERS parameters, and the LOG_FILES initialization parameter, see
your operating system-specific Oracle documentation.
Using the ALTER DATABASE statement with the ADD LOGFILE option, you can
specify the number that identifies the group with the GROUP option:
ALTER DATABASE ADD LOGFILE GROUP 10 (’/oracle/dbs/log1c.rdo’, ’/oracle/dbs/log2c.rdo’)
SIZE 500K;
Using group numbers can make administering redo log groups easier. However, the
group number must be between 1 and MAXLOGFILES; do not skip redo log file
group numbers (that is, do not number your groups 10, 20, 30, and so on), or you
will consume space in the control files of the database.
members of the group were dropped (for example, because of a disk failure). In this
case, you can add new members to an existing group.
To create new online redo log members for an existing group, use the SQL
statement ALTER DATABASE with the ADD LOG MEMBER parameter.
The following statement adds a new redo log member to redo log group number 2:
ALTER DATABASE ADD LOGFILE MEMBER ’/oracle/dbs/log2b.rdo’ TO GROUP 2;
Notice that filenames must be specified, but sizes need not be; the size of the new
members is determined from the size of the existing members of the group.
When using the ALTER DATABASE command, you can alternatively identify the
target group by specifying all of the other members of the group in the TO
parameter, as shown in the following example:
ALTER DATABASE ADD LOGFILE MEMBER ’/oracle/dbs/log2c.rdo’ TO
(’/oracle/dbs/log2a.rdo’, ’/oracle/dbs/log2b.rdo’);
■ The online redo log files located on diska must be relocated to diskc. The
new filenames will reflect the new location: /diskc/logs/log1c.rdo and
/diskc/logs/log2c.rdo.
The files /diska/logs/log1a.rdo and /diska/logs/log2a.rdo on diska
must be copied to the new files /diskc/logs/log1c.rdo and
/diskc/logs/log2c.rdo on diskc.
ALTER DATABASE RENAME FILE ’/diska/logs/log1a.rdo’, ’/diska/logs/log2a.rdo’
TO ’/diskc/logs/log1c.rdo’, ’/diskc/logs/log2c.rdo’;
When an online redo log group is dropped from the database, the operating system
files are not deleted from disk. Rather, the control files of the associated database are
updated to drop the members of the group from the database structure. After
dropping an online redo log group, make sure that the drop completed successfully,
and then use the appropriate operating system command to delete the dropped
online redo log files.
When an online redo log member is dropped from the database, the operating
system file is not deleted from disk. Rather, the control files of the associated
database are updated to drop the member from the database structure. After
dropping an online redo log file, make sure that the drop completed successfully,
and then use the appropriate operating system command to delete the dropped
online redo log file.
See Also: For information on dropping a member of an active group, see "Forcing
Log Switches" on page 6-16.
For more information about SQL*Plus command syntax, see the SQL*Plus User’s
Guide and Reference.
See Also: For information on forcing log switches with the Oracle Parallel Server,
see Oracle8i Parallel Server Concepts and Administration.
If you enable redo log block checking, Oracle computes a checksum for each redo log
block written to the current log. Oracle writes the checksums in the header of the
block.
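In Oracle8i, redo log block checking is enabled through the LOG_BLOCK_CHECKSUM initialization parameter (see the Oracle8i Reference for details):

```
LOG_BLOCK_CHECKSUM = TRUE
```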
Oracle uses the checksum to detect corruption in a redo log block. Oracle tries to
verify the redo log block when it writes the block to an archive log file and when the
block is read from an archived log during recovery.
If Oracle detects a corruption in a redo log block while trying to archive it, the
system tries to read the block from another member in the group. If the block is
corrupted in all members of the redo log group, then archiving cannot proceed.
Restrictions
You can clear a redo log file whether it is archived or not. When it is not archived,
however, you must include the keyword UNARCHIVED in your ALTER
DATABASE CLEAR LOGFILE statement.
If you clear a log file that is needed for recovery of a backup, then you can no longer
recover from that backup. Oracle writes a message in the alert log describing the
backups from which you cannot recover.
Note: If you clear an unarchived redo log file, you should make
another backup of the database.
If you want to clear an unarchived redo log that is needed to bring an offline
tablespace online, use the clause UNRECOVERABLE DATAFILE in the ALTER
DATABASE CLEAR LOGFILE statement.
If you clear a redo log needed to bring an offline tablespace online, you will not be
able to bring the tablespace online again. You will have to drop the tablespace or
perform an incomplete recovery. Note that tablespaces taken offline with the
NORMAL option do not require recovery.
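The clearing operations described above might look like the following (the group number is hypothetical):

```sql
-- Clear a redo log group whose contents have been archived
ALTER DATABASE CLEAR LOGFILE GROUP 3;

-- Clear a group whose log has not yet been archived
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;

-- Clear an unarchived log that is needed to bring an offline tablespace online
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3 UNRECOVERABLE DATAFILE;
```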
See Also: For a complete description of the ALTER DATABASE statement, see the
Oracle8i SQL Reference.
To see the names of all of the members of a group, use a query similar to the
following:
SELECT * FROM sys.v$logfile
WHERE group# = 2;
This chapter describes how to archive redo data. It includes the following topics:
■ What Is the Archived Redo Log?
■ Choosing Between NOARCHIVELOG and ARCHIVELOG Mode
■ Turning Archiving On and Off
■ Specifying the Archive Destination
■ Specifying the Mode of Log Transmission
■ Managing Archive Destination Failure
■ Tuning Archive Performance
■ Displaying Archived Redo Log Information
■ Using LogMiner to Analyze Online and Archived Redo Logs
See Also: If you are using Oracle with the Parallel Server, see Oracle8i Parallel Server
Concepts and Administration for additional information about archiving in the OPS
environment.
[Figure: LGWR and two archive destinations (Destination 1 and Destination 2)]
log files cannot be used by LGWR until the group is archived. A filled group is
immediately available for archiving after a redo log switch occurs.
The archiving of filled groups has these advantages:
■ A database backup, together with online and archived redo log files, guarantees
that you can recover all committed transactions in the event of an operating
system or disk failure.
■ You can use a backup taken while the database is open and in normal system
use if you keep an archived log.
■ You can keep a standby database current with its original database by
continually applying the original’s archived redo logs to the standby.
Decide how you plan to archive filled groups of the online redo log. You can
configure an instance to archive filled online redo log files automatically, or you can
archive manually. For convenience and efficiency, automatic archiving is usually
best. Figure 7–2 illustrates how the process archiving the filled groups (ARCn in this
illustration) generates the database’s archived redo log.
[Figure 7–2: over time, LGWR fills online redo log files 0001 through 0004 while
ARCn copies each filled group to archived redo log files 0001, 0002, and 0003]
Before switching the database’s archiving mode, perform the following operations:
1. Shut down the database instance.
An open database must be closed and dismounted and any associated instances
shut down before you can switch the database’s archiving mode. You cannot
disable archiving if any datafiles need media recovery.
2. Back up the database.
Before making any major change to a database, always back up the database to
protect against any problems.
3. Start a new instance and mount but do not open the database.
To enable or disable archiving, the database must be mounted but not open.
Note: If you are using the Oracle Parallel Server, you must mount
the database exclusively, using one instance, to switch the
database’s archiving mode.
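The steps above can be sketched as the following SQL*Plus session; this is a sketch of switching to ARCHIVELOG mode, and the backup step depends on your own backup method:

```sql
SHUTDOWN IMMEDIATE
-- back up the database with your usual method before continuing
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```

To disable archiving, the same sequence applies with ALTER DATABASE NOARCHIVELOG in place of ALTER DATABASE ARCHIVELOG.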
See Also: Always specify an archived redo log destination and filename format
when enabling automatic archiving; see "Specifying Archive Destinations" on
page 7-11. If automatic archiving is enabled, you can still perform manual archiving;
see "Performing Manual Archiving" on page 7-10.
The new value takes effect the next time you start the database.
If you use the ALTER SYSTEM method, you do not need to shut down the instance
to enable automatic archiving. If an instance is shut down and restarted after
automatic archiving is enabled, however, the instance is reinitialized using the
settings of the parameter file, which may or may not enable automatic archiving.
The new value takes effect the next time the database is started.
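The two methods discussed here can be sketched as follows:

```sql
-- Method 1: in the parameter file; takes effect at the next instance startup
-- LOG_ARCHIVE_START = TRUE

-- Method 2: dynamically, without shutting down the instance
ALTER SYSTEM ARCHIVE LOG START;
```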
If ARCn is archiving a redo log group when you attempt to disable automatic
archiving, ARCn finishes archiving the current group, but does not begin archiving
the next filled online redo log group.
The instance does not have to be shut down to disable automatic archiving. If an
instance is shut down and restarted after automatic archiving is disabled, however,
the instance is reinitialized using the settings of the parameter file, which may or
may not enable automatic archiving.
See Also: With either manual or automatic archiving, you need to specify a thread
only when you are using the Oracle Parallel Server. See Oracle8i Parallel Server
Concepts and Administration for more information.
If you use the LOCATION keyword, specify a valid pathname for your operating
system. If you specify SERVICE, Oracle translates the net service name through the
tnsnames.ora file to a connect descriptor. The descriptor contains the information
necessary for connecting to the remote database. Note that the service name must
have an associated database SID, so that Oracle correctly updates the log history of
the control file for the standby database.
The second method, which allows you to specify a maximum of two locations, is to
use the LOG_ARCHIVE_DEST parameter to specify a primary archive destination
and the LOG_ARCHIVE_DUPLEX_DEST parameter to specify an optional secondary
location. Whenever Oracle archives a redo log, it archives it to every destination
specified by either set of parameters.
If you are archiving to a standby database, use the SERVICE keyword to specify
a valid net service name from the tnsnames.ora file. For example, enter:
LOG_ARCHIVE_DEST_4 = ’SERVICE = standby1’
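The local-destination settings referenced by the example below do not appear in this excerpt; a configuration that would produce the listed filenames might be (all pathnames and the format string are assumptions):

```
LOG_ARCHIVE_DEST_1 = 'LOCATION = /disk1/archive'
LOG_ARCHIVE_DEST_2 = 'LOCATION = /disk2/archive'
LOG_ARCHIVE_DEST_3 = 'LOCATION = /disk3/archive'
LOG_ARCHIVE_FORMAT = arch%s.arc
```

Here %s in LOG_ARCHIVE_FORMAT stands for the log sequence number.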
For example, the above settings will generate archived logs as follows for log
sequence numbers 100, 101, and 102:
/disk1/archive/arch100.arc, /disk1/archive/arch101.arc, /disk1/archive/arch102.arc
/disk2/archive/arch100.arc, /disk2/archive/arch101.arc, /disk2/archive/arch102.arc
/disk3/archive/arch100.arc, /disk3/archive/arch101.arc, /disk3/archive/arch102.arc
To Set the Destination for Archived Redo Logs Using LOG_ARCHIVE_DEST and
LOG_ARCHIVE_DUPLEX_DEST:
1. Use SQL*Plus to shut down the database.
SHUTDOWN IMMEDIATE;
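The settings referenced by the example below are not shown in this excerpt; a configuration producing the listed thread-qualified names might be (pathnames and format are assumptions, with %t the thread number and %s the log sequence number):

```
LOG_ARCHIVE_DEST = '/disk1/archive'
LOG_ARCHIVE_DUPLEX_DEST = '/disk2/archive'
LOG_ARCHIVE_FORMAT = arch_%t_%s.arc
```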
For example, the above settings will generate archived logs as follows for log
sequence numbers 100 and 101 in thread 1:
/disk1/archive/arch_1_100.arc, /disk1/archive/arch_1_101.arc
/disk2/archive/arch_1_100.arc, /disk2/archive/arch_1_101.arc
See Also: For more information about archiving to standby databases, see Oracle8i
Backup and Recovery Guide.
See Also: For detailed information about V$ARCHIVE_DEST as well as the archive
destination parameters, see the Oracle8i Reference.
If you are operating your standby database in managed recovery mode, you can keep
your standby database in sync with your source database by automatically
applying transmitted archive logs.
To transmit files successfully to a standby database, either ARCn or a server process
must do the following:
■ Recognize a remote location.
■ Transmit the archived logs by means of a remote file server (RFS) process.
Each ARCn process creates a corresponding RFS for each standby destination. For
example, if three ARCn processes are archiving to two standby databases, then
Oracle establishes six RFS connections.
You can transmit archived logs through a network to a remote location by using
Net8. Indicate a remote archival by specifying a Net8 service name as an attribute of
the destination. Oracle then translates the service name, which you set by means of
Sample Scenarios
You can see the relationship between the LOG_ARCHIVE_DEST_n and
LOG_ARCHIVE_MIN_SUCCEED_DEST parameters most easily through sample
scenarios. In example 1, you archive to three local destinations, each of which you
declare as OPTIONAL. Table 7–2 illustrates the possible values for
LOG_ARCHIVE_MIN_SUCCEED_DEST=n in our example.
This example shows that even though you do not explicitly set any of your
destinations to MANDATORY using the LOG_ARCHIVE_DEST_n parameter,
Oracle must successfully archive to these locations when
LOG_ARCHIVE_MIN_SUCCEED_DEST is set to 1, 2, or 3.
In example 2, consider a case in which:
■ You specify two MANDATORY destinations.
■ You specify two OPTIONAL destinations.
■ No destination is a standby database.
This example shows that Oracle must archive to the destinations you specify as
MANDATORY, regardless of whether you set
LOG_ARCHIVE_MIN_SUCCEED_DEST to archive to a smaller number.
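Example 2 might correspond to settings such as these (pathnames are hypothetical):

```
LOG_ARCHIVE_DEST_1 = 'LOCATION = /disk1/archive MANDATORY'
LOG_ARCHIVE_DEST_2 = 'LOCATION = /disk2/archive MANDATORY'
LOG_ARCHIVE_DEST_3 = 'LOCATION = /disk3/archive OPTIONAL'
LOG_ARCHIVE_DEST_4 = 'LOCATION = /disk4/archive OPTIONAL'
LOG_ARCHIVE_MIN_SUCCEED_DEST = 2
```

With these settings, archiving to the two MANDATORY destinations must succeed even though LOG_ARCHIVE_MIN_SUCCEED_DEST is 2.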
See Also: For additional information about
LOG_ARCHIVE_MIN_SUCCEED_DEST=n or any other parameters that relate to
archiving, see the Oracle8i Reference.
intended to allow you to specify the initial number of ARCn processes or to increase
or decrease the current number.
Creating multiple processes is especially useful when you:
■ Use more than two online redo logs.
■ Archive to more than one destination.
Multiple ARCn processing prevents the bottleneck that occurs when LGWR
switches through the multiple online redo logs faster than a single ARCn process
can write inactive logs to multiple destinations. Note that each ARCn process works
on only one inactive log at a time, but must archive to each specified destination.
For example, if you maintain five online redo log files, then you may decide to start
the instance using three ARCn processes. As LGWR actively writes to one of the log
files, the ARCn processes can simultaneously archive up to three of the inactive log
files to various destinations. As the figure illustrates, each instance of ARCn assumes
responsibility for a single log file and archives it to all of the defined destinations.
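The number of ARCn processes can be set in the parameter file or, as described above, changed while the instance is running; for example:

```sql
-- In the parameter file, before instance startup:
-- LOG_ARCHIVE_MAX_PROCESSES = 3

-- Or dynamically, for the running instance:
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 3;
```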
[Figure: LGWR writes the current online redo log while each ARCn process archives
one inactive log file to every defined destination (Destination 1 and Destination 2)]
Note: If archiving proceeds very slowly, and you find that
Oracle frequently has to wait for redo log files to be archived before
they can be reused, you can create additional redo log file groups.
Adding groups can ensure that a group is always available for
Oracle to use.
For example, the following query displays which online redo log group requires
archiving:
SELECT group#, archived
FROM sys.v$log;
GROUP# ARC
---------- ---
1 YES
2 NO
To see the current archiving mode of the database, query the V$DATABASE view:
SELECT log_mode FROM sys.v$database;

LOG_MODE
------------
NOARCHIVELOG
The SQL*Plus command ARCHIVE LOG LIST also shows archiving information for the
connected instance:
ARCHIVE LOG LIST;
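The display itself is not reproduced in this excerpt; based on the items described below, it resembles the following (the destination and values are illustrative):

```
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            destination
Oldest online log sequence     30
Next log sequence to archive   32
Current log sequence           33
```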
This display tells you all the necessary information regarding the archived redo log
settings for the current instance:
■ The database is currently operating in ARCHIVELOG mode.
■ Automatic archiving is enabled.
■ The destination of the archived redo log (operating system specific).
■ The oldest filled online redo log group has a sequence number of 30.
■ The next filled online redo log group to archive has a sequence number of 32.
■ The current online redo log file has a sequence number of 33.
You must archive all redo log groups with a sequence number equal to or greater
than the Next log sequence to archive, yet less than the Current log sequence number. For
example, the display above indicates that the online redo log group with sequence
number 32 needs to be archived.
See Also: For more information on the data dictionary views, see the Oracle8i
Reference.
Restrictions
LogMiner has the following usage and compatibility requirements. LogMiner only:
■ Runs in Oracle version 8.1 or later.
■ Analyzes redo log files from any version 8.0 or later database that uses the same
database character set and runs on the same hardware platform as the
analyzing instance.
■ Analyzes the contents of the redo log files completely with the aid of a
dictionary created by a PL/SQL package. The dictionary allows LogMiner to
translate internal object identifiers and data types to object name and external
data formats.
■ Obtains information about DML operations on conventional tables. It does not
support operations on:
■ Index-organized tables
■ Clustered tables/indexes
■ Non-scalar data types
■ Chained rows
2. Use SQL*Plus to mount and then open the database whose files you want to
analyze. For example, enter:
STARTUP
3. Execute the copied dbmslogmnrd.sql script on the 8.0 database to create the
DBMS_LOGMNR_D package. For example, enter:
@dbmslogmnrd.sql
4. Specify a directory for use by the PL/SQL package by setting the init.ora
parameter UTL_FILE_DIR. If you do not reference this parameter, the
procedure will fail. For example, set the following to use /8.0/oracle/logs:
UTL_FILE_DIR = /8.0/oracle/logs
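The step that actually creates the dictionary file is not shown in this excerpt; using the DBMS_LOGMNR_D package it would resemble the following (the dictionary filename is an assumption, and the location matches the UTL_FILE_DIR setting above):

```sql
execute dbms_logmnr_d.build(
   dictionary_filename => 'dictionary.ora',
   dictionary_location => '/8.0/oracle/logs');
```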
See Also: For information about DBMS_LOGMNR_D, see the Oracle8i Supplied
Packages Reference.
2. Create a list of logs by specifying the NEW option when executing the
DBMS_LOGMNR.ADD_LOGFILE procedure. For example, enter the following
to specify /oracle/logs/log1.f:
execute dbms_logmnr.add_logfile(
LogFileName => '/oracle/logs/log1.f',
Options => dbms_logmnr.NEW);
3. If desired, add more logs by specifying the ADDFILE option. For example,
enter the following to add /oracle/logs/log2.f:
execute dbms_logmnr.add_logfile(
LogFileName => '/oracle/logs/log2.f',
Options => dbms_logmnr.ADDFILE);
See Also: For information about DBMS_LOGMNR, see the Oracle8i Supplied
Packages Reference.
Using LogMiner
Once you have created a dictionary file and specified which logs to analyze, you can
start LogMiner and begin your analysis. Use the following options to narrow the
range of your search at start time:
Once you have started LogMiner, you can make use of the following data dictionary
views for analysis:
Optionally, set the StartTime and EndTime parameters to filter data by time.
Note that the procedure expects date values: use the TO_DATE function to
specify date and time, as in this example:
execute dbms_logmnr.start_logmnr(
   DictFileName => '/oracle/dictionary.ora',
   StartTime => to_date('01-Jan-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'),
   EndTime => to_date('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'));
Use the StartScn and EndScn parameters to filter data by SCN, as in this
example:
execute dbms_logmnr.start_logmnr(
DictFileName => '/oracle/dictionary.ora',
StartScn => 100,
EndScn => 150);
2. View the output in the V$LOGMNR_CONTENTS view. LogMiner returns all
rows in SCN order, which is the same order applied in media recovery. For
example, the following query lists information about operations:
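The query referred to above is not included in this excerpt; a representative query against V$LOGMNR_CONTENTS (the column choice is an assumption) might be:

```sql
SELECT operation, username, sql_redo
FROM v$logmnr_contents;
```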
See Also: For information about DBMS_LOGMNR, see the Oracle8i Supplied
Packages Reference.
For more information about the LogMiner data dictionary views, see Oracle8i
Reference.
Analyzing Archived Redo Log Files from Other Databases You can run LogMiner on an
instance of a database while analyzing redo log files from a different database. To
analyze archived redo log files from other databases, LogMiner must:
■ Access a dictionary file that is both created from the same database as the redo
log files and created with the same database character set.
■ Run on the same hardware platform that generated the log files, although it
does not need to be on the same system.
■ Use redo log files that can be applied for recovery from Oracle version 8.0 and
later.
Tracking a User
In this example, you are interested in seeing all changes to the database in a specific
time range by one of your users: JOEDEVO. You perform this operation in the
following steps:
■ Step 1: Creating the Dictionary
■ Step 2: Adding Logs and Limiting the Search Range
■ Step 3: Starting the LogMiner and Analyzing the Data
Step 1: Creating the Dictionary To use the LogMiner to analyze JOEDEVO’s data, you
must create a dictionary file before starting LogMiner.
You decide to do the following:
■ Call the dictionary file orc1dict.ora.
■ Place the dictionary in directory /user/local/dbs.
■ Set the initialization parameter UTL_FILE_DIR to /user/local/dbs.
# Set the initialization parameter UTL_FILE_DIR in the init.ora file
UTL_FILE_DIR = /user/local/dbs
Step 2: Adding Logs and Limiting the Search Range Now that the dictionary is created,
you decide to view the changes that happened at a specific time. You do the
following:
■ Create a list of log files for use and specify the log file log1orc1.ora.
■ Add log log2orc1.ora to the list.
■ Start LogMiner and limit the search to the range between 8:30 a.m. and 8:45
a.m. on January 1, 1998.
# Start SQL*Plus, connect as SYSTEM, then start the instance
connect system/manager
startup nomount
# Supply the list of logfiles to the reader. The Options flag is set to indicate this is a
# new list.
execute dbms_logmnr.add_logfile(
   LogFileName => 'log1orc1.ora',
   Options => dbms_logmnr.NEW);

# Add a file to the existing list. The Options flag indicates that you are
# adding a file to the existing list.
execute dbms_logmnr.add_logfile(
   LogFileName => 'log2orc1.ora',
   Options => dbms_logmnr.ADDFILE);
Step 3: Starting the LogMiner and Analyzing the Data At this point the
V$LOGMNR_CONTENTS table is available for queries. You decide to find all
changes made by user JOEDEVO to the salary table. As you discover, JOEDEVO
requested two operations: he deleted his old salary and then inserted a new, higher
salary. You now have the data necessary to undo this operation (and perhaps to
justify firing JOEDEVO!).
# Start the LogMiner. Limit the search to the specified time range.
execute dbms_logmnr.start_logmnr(
   DictFileName => 'orc1dict.ora',
   StartTime => to_date('01-Jan-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'),
   EndTime => to_date('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'));
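The query that produced the SQL_REDO/SQL_UNDO listing below is not shown in this excerpt; it would resemble the following (the SEG_NAME filter is an assumption about how the salary table is identified):

```sql
SELECT sql_redo, sql_undo
FROM v$logmnr_contents
WHERE username = 'JOEDEVO' AND seg_name = 'SALARY';
```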
SQL_REDO SQL_UNDO
-------- --------
delete * from SALARY insert into SALARY(NAME,EMPNO, SAL)
This chapter describes how to use job queues to schedule periodic execution of
PL/SQL code, and includes the following topics:
■ SNP Background Processes
■ Managing Job Queues
■ Viewing Job Queue Information
When you ENABLE a restricted session, SNP background processes do not execute
jobs; when you DISABLE a restricted session, SNP background processes execute
jobs.
See Also: For more information on SNP background processes, see Oracle8i
Concepts.
■ Altering a Job
■ Broken Jobs
■ Forcing a Job to Execute
■ Terminating a Job
DBMS_JOB Package
To schedule and manage jobs in the job queue, use the procedures in the DBMS_JOB
package. There are no database privileges associated with using job queues. Any
user who can execute the job queue procedures can use the job queue. Table 8–2 lists
the job queue procedures in the DBMS_JOB package.
The SUBMIT procedure returns the number of the job you submitted. Table 8–3
describes the procedure’s parameters.
As an example, let’s submit a new job to the job queue. The job calls the procedure
DBMS_DDL.ANALYZE_OBJECT to generate optimizer statistics for the table
DQUON.ACCOUNTS. The statistics are based on a sample of half the rows of the
ACCOUNTS table. The job is run every 24 hours:
VARIABLE jobno number;
begin
   DBMS_JOB.SUBMIT(:jobno,
      'dbms_ddl.analyze_object(''TABLE'',
      ''DQUON'', ''ACCOUNTS'',
      ''ESTIMATE'', NULL, 50);',
      SYSDATE, 'SYSDATE + 1');
   commit;
end;
/
Statement processed.
print jobno
JOBNO
----------
14144
Job Environment
When you submit a job to the job queue or alter a job’s definition, Oracle records the
following environment characteristics:
■ the current user
■ the user submitting or altering a job
■ the current schema
■ MAC privileges (if appropriate)
Oracle also records the following NLS parameters:
■ NLS_LANGUAGE
■ NLS_TERRITORY
■ NLS_CURRENCY
■ NLS_ISO_CURRENCY
■ NLS_NUMERIC_CHARACTERS
■ NLS_DATE_FORMAT
■ NLS_DATE_LANGUAGE
■ NLS_SORT
Oracle restores these environment characteristics every time a job is executed.
NLS_LANGUAGE and NLS_TERRITORY parameters are defaults for unspecified
NLS parameters.
You can change a job’s environment by using the DBMS_SQL package and the
ALTER SESSION command.
Note: If the job number of a job you want to import matches the
number of a job already existing in the database, you will not be
allowed to import that job. Submit the job as a new job in the
database.
Job Owners
When you submit a job to the job queue, Oracle identifies you as the owner of the
job. Only a job’s owner can alter the job, force the job to run, or remove the job from
the queue.
Job Numbers
A queued job is identified by its job number. When you submit a job, its job number
is automatically generated from the sequence SYS.JOBSEQ.
Once a job is assigned a job number, that number does not change. Even if the job is
exported and imported, its job number remains the same.
Job Definitions
The job definition is the PL/SQL code specified in the WHAT parameter of the SUBMIT
procedure.
Normally the job definition is a single call to a procedure. The procedure call can
have any number of parameters.
Note: In the job definition, use two single quotation marks around
strings. Always include a semicolon at the end of the job definition.
There are special parameter values that Oracle recognizes in a job definition.
Table 8–4 lists these parameters.
Table 8–5 lists some common date expressions used for job execution intervals.
Interpreting Information about JQ Locks You can use the Enterprise Manager Lock
Monitor or the locking views in the data dictionary to examine information about
locks currently held by sessions.
The following query lists the session identifier, lock type, and lock identifiers for all
sessions holding JQ locks:
SELECT sid, type, id1, id2
FROM v$lock
WHERE type = 'JQ';

       SID TY        ID1        ID2
---------- -- ---------- ----------
        12 JQ          0      14144

In this sample output, the identifier for the session holding the lock is 12. The ID1 lock
identifier is always 0 for JQ locks. The ID2 lock identifier is the job number of the
job the session is running.
Job Failure and Execution Times If a job returns an error while Oracle is attempting to
execute it, Oracle tries to execute it again. The first attempt is made after one minute, the
second attempt after two minutes, the third after four minutes, and so on, with the interval
doubling between each attempt. When the retry interval exceeds the execution interval,
Oracle continues to retry the job at the normal execution interval. However, if the job fails
16 times, Oracle automatically marks the job as broken and no longer tries to execute it.
Thus, if you can correct the problem that is preventing a job from running before
the job has failed sixteen times, Oracle will eventually run that job again.
See Also: For more information about the locking views, see the Oracle8i Reference.
For more information about locking, see Oracle8i Concepts.
The following statement removes job number 14144 from the job queue:
DBMS_JOB.REMOVE(14144);
Restrictions
You can remove currently executing jobs from the job queue. However, the job will
not be interrupted, and the current execution will be completed.
You can remove only jobs you own. If you try to remove a job that you do not own,
you receive a message that states the job is not in the job queue.
Altering a Job
To alter a job that has been submitted to the job queue, use the procedures
CHANGE, WHAT, NEXT_DATE, or INTERVAL in the DBMS_JOB package.
In this example, the job identified as 14144 is now executed every three days:
DBMS_JOB.CHANGE(14144, null, null, ’SYSDATE + 3’);
Restrictions
You can alter only jobs that you own. If you try to alter a job that you do not own,
you receive a message that states the job is not in the job queue.
If you specify NULL for WHAT, NEXT_DATE, or INTERVAL when you call the
procedure CHANGE, the current value remains unchanged.
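The single-attribute procedures mentioned above work the same way; for example, using the job number from the earlier example:

```sql
-- Change only the execution interval
DBMS_JOB.INTERVAL(14144, 'SYSDATE + 7');

-- Change only the next execution date
DBMS_JOB.NEXT_DATE(14144, SYSDATE + 1);
```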
Broken Jobs
A job is labeled as either broken or not broken. Oracle does not attempt to run
broken jobs. However, you can force a broken job to run by calling the procedure
DBMS_JOB.RUN.
When you submit a job it is considered not broken.
The following example marks job 14144 as not broken and sets its next execution
date to the following Monday:
DBMS_JOB.BROKEN(14144, FALSE, NEXT_DAY(SYSDATE, ’MONDAY’));
Once a job has been marked as broken, Oracle will not attempt to execute the job
until you either mark the job as not broken, or force the job to be executed by calling
the procedure DBMS_JOB.RUN.
Restrictions
You can mark as broken only jobs that you own. If you try to mark a job you do not
own, you receive a message stating that the job is not in the job queue.
When you run a job using DBMS_JOB.RUN, Oracle recomputes the next execution
date. For example, if you create a job on a Monday with a NEXT_DATE value of
’SYSDATE’ and an INTERVAL value of ’SYSDATE + 7’, the job is run every 7 days
starting on Monday. However, if you execute RUN on Wednesday, the next
execution date will be the next Wednesday.
Note: When you force a job to run, the job is executed in your
current session. Running the job reinitializes your session’s
packages.
Restrictions
You can only run jobs that you own. If you try to run a job that you do not own, you
receive a message that states the job is not in the job queue.
The following statement runs job 14144 in your session and recomputes the next
execution date:
DBMS_JOB.RUN(14144);
The procedure RUN contains an implicit commit. Once you execute a job using
RUN, you cannot roll back.
Terminating a Job
You can terminate a running job by marking the job as broken, identifying the
session running the job, and disconnecting that session. You should mark the job as
broken, so that Oracle does not attempt to run the job again.
After you have identified the session running the job (via V$SESSION), you can
disconnect the session using the SQL statement ALTER SYSTEM.
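The termination sequence described above can be sketched as follows; the session identifier and serial number in the final statement are hypothetical and must come from your own query results:

```sql
-- 1. Mark the job as broken so Oracle does not restart it
EXECUTE DBMS_JOB.BROKEN(14144, TRUE);

-- 2. Find the session running the job
SELECT s.sid, s.serial#
FROM v$session s, dba_jobs_running r
WHERE s.sid = r.sid AND r.job = 14144;

-- 3. Disconnect that session (sid and serial# are hypothetical)
ALTER SYSTEM KILL SESSION '12,130';
```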
See Also: For examples of viewing information about jobs and sessions, see the
following section, "Viewing Job Queue Information".
For example, you can display information about a job’s status and failed executions.
The following sample query creates a listing of the job number, next execution time,
failures, and broken status for each job you have submitted:
SELECT job, next_date, next_sec, failures, broken
FROM user_jobs;
You can also display information about jobs currently running. The following
sample query lists the session identifier, job number, user who submitted the job,
and the start times for all currently running jobs:
SELECT sid, r.job, log_user, r.this_date, r.this_sec
FROM dba_jobs_running r, dba_jobs j
WHERE r.job = j.job;
See Also: For more information on data dictionary views, see the Oracle8i Reference.
This chapter describes the various aspects of tablespace management, and includes
the following topics:
■ Guidelines for Managing Tablespaces
■ Creating Tablespaces
■ Managing Tablespace Allocation
■ Altering Tablespace Availability
■ Making a Tablespace Read-Only
■ Dropping Tablespaces
■ Using the DBMS_SPACE_ADMIN Package
■ Transporting Tablespaces Between Databases
■ Viewing Information About Tablespaces
See Also: For information about estimating the sizes of objects, see Chapters 11
through 17.
Creating Tablespaces
The steps for creating tablespaces vary by operating system. On most operating
systems you indicate the size and fully specified filenames when creating a new
tablespace or altering a tablespace by adding datafiles. In each situation Oracle
automatically allocates and formats the datafiles as specified. However, on some
operating systems, you must create the datafiles before installation.
The first tablespace in any database is always the SYSTEM tablespace. Therefore,
the first datafiles of any database are automatically allocated for the SYSTEM
tablespace during database creation.
You might create a new tablespace for any of the following reasons:
■ You want to allocate more disk storage space for the associated database,
thereby enlarging the database.
■ You need to create a logical storage structure in which to store a specific type of
data separate from other database data.
To increase the total size of the database you can alternatively add a datafile to an
existing tablespace, rather than adding a new tablespace.
To create a new tablespace, use the SQL statement CREATE TABLESPACE. You
must have the CREATE TABLESPACE system privilege to create a tablespace.
As an example, let’s create the tablespace RB_SEGS (to hold rollback segments for
the database), with the following characteristics:
■ The data of the new tablespace is contained in a single datafile, 50M in size.
■ The default storage parameters for any segments created in this tablespace are
explicitly set.
■ After the tablespace is created, it is left offline.
The following statement creates the tablespace RB_SEGS:
CREATE TABLESPACE rb_segs
DATAFILE ’datafilers_1’ SIZE 50M
DEFAULT STORAGE (
INITIAL 50K
NEXT 50K
MINEXTENTS 2
MAXEXTENTS 50
PCTINCREASE 0)
OFFLINE;
If you do not fully specify filenames when creating tablespaces, the corresponding
datafiles are created in the ORACLE_HOME/dbs directory.
See Also: See your operating system-specific Oracle documentation for information
about initially creating a tablespace.
For more information about adding a datafile, see "Creating and Adding Datafiles
to a Tablespace" on page 10-5.
For more information about the CREATE TABLESPACE statement, see the Oracle8i
SQL Reference.
See Also: For detailed syntax on creating locally managed tablespaces, see the
Oracle8i SQL Reference.
See Also: For more information about creating a database with a locally managed
SYSTEM tablespace, see the Oracle8i SQL Reference.
See Also: For more information about the CREATE TABLESPACE and ALTER
TABLESPACE statements, see the Oracle8i SQL Reference.
For more information about V$SORT_SEGMENT, see the Oracle8i Reference.
For more information about Oracle space management, see Oracle8i Concepts.
Temporary Datafiles
Temporary datafiles differ from permanent datafiles in that they do not appear in
the DBA_DATA_FILES view. Instead, they appear in the DBA_TEMP_FILES view,
which is similar to DBA_DATA_FILES view except that it contains information
about temporary datafiles. In SQL, files belonging to temporary tablespaces are also
identified as TEMPFILES, rather than DATAFILES.
See Also: For more information about temporary datafiles and DBA_TEMP_FILES,
see the Oracle8i Reference.
See Also: For more information about creating a locally managed temporary
tablespace, see the Oracle8i SQL Reference.
The following statements take offline and bring online temporary files:
ALTER DATABASE TEMPFILE ’temp_file_1.f’ OFFLINE;
ALTER DATABASE TEMPFILE ’temp_file_1.f’ ONLINE;
See Also: For details and restrictions about statements used to alter locally
managed temporary tablespaces, see the Oracle8i SQL Reference.
New values for the default storage parameters of a tablespace affect only future
extents allocated for the segments within the tablespace.
[Figure: coalescing free space in datafile JFEE1.ORA — adjacent free (F) extents in
each input extent map are merged into larger contiguous free extents in the output,
while used (U) extents are unchanged]
If you find that fragmentation of space is high (contiguous space on your disk
appears as non-contiguous), you can coalesce your free space in a single space
transaction. After every eight coalesces the space transaction commits and other
transactions can allocate or deallocate space. You must have ALTER TABLESPACE
privileges to coalesce tablespaces. You can coalesce all available free space extents in
a tablespace into larger contiguous extents on a per tablespace basis by using the
following command:
ALTER TABLESPACE tablespace COALESCE;
You can also use this command to supplement SMON and extent allocation
coalescing, thereby improving space allocation performance in severely fragmented
tablespaces. Issuing this command does not affect the performance of other users
accessing the same tablespace. Like other options of the ALTER TABLESPACE
statement, the COALESCE option is exclusive; when specified, it should be the only
option.
Take a tablespace offline temporarily only when you cannot take it offline normally;
in this case, only the files taken offline because of errors need to be recovered before
the tablespace can be brought online. Take a tablespace offline immediately only
after trying both the normal and temporary options.
The following example takes the USERS tablespace offline normally:
ALTER TABLESPACE users OFFLINE NORMAL;
See Also: Before taking an online tablespace offline, verify that the tablespace
contains no active rollback segments. For more information see "Taking Rollback
Segments Offline" on page 21-12.
After a tablespace is read-only, you can copy its files to read-only media. You must
then rename the datafiles in the control file to point to the new location by using the
SQL statement ALTER DATABASE RENAME FILE.
Prerequisites
Before you can make a tablespace read-only, the following conditions must be met.
It may be easiest to meet these restrictions by performing this function in restricted
mode, so that only users with the RESTRICTED SESSION system privilege can be
logged on.
■ The tablespace must be online.
This is necessary to ensure that there is no undo information that needs to be
applied to the tablespace.
■ The tablespace must not contain any active rollback segments.
For this reason, the SYSTEM tablespace can never be made read-only, since it
contains the SYSTEM rollback segment. Additionally, because the rollback
segments of a read-only tablespace are not accessible, it is recommended that
you drop the rollback segments before you make a tablespace read-only.
■ The tablespace must not currently be involved in an online backup, since the
end of a backup updates the file headers of all datafiles in the tablespace.
■ The COMPATIBLE initialization parameter must be set to 7.1.0 or greater.
For better performance while accessing data in a read-only tablespace, you might
want to issue a query that accesses all of the blocks of the tables in the tablespace
just before making it read-only. A simple query, such as SELECT COUNT (*),
executed against each table will ensure that the data blocks in the tablespace can be
subsequently accessed most efficiently. This eliminates the need for Oracle to check
the status of the transactions that most recently modified the blocks.
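For example, if the tablespace contains tables named emp and dept (the table names here are illustrative), you might run:

SELECT COUNT(*) FROM emp;
SELECT COUNT(*) FROM dept;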
See Also: For more information about read-only tablespaces, see Oracle8i Concepts.
Making a read-only tablespace writeable updates the control file for the datafiles, so
that you can use the read-only version of the datafiles as a starting point for
recovery.
Prerequisites
To issue this command, all of the datafiles in the tablespace must be online. Use the
DATAFILE ONLINE option of the ALTER DATABASE command to bring a datafile
online. The V$DATAFILE view lists the current status of a datafile.
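For example, the following statements (the filename is illustrative) bring a datafile online and then confirm its status:

ALTER DATABASE DATAFILE 'filename2' ONLINE;
SELECT name, status FROM v$datafile;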
Dropping Tablespaces
You can drop a tablespace and its contents (the segments contained in the
tablespace) from the database if the tablespace and its contents are no longer
required. Any tablespace in an Oracle database, except the SYSTEM tablespace, can
be dropped. You must have the DROP TABLESPACE system privilege to drop a
tablespace.
When you drop a tablespace, only the file pointers in the control files of the
associated database are dropped. The datafiles that constituted the dropped
tablespace continue to exist. To free previously used disk space, delete the datafiles
of the dropped tablespace using the appropriate commands of your operating
system after completing this procedure.
You cannot drop a tablespace that contains any active segments. For example, if a
table in the tablespace is currently being used or the tablespace contains an active
rollback segment, you cannot drop the tablespace. For simplicity, take the
tablespace offline before dropping it.
After a tablespace is dropped, the tablespace’s entry remains in the data dictionary
(see the DBA_TABLESPACES view), but the tablespace’s status is changed to
INVALID.
To drop a tablespace, use the SQL command DROP TABLESPACE. The following
statement drops the USERS tablespace, including the segments in the tablespace:
DROP TABLESPACE users INCLUDING CONTENTS;
If the tablespace is empty (does not contain any tables, views, or other structures),
you do not need to check the Including Contained Objects checkbox. If the
tablespace contains any tables with primary or unique keys referenced by foreign
keys of tables in other tablespaces and you want to cascade the drop of the
FOREIGN KEY constraints of the child tables, select the Cascade Drop of Integrity
Constraints checkbox to drop the tablespace.
Use the CASCADE CONSTRAINTS option of the DROP TABLESPACE statement to
cascade the drop of the FOREIGN KEY constraints in the child tables.
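For example, the following statement (a sketch reusing the USERS tablespace from the earlier example) drops the tablespace, its contents, and the dependent FOREIGN KEY constraints in other tablespaces:

DROP TABLESPACE users INCLUDING CONTENTS CASCADE CONSTRAINTS;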
See Also: For more information about taking tablespaces offline, see "Taking
Tablespaces Offline" on page 9-10.
For more information about the DROP TABLESPACE statement, see the Oracle8i SQL
Reference.
Scenario 1
The TABLESPACE_VERIFY procedure discovers that a segment has allocated
blocks that are marked "free" in the bit map, but no overlap between segments was
reported.
In this scenario, perform the following tasks:
■ Call the SEGMENT_EXTENT_MAP_DUMP procedure to dump the ranges that
the administrator allocated to the segment.
Scenario 2
You cannot drop a segment because blocks allocated to the segment are marked
"free" in the bit map. The system has automatically marked the segment corrupt.
In this scenario, perform the following tasks:
■ Call the SEGMENT_VERIFY procedure with the SEGMENT_CHECK_ALL
option. If no overlaps are reported, perform the following:
■ Call the SEGMENT_EXTENT_MAP_DUMP procedure to dump the ranges
that the administrator allocated to the segment.
■ For each range, call the TABLESPACE_FIX_BITMAPS procedure with the
TABLESPACE_MAKE_FREE option to mark the space as "free."
■ Call the SEGMENT_DROP_CORRUPT procedure to drop the SEG$ entry.
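The calls for this scenario might be sketched as follows. The tablespace name, relative file number, and block numbers are illustrative, and the parameter lists shown are an assumption; check the exact signatures and option constants in the Oracle8i Supplied Packages Reference before running them:

EXECUTE dbms_space_admin.segment_extent_map_dump('USERS', 4, 33);
EXECUTE dbms_space_admin.tablespace_fix_bitmaps('USERS', 4, 34, 83,
    dbms_space_admin.tablespace_extent_make_free);
EXECUTE dbms_space_admin.segment_drop_corrupt('USERS', 4, 33);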
Scenario 3
The TABLESPACE_VERIFY procedure has reported some overlapping. Some of the
real data must be sacrificed based on previous internal errors.
After choosing the object to be sacrificed, say table T1, perform the following tasks:
■ Make a list of all objects that T1 overlaps.
■ Drop table T1. If necessary, follow up by calling the
SEGMENT_DROP_CORRUPT procedure.
■ Call the SEGMENT_VERIFY procedure on all objects that T1 overlapped. If
necessary, call the TABLESPACE_FIX_BITMAPS procedure to mark appropriate
bit maps as used.
■ Rerun the TABLESPACE_VERIFY procedure to verify the problem is resolved.
Scenario 4
A set of bitmap blocks has media corruption.
In this scenario, perform the following tasks:
■ Call the TABLESPACE_REBUILD_MAPS procedure, either on all bitmap
blocks, or on a single block if only one is corrupt.
■ Call the TABLESPACE_VERIFY procedure to verify that the bit maps are
consistent.
See Also: For more information about the DBMS_SPACE_ADMIN package, see the
Oracle8i Supplied Packages Reference.
You can use transportable tablespaces to move a subset of an Oracle database and
"plug" it in to another Oracle database, essentially moving tablespaces between the
databases. Transporting tablespaces is particularly useful for:
Current Limitations
Be aware of the following limitations as you plan for and use transportable
tablespaces:
■ The source and target database must be on the same hardware platform. For
example, you can transport tablespaces between Sun Solaris Oracle databases,
or you can transport tablespaces between NT Oracle databases. However, you
cannot transport a tablespace from a SUN Solaris Oracle database to an NT
Oracle database.
■ The source and target database must have the same database block size.
■ The source and target database must use the same character set.
■ You cannot transport a tablespace to a target database in which a tablespace
with the same name already exists.
■ Currently, transportable tablespaces do not support:
– snapshot/replication
– function-based indexes
– Scoped REFs
– domain indexes (a new type of index provided by extensible indexing)
– 8.0-compatible advanced queues with multiple recipients
A set of tablespaces is self-contained only if it has no references pointing outside
the set. Violations of self-containment include the following:
■ An index inside the set of tablespaces is for a table outside of the set of
tablespaces.
■ A partitioned table is partially contained in the set of tablespaces.
■ A table inside the set of tablespaces contains a LOB column that points to LOBs
outside the set of tablespaces.
To determine whether a set of tablespaces is self-contained, you can invoke a built-
in PL/SQL procedure, giving it the list of the tablespace names and indicating that
you wish to transport referential integrity constraints. For example, suppose you
want to determine whether tablespaces ts1 and ts2 are self-contained (with
constraints taken into consideration). You can issue the following command:
execute dbms_tts.transport_set_check('ts1,ts2', TRUE)
After invoking this PL/SQL routine, you can see all violations by selecting from the
TRANSPORT_SET_VIOLATIONS view. If the set of tablespaces is self-contained,
this view will be empty. If the set of tablespaces is not self-contained, this view lists
all the violations. For example, suppose there are two violations: a foreign key
constraint, dept_fk, across the tablespace set boundary, and a partitioned table,
sales, that is partially contained in the tablespace set. Querying
TRANSPORT_SET_VIOLATIONS results in the following:
select * from transport_set_violations;
VIOLATIONS
------------------------------------
Constraint DEPT_FK between table JIM.EMP in tablespace FOO and table JIM.DEPT in
tablespace OTHER
Partitioned table JIM.SALES is partially contained in the transportable set
Object references (such as REFs) across the tablespace set are not considered
violations. REFs are not checked by the TRANSPORT_SET_CHECK routine. When
a tablespace containing dangling REFs is plugged into a database, queries that
follow a dangling REF return a user error.
See Also: For more information about REFs, see the Oracle8i Application Developer’s
Guide - Fundamentals.
2. Invoke the Export utility and specify which tablespaces are in the transportable
set, as follows:
EXP TRANSPORT_TABLESPACE=y TABLESPACES=sales_1,sales_2
TRIGGERS=y/n CONSTRAINTS=y/n GRANTS=y/n FILE=expdat.dmp
If the tablespace sets being transported are not self-contained, export will fail and
indicate that the transportable set is not self-contained. You must then return to Step
1 to resolve all violations.
You can use FROMUSER and TOUSER to change the owners of objects. For
example, if you specify FROMUSER=dcranney,jfee TOUSER=smith,
williams, objects in the tablespace set owned by dcranney in the source
database will be owned by smith in the target database after the tablespace set
is plugged in. Similarly, objects owned by jfee in the source database will be
owned by williams in the target database. In this case, the target database
does not have to have users dcranney and jfee, but must have users smith
and williams.
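For example, an import command that remaps the owners might look like the following; the datafile names are illustrative, and the exact parameter combination should be checked against the Export/Import documentation:

IMP TRANSPORT_TABLESPACE=y DATAFILES='/db/sales_1.f','/db/sales_2.f'
TABLESPACES=sales_1,sales_2 FROMUSER=dcranney,jfee
TOUSER=smith,williams FILE=expdat.dmp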
After this statement successfully executes, all tablespaces in the set being copied
remain in read-only mode. You should check the import logs to ensure no error
has occurred. At this point, you can issue the ALTER TABLESPACE...READ
WRITE statement to place the new tablespaces in read-write mode.
When dealing with a large number of datafiles, specifying the list of datafile names
in the command line can be a laborious process; it may even exceed the command
line limit. In this situation, you may use an import parameter file. For example, one
of the commands in this step is equivalent to the following:
IMP PARFILE='par.f'
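where the parameter file par.f might contain, for example (the datafile names are illustrative):

TRANSPORT_TABLESPACE=y
FILE=expdat.dmp
DATAFILES='/db/sales_1.f','/db/sales_2.f'
TABLESPACES=sales_1,sales_2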
To transport a tablespace between databases, both the source and target database
must be running Oracle8i, with the COMPATIBLE initialization parameter set to
8.1 or higher.
Object Behaviors
Most objects, whether data in a tablespace or structural information associated with
the tablespace, behave normally after being transported to a different database.
However, the following objects are exceptions:
■ ROWIDs
■ REFs
■ Privileges
■ Partitioned Tables
■ Objects
■ Advanced Queues
■ Indexes
■ Triggers
■ Snapshots/Replication
ROWIDs
When a database contains tablespaces that have been plugged in (from other
databases), the ROWIDs in that database are no longer unique. A ROWID is
guaranteed unique only within a table.
REFs
REFs are not checked when Oracle determines if a set of tablespaces is self-
contained. As a result, a plugged-in tablespace may contain dangling REFs. Any
query following dangling REFs returns a user error.
Privileges
Privileges are transported if you specify GRANTS=y during export. During import,
some grants may fail. For example, the user being granted a certain right may not
exist, or a role being granted a particular right may not exist.
Partitioned Tables
You cannot move a partitioned table via transportable tablespaces when only a
subset of the partitioned table is contained in the set of tablespaces. You must
ensure that all partitions in a table are in the tablespace set, or exchange the
partitions into tables before copying the tablespace set. However, you should note
that exchanging partitions with tables invalidates the global index of the partitioned
table.
At the target database, you can exchange the tables back into partitions if the
target database already contains a partitioned table whose columns exactly match. If
all partitions of that table come from the same foreign database, the exchange
operation is guaranteed to succeed. If they do not, in rare cases, the exchange
operation may return an error indicating that there is a data object number conflict.
If you receive a data object number conflict error when exchanging tables back into
partitions, you can move the offending partition using the ALTER TABLE MOVE
PARTITION statement. After doing so, retry the exchange operation.
If you specify the WITHOUT VALIDATION option of the exchange statement, the
statement will return immediately because it only manipulates structural
information. Moving partitions, however, may be slow because the data in the
partition can be copied. See "Transporting and Attaching Partitions for Data
Warehousing: Example" on page 9-27 for an example using partitioned tables.
Objects
A transportable tablespace set can contain:
■ tables
■ indexes
■ bitmap indexes
■ index-organized tables
■ LOBs
■ nested tables
■ varrays
■ tables with user-defined type columns
If the tablespace set contains a pointer to a BFILE, you must move the BFILE and set
the directory correctly in the target database.
Advanced Queues
You can use transportable tablespaces to move or copy Oracle advanced queues, as
long as these queues are not 8.0-compatible queues with multiple recipients. After a
queue is transported to a target database, the queue is initially disabled. After
making the transported tablespaces read-write in the target database, you can
enable the queue by starting it up via the built-in PL/SQL routine
dbms_aqadm.start_queue().
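For example (the queue name is illustrative):

EXECUTE dbms_aqadm.start_queue(queue_name => 'jul_orders_queue');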
Indexes
You can transport regular indexes and bitmap indexes. When the transportable set
fully contains a partitioned table, you can also transport the global index of the
partitioned table.
Function-based indexes and domain indexes are not supported. If they exist in a
tablespace, you must drop them before you can transport the tablespace.
Triggers
Triggers are exported without a validity check. In other words, Oracle does not
verify that the trigger refers only to objects within the transportable set. Invalid
triggers will cause a compilation error during the subsequent import.
Snapshots/Replication
Transporting snapshot or replication structural information is not supported. If a
table in the tablespace you want to transport is replicated, you must drop the
replication structural information and convert the table into a normal table before
you can transport the tablespace.
Initially, all partitions are empty, and are in the same default tablespace. Each
month, you wish to create one partition and attach it to the partitioned sales
table.
Suppose it is July 1998, and you would like to load the July sales data into the
partitioned table. In a staging database, you create a new tablespace, ts_jul. You
also create a table, jul_sales, in that tablespace with exactly the same column
types as the sales table. You can create the table jul_sales using the CREATE
TABLE...AS SELECT statement. After creating and populating jul_sales, you can
also create an index, jul_sale_index, for the table, indexing the same column as
the local indexes in the sales table. After building the index, transport the
tablespace ts_jul to the data warehouse.
In the data warehouse, add a partition to the sales table for the July sales data.
This also creates another partition for the local nonprefixed index:
ALTER TABLE sales ADD PARTITION jul98 VALUES LESS THAN (1998, 8, 1);
Attach the transported table jul_sales to the table sales by exchanging it with
the new partition:
ALTER TABLE sales EXCHANGE PARTITION jul98 WITH TABLE jul_sales INCLUDING INDEXES
WITHOUT VALIDATION;
This statement places the July sales data into the new partition jul98, attaching the
new data to the partitioned table. This statement also converts the index
jul_sale_index into a partition of the local index for the sales table. This
statement should return immediately, because it only operates on the structural
information; it simply switches database pointers. If you know that the data in the
new partition does not overlap with data in previous partitions, you are advised to
specify the WITHOUT VALIDATION option; otherwise the statement will go
through all the new data in the new partition in an attempt to validate the range of
that partition.
If all partitions of the sales table came from the same staging database (the staging
database is never destroyed), the exchange statement will always succeed. In
general, however, if data in a partitioned table comes from different databases, it’s
possible that the exchange operation may fail. For example, if the jan98 partition
of sales did not come from the same staging database, the above exchange
operation can fail, returning the following error:
ORA-19728: data object number conflict between table JUL_SALES and partition JAN98 in
table SALES
To resolve this conflict, move the offending partition by issuing the following
statement:
ALTER TABLE sales MOVE PARTITION jan98;
After the exchange succeeds, you can safely drop jul_sales and
jul_sale_index (both are now empty). Thus you have successfully loaded the
July sales data into your data warehouse.
You can remove the CD while the database is still up. Subsequent queries to the
tablespace will return an error indicating that Oracle cannot open the datafiles on
the CD. However, operations to other parts of the database are not affected. Placing
the CD back into the drive makes the tablespace readable again.
Removing the CD is the same as removing the datafiles for a read-only tablespace.
If you shut down and restart the database, Oracle will indicate that it cannot find
the removed datafile and will not open the database (unless you set the
initialization parameter READ_ONLY_OPEN_DELAYED to true). When
READ_ONLY_OPEN_DELAYED is set to TRUE, Oracle reads the file only when
someone queries the plugged-in tablespace. Thus, when plugging in a tablespace on
a CD, you should always set the READ_ONLY_OPEN_DELAYED initialization
parameter to TRUE, unless the CD is permanently attached to the database.
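For example, you might add the following line to the initialization parameter file:

READ_ONLY_OPEN_DELAYED = TRUE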
■ Plug the tablespaces into each of the databases on which you wish to mount the
tablespace. Generate a transportable set in a single database. Put the datafiles in
the transportable set on a disk accessible to all databases. Import the structural
information into each database.
■ Generate the transportable set in one of the databases and plug it into other
databases. If you use this approach, it is assumed that the datafiles are already
on the shared disk, and they belong to an existing tablespace in one of the
databases. You can make the tablespace read-only, generate the transportable
set, and then plug the tablespace in to other databases while the datafiles
remain in the same location on the shared disk.
You can make the disk accessible to multiple computers in several ways. You can
use either a clustered file system or raw disk, as required by Oracle Parallel
Server. Because Oracle only reads these datafiles on the shared disk, you can
also use NFS. Be aware, however, that if a user queries the shared tablespace while
NFS is down, the database may hang until the NFS operation times out.
Later, you can drop the read-only tablespace in some of the databases. Doing so will
not modify the datafiles for the tablespace; thus the drop operation will not corrupt
the tablespace. Do not make the tablespace read-write unless only one database is
mounting the tablespace.
TOTAL shows the amount of free space in each tablespace, PIECES shows the
amount of fragmentation in the datafiles of the tablespace, and MAXIMUM shows
the largest contiguous area of space. This query is useful when you are going to
create a new object or you know that a segment is about to extend, and you want to
make sure that there is enough space in the containing tablespace.
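A query of this general form against the DBA_FREE_SPACE view produces these values (a sketch; the column aliases are chosen to match the discussion):

SELECT tablespace_name, SUM(bytes) total, COUNT(*) pieces,
       MAX(bytes) maximum
FROM dba_free_space
GROUP BY tablespace_name;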
This chapter describes the various aspects of datafile management, and includes the
following topics:
■ Guidelines for Managing Datafiles
■ Creating and Adding Datafiles to a Tablespace
■ Changing a Datafile’s Size
■ Altering Datafile Availability
■ Renaming and Relocating Datafiles
■ Verifying Data Blocks in Datafiles
■ Viewing Information About Datafiles
See Also: Datafiles can also be created as part of database recovery from a media
failure. For more information, see the Oracle8i Backup and Recovery Guide.
When determining a value for DB_FILES, take the following into consideration:
■ If the value of DB_FILES is too low, you will be unable to add datafiles beyond
the DB_FILES limit without first shutting down the database.
■ If the value of DB_FILES is too high, memory is unnecessarily consumed.
Theoretically, an Oracle database can have an unlimited number of datafiles.
Nevertheless, you should consider the following when determining the number of
datafiles:
■ Performance is better with a small number of large datafiles than with a large
number of small datafiles. Large files also increase the granularity of a
recoverable unit.
■ Operating systems often impose a limit on the number of files a process can
open simultaneously. Oracle’s DBW0 process can open all online datafiles.
Oracle is also capable of treating open file descriptors as a cache, automatically
closing files when the number of open file descriptors reaches the operating
system-defined limit.
Oracle allows more datafiles in the database than the operating system-defined
limit; this can have a negative performance impact. When possible, adjust the
operating system limit on open file descriptors so that it is larger than the number
of online datafiles in the database.
The operating system specific limit on the maximum number of datafiles allowed in
a tablespace is typically 1023 files.
See Also: For more information on operating system limits, see your operating
system-specific Oracle documentation.
For information about Parallel Server operating system limits, see Oracle8i Parallel
Server Concepts and Administration.
For more information about MAXDATAFILES, see the Oracle8i SQL Reference.
If you add new datafiles to a tablespace and do not fully specify the filenames,
Oracle creates the datafiles in the default directory of the database server. Unless
you want to reuse existing files, make sure the new filenames do not conflict with
other files; otherwise, old files that have been previously dropped will be
overwritten.
The value of NEXT is the minimum size of the increments added to the file when it
extends. The value of MAXSIZE is the maximum size to which the file can
automatically extend.
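For example, the following statement (the filename and sizes are illustrative) enables automatic extension with these parameters:

ALTER DATABASE DATAFILE 'filename2'
AUTOEXTEND ON NEXT 10M MAXSIZE 500M;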
The next example disables automatic extension for the datafile FILENAME2:
ALTER DATABASE DATAFILE 'filename2'
AUTOEXTEND OFF;
See Also: For more information about the SQL statements for creating or altering
datafiles, see the Oracle8i SQL Reference.
In this example, assume that the datafile FILENAME2 has extended up to 250M.
However, because its tablespace now stores smaller objects, the datafile can be
reduced in size.
The following command decreases the size of datafile FILENAME2:
ALTER DATABASE DATAFILE 'filename2'
RESIZE 100M;
See Also: For more information about the implications resizing files has for
downgrading, see Oracle8i Migration.
For more information about the ALTER DATABASE statement, see the Oracle8i SQL
Reference.
Note: You can make all datafiles in a tablespace, other than the
files in the SYSTEM tablespace, temporarily unavailable by taking
the tablespace offline. You must leave these files in the tablespace to
bring the tablespace back online.
You can independently take a datafile online or offline by using the DATAFILE
option of the ALTER DATABASE command.
To bring a datafile online or take it offline, in either archiving mode, you must have
the ALTER DATABASE system privilege. You can perform these operations only
when the database is open in exclusive mode.
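For example, the following statements (the filename is illustrative) take a datafile offline and later bring it back online; note that in NOARCHIVELOG mode only the OFFLINE DROP option is available:

ALTER DATABASE DATAFILE 'filename2' OFFLINE;
ALTER DATABASE DATAFILE 'filename2' ONLINE;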
See Also: For more information about bringing datafiles online during media
recovery, see the Oracle8i Backup and Recovery Guide.
Renaming and relocating datafiles with these procedures changes only the pointers
to the datafiles, as recorded in the database's control file; it does not physically
rename any operating system files, nor does it copy files at the operating system
level. Therefore, renaming and relocating datafiles involves several steps. Read the
steps and examples carefully before performing these procedures.
You must have the ALTER TABLESPACE system privilege to rename datafiles of a
single tablespace.
4. Use the SQL statement ALTER TABLESPACE with the RENAME DATAFILE
option to change the filenames within the database.
For example, the following statement renames the datafiles FILENAME1 and
FILENAME2 to FILENAME3 and FILENAME4, respectively:
ALTER TABLESPACE users
RENAME DATAFILE 'filename1', 'filename2'
TO 'filename3', 'filename4';
The new file must already exist; this command does not create a file. Also, always
provide complete filenames (including their paths) to properly identify the old and
new datafiles. In particular, specify the old filename exactly as it appears in the
DBA_DATA_FILES view of the data dictionary.
Here, FILENAME1 and FILENAME2 are two fully specified filenames, each
1MB in size.
2. Back up the database.
Before making any structural changes to a database, such as renaming and
relocating the datafiles of one or more tablespaces, always completely back up
the database.
3. Take the tablespace containing the datafile offline, or shut down the database
and restart and mount it, leaving it closed. Either option closes the datafiles of
the tablespace.
4. Copy the datafiles to their new locations using operating system commands.
For this example, the existing files FILENAME1 and FILENAME2 are copied to
FILENAME3 and FILENAME4.
The next time Oracle reads a data block, it uses the checksum to detect corruption in
the block. If a corruption is detected, Oracle returns message ORA-01578 and writes
information about the corruption to a trace file.
FILE# lists the file number of each datafile; the first datafile in the SYSTEM
tablespace created with the database is always file 1. STATUS lists other information
about a datafile. If a datafile is part of the SYSTEM tablespace, its status is SYSTEM
(unless it requires recovery). If a datafile in a non-SYSTEM tablespace is online, its
status is ONLINE. If a datafile in a non-SYSTEM tablespace is offline, its status can
be either OFFLINE or RECOVER. CHECKPOINT lists the final SCN written for a
datafile’s most recent checkpoint.
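For example, you might list this information with a query such as the following (a sketch; the column names are those of the V$DATAFILE view):

SELECT file#, status, checkpoint_change#
FROM v$datafile;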
This chapter describes how to use the Database Resource Manager and includes the
following topics:
■ Using Database Resource Manager Packages
■ Database Resource Manager Views
Introduction
Typically, when database resource allocation decisions are left to the operating
system (OS), you may encounter the following problems:
■ Excessive overhead
Excessive overhead results from OS context switching between Oracle servers
when the number of servers is high.
■ Inefficient scheduling
The OS de-schedules Oracle servers while they hold latches, which is inefficient.
■ Poor resource partitioning
The OS fails to partition CPU resources appropriately among tasks of varying
importance.
■ Inability to manage database-specific resources, such as parallel slaves and
active sessions
Oracle’s Database Resource Manager allocates resources based on a resource plan
that is specified by database administrators. Database Resource Manager ultimately
offers you more control over resource management decisions and addresses the
problems caused by inefficient OS scheduling.
Administrators use the basic elements of Database Resource Manager described in
Table 11–1.
Table 11–1 Database Resource Manager Elements
Element Description
resource consumer group user sessions grouped together based on
resource processing requirements
resource plan contains directives that specify which
resources are allocated to resource
consumer groups
resource allocation method the method/policy used by Database
Resource Manager when allocating for a
particular resource; used by resource
consumer groups and resource plans
resource plan directive used by administrators to associate
resource consumer groups with particular
plans and partition resources among
resource consumer groups
See Also: For detailed conceptual information about the Database Resource
Manager, see Oracle8i Concepts.
You can use the following procedures to create, update, or delete resource plans:
create_plan(plan in varchar2, comment in varchar2,
    cpu_mth in varchar2 DEFAULT 'EMPHASIS',
    max_active_sess_target_mth in varchar2 DEFAULT
        'MAX_ACTIVE_SESS_ABSOLUTE',
    parallel_degree_limit_mth in varchar2 DEFAULT
        'PARALLEL_DEGREE_LIMIT_ABSOLUTE')
update_plan(plan in varchar2,
    new_comment in varchar2 DEFAULT NULL,
    new_cpu_mth in varchar2 DEFAULT NULL,
    new_max_active_sess_target_mth in varchar2 DEFAULT NULL,
    new_parallel_degree_limit_mth in varchar2 DEFAULT NULL)
delete_plan(plan in varchar2)
delete_plan_cascade(plan in varchar2)
The delete_plan procedure deletes the specified plan as well as all the plan
directives it refers to. The delete_plan_cascade procedure deletes the specified
plan as well as all its descendants (plan directives, subplans, resource consumer
groups). If delete_plan_cascade encounters an error, it will roll back, leaving
the plan schema unchanged.
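For example, the following call (the plan name is illustrative) deletes a plan together with all of its descendants:

EXECUTE dbms_resource_manager.delete_plan_cascade(plan => 'MYDB_PLAN');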
If you do not specify the arguments to the update_plan procedure, they remain
unchanged in the data dictionary.
If you wish to use a default resource allocation method, you need not specify it
when creating or updating a plan. The method defaults are:
■ cpu_method =’EMPHASIS’
■ parallel_degree_limit_mth =’PARALLEL_DEGREE_LIMIT_ABSOLUTE’
You need not specify the cpu_mth parameter if you wish to use the default CPU
method, which is ROUND-ROBIN.
If you do not specify the arguments for the update_consumer_group procedure,
they remain unchanged in the data dictionary.
dbms_resource_manager.validate_pending_area
dbms_resource_manager.clear_pending_area
dbms_resource_manager.submit_pending_area
Note: The changes come into effect and become active only if the
submit_pending_area procedure completes successfully.
You can also view the current schema containing your changes by selecting from
the appropriate user views while the pending area is active. You can clear the
pending area to abort the current changes any time as well. Call the validate
procedure to check whether your changes are valid.
The changes made within the pending area must adhere to the following rules:
1. No plan schema may contain any loops.
2. All plan and/or resource consumer groups referred to by plan directives must
exist.
3. All plans must have plan directives that point to either plans or resource
consumer groups.
4. All percentages in any given level must not add up to greater than 100 for the
emphasis resource allocation method.
5. A plan that is currently being used as a top plan by an active instance cannot be
deleted.
6. The plan directive parameter parallel_degree_limit_p1 can appear only
in plan directives that refer to resource consumer groups (not other resource
plans).
7. There can be no more than 32 resource consumer groups in any active plan
schema. Also, a plan can have at most 32 children. All leaves of a top plan must
be resource consumer groups; at the lowest level in a plan schema, the plan
directives must refer to consumer groups.
8. Plans and resource consumer groups may not have the same name.
9. There must be a plan directive for OTHER_GROUPS somewhere in an active
plan schema. This ensures that a session not covered by the currently active
plan is allocated resources as specified by the OTHER_GROUPS directive.
Database Resource Manager allows "orphan" resource consumer groups (resource
consumer groups with no plan directives referring to them) because you may wish
to create a resource consumer group that is not currently being used, but will be
used in the future.
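For example, rule 9 could be satisfied in the plan schema built later in this section with a directive such as the following; the percentage shown is illustrative, not taken from the example:

```sql
-- Hypothetical directive giving sessions outside the plan 20% of CPU at level 2
begin
  dbms_resource_manager.create_plan_directive(
    plan             => 'MYDB_PLAN',      -- plan from the example schema
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'all other sessions',
    cpu_p2           => 20);              -- illustrative percentage
end;
/
```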
If any of the above rules are broken when checked by the validate or submit
procedures, you will receive an error message. You can then make changes to fix
the problem(s) and reissue the validate or submit procedure.
The following commands create a multi-level schema, and use the default plan and
resource consumer group methods as illustrated in Figure 11–1:
begin
dbms_resource_manager.create_pending_area();
dbms_resource_manager.create_plan(plan => 'BUGDB_PLAN',
  comment => 'Resource plan/method for bug users'' sessions');
dbms_resource_manager.create_plan(plan => 'MAILDB_PLAN',
  comment => 'Resource plan/method for mail users'' sessions');
dbms_resource_manager.create_plan(plan => 'MYDB_PLAN',
  comment => 'Resource plan/method for bug and mail users'' sessions');
dbms_resource_manager.create_consumer_group(consumer_group => 'Bug_Online_group',
  comment => 'Resource consumer group/method for online bug users'' sessions');
dbms_resource_manager.create_consumer_group(consumer_group => 'Bug_Batch_group',
  comment => 'Resource consumer group/method for bug users'' sessions who run batch jobs');
dbms_resource_manager.create_consumer_group(consumer_group => 'Bug_Maintenance_group',
  comment => 'Resource consumer group/method for users'' sessions who maintain the bug db');
dbms_resource_manager.create_consumer_group(consumer_group => 'Mail_users_group',
  comment => 'Resource consumer group/method for mail users'' sessions');
dbms_resource_manager.create_consumer_group(consumer_group => 'Mail_Postman_group',
  comment => 'Resource consumer group/method for mail postman');
dbms_resource_manager.create_consumer_group(consumer_group => 'Mail_Maintenance_group',
  comment => 'Resource consumer group/method for users'' sessions who maintain the mail db');
dbms_resource_manager.create_plan_directive(plan => 'BUGDB_PLAN',
  group_or_subplan => 'Bug_Online_group',
  comment => 'online bug users'' sessions at level 0', cpu_p1 => 80, cpu_p2 => 0,
  parallel_degree_limit_p1 => 8);
dbms_resource_manager.create_plan_directive(plan => 'BUGDB_PLAN',
  group_or_subplan => 'Bug_Batch_group',
  comment => 'batch bug users'' sessions at level 0', cpu_p1 => 20, cpu_p2 => 0,
  parallel_degree_limit_p1 => 2);
dbms_resource_manager.create_plan_directive(plan => 'BUGDB_PLAN',
  group_or_subplan => 'Bug_Maintenance_group',
  comment => 'bug maintenance users'' sessions at level 1', cpu_p1 => 0, cpu_p2 => 100,
  parallel_degree_limit_p1 => 3);
[Figure 11–1 shows MYDB_PLAN allocating 30% of CPU at level 1 to MAILDB_PLAN
and 70% at level 1 to BUGDB_PLAN.]
The initial consumer group of a user is the consumer group to which any session
created by that user initially belongs. Before a group can be set as a user’s initial
consumer group, you must grant the switch privilege for that group directly to the
user or to PUBLIC. The switch privilege for the initial consumer group cannot
come from a role granted to that user (these semantics are similar to those for
ALTER USER DEFAULT ROLE).
If you have not set the initial consumer group for a user, the user’s initial consumer
group will automatically be the consumer group DEFAULT_CONSUMER_GROUP.
DEFAULT_CONSUMER_GROUP has switch privileges granted to PUBLIC;
therefore, all users are automatically granted switch privilege for this consumer
group.
Upon deletion of a consumer group, all users having the deleted group as their
initial consumer group will have the DEFAULT_CONSUMER_GROUP as their
initial consumer group. All sessions belonging to a deleted consumer group will be
switched to DEFAULT_CONSUMER_GROUP.
You can use the following procedure to change the resource consumer group for all
sessions with a given user id:
switch_consumer_group_for_user(user in varchar2, class in varchar2)
This procedure also changes the resource consumer group of any parallel query
(PQ) slave sessions that are related to the top user session.
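For example, to move all of one user’s sessions into a maintenance group; the user and group names below are placeholders:

```sql
-- Hypothetical call: move all of SCOTT's sessions (and their PQ slaves)
begin
  dbms_resource_manager.switch_consumer_group_for_user(
    user  => 'SCOTT',                     -- placeholder user name
    class => 'Mail_Maintenance_group');   -- placeholder target group
end;
/
```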
See Also: For information about views associated with Database Resource Manager,
see the Oracle8i Reference.
If you grant a user permission to switch to a particular consumer group, then that
user can switch their current consumer group to the new consumer group.
If you grant a role permission to switch to a particular resource consumer group,
then any users who have been granted that role and have enabled that role can
immediately switch their current consumer group to the new consumer group.
If you grant PUBLIC the permission to switch to a particular consumer group, then
any user can switch to that group.
If the grant_option argument is TRUE, then users granted switch privilege for the
consumer group may also grant switch privileges for that consumer group to
others.
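Switch privileges are granted with the DBMS_RESOURCE_MANAGER_PRIVS package. A minimal sketch follows; the grantee and group names are placeholders:

```sql
-- Hypothetical grant: SCOTT may switch to Bug_Batch_group,
-- and may grant that privilege to others (grant_option => TRUE)
begin
  dbms_resource_manager_privs.grant_switch_consumer_group(
    grantee_name   => 'SCOTT',
    consumer_group => 'Bug_Batch_group',
    grant_option   => TRUE);
end;
/
```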
If you revoke a user’s switch privileges to a particular consumer group, then any
subsequent attempts by that user to switch to that consumer group will fail. If you
revoke the initial consumer group from a user, then that user will automatically be
part of the DEFAULT_CONSUMER_GROUP when logging in.
If you revoke a role’s switch privileges to a consumer group, then any users who
only had switch privilege for the consumer group via that role will not be able to
subsequently switch to that consumer group.
If you revoke from PUBLIC switch privileges to a consumer group, then any users
who could previously only use the consumer group via PUBLIC will not be able to
subsequently switch to that consumer group.
This procedure enables users to switch to a consumer group for which they have
the switch privilege. If the caller is another procedure, then this procedure enables
users to switch to a consumer group for which the owner of that procedure has
switch privileges. This procedure also returns the old consumer group to users, and
can be used to switch back to the old consumer group later.
The parameter initial_group_on_error controls the behavior of the procedure
in the event of an error; if the parameter is set to TRUE and an error occurs, the
invoker’s consumer group is set to his/her initial consumer group.
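The procedure described here corresponds to DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP. A minimal sketch of a self-service switch; the group name is a placeholder:

```sql
-- Hypothetical switch; the old group is returned so the caller can
-- switch back later
declare
  old_group varchar2(30);
begin
  dbms_session.switch_current_consumer_group(
    new_consumer_group     => 'Bug_Batch_group',  -- placeholder group
    old_consumer_group     => old_group,          -- returned to the caller
    initial_group_on_error => TRUE);
end;
/
```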
This chapter describes guidelines for managing schema objects, and includes the
following topics:
■ Managing Space in Data Blocks
■ Setting Storage Parameters
■ Deallocating Space
■ Understanding Space Use of Datatypes
You should familiarize yourself with the concepts in this chapter before attempting
to manage specific schema objects as described in Chapters 13–18.
This indicates that 20% of each data block used for this table’s data segment will be
kept free and available for possible updates to the existing rows already within each
block. Figure 12–1 illustrates PCTFREE.
Notice that before the block reaches PCTFREE, the free space of the data block is
filled by both the insertion of new rows and by the growth of the data block header.
Specifying PCTFREE
The default for PCTFREE is 10 percent. You can use any integer between 0 and 99,
inclusive, as long as the sum of PCTFREE and PCTUSED does not exceed 100.
A smaller PCTFREE has the following effects:
■ reserves less room for updates to expand existing table rows
■ allows inserts to fill the block more completely
■ may save space, because the total data for a table or index is stored in fewer
blocks (more rows or entries per block)
A small PCTFREE might be suitable, for example, for a segment that is rarely
changed.
A larger PCTFREE has the following effects:
■ reserves more room for future updates to existing table rows
■ may require more blocks for the same amount of inserted data (inserting fewer
rows per block)
■ may improve update performance, because Oracle does not need to chain row
pieces as frequently, if ever
A large PCTFREE is suitable, for example, for segments that are frequently updated.
Ensure that you understand the nature of the table or index data before setting
PCTFREE. Updates can cause rows to grow. New values might not be the same size
as values they replace. If there are many updates in which data values get larger,
PCTFREE should be increased. If updates to rows do not affect the total row width,
PCTFREE can be low. Your goal is to find a satisfactory trade-off between densely
packed data and good update performance.
PCTFREE for Nonclustered Tables If the data in the rows of a nonclustered table is
likely to increase in size over time, reserve some space for these updates. Otherwise,
updated rows are likely to be chained among blocks.
PCTFREE for Clustered Tables The discussion for nonclustered tables also applies
to clustered tables. However, if PCTFREE is reached, new rows from any table
contained in the same cluster key go into a new data block that is chained to the
existing cluster key.
PCTFREE for Indexes You can specify PCTFREE only when initially creating an
index.
In this case, a data block used for this table’s data segment is not considered for the
insertion of any new rows until the amount of used space in the block falls to 39%
or less (assuming that the block’s used space has previously reached PCTFREE).
Figure 12–2 illustrates this.
[Figure 12–2 shows a block with PCTUSED set to 40: up to 60% of the block may
remain unused space before inserts resume.]
Specifying PCTUSED
The default value for PCTUSED is 40 percent. After the free space in a data block
reaches PCTFREE, no new rows are inserted in that block until the percentage of
space used falls below PCTUSED. The percentage is measured against the block
space available for data, that is, the total block space minus overhead.
You can specify any integer between 0 and 99 (inclusive) for PCTUSED, as long as
the sum of PCTUSED and PCTFREE does not exceed 100.
A smaller PCTUSED has the following effects:
■ reduces processing costs incurred during UPDATE and DELETE statements for
moving a block to the free list when it has fallen below that percentage of usage
■ increases the unused space in a database
A larger PCTUSED has the following effects:
■ improves space efficiency
■ increases processing cost during INSERTs and UPDATEs
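For example, both parameters can be set together when creating a table; the table and values below are illustrative, not a recommendation:

```sql
-- Keep 20% of each block free for row growth; do not consider a block
-- for new inserts again until its used space drops below 40%.
CREATE TABLE audit_trail
  ( entry_id NUMBER,
    message  VARCHAR2(200) )
  PCTFREE 20
  PCTUSED 40;
```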
INITIAL
The size, in bytes, of the first extent allocated when a segment is created.
Default: 5 data blocks
Minimum: 2 data blocks (rounded up)
Maximum: operating system specific
NEXT
The size, in bytes, of the next incremental extent to be allocated for a segment. The
second extent is equal to the original setting for NEXT. From there forward, NEXT is
set to the previous size of NEXT multiplied by (1 + PCTINCREASE/100).
Default: 5 data blocks
Minimum: 1 data block
Maximum: operating system specific
MAXEXTENTS
The total number of extents, including the first, that can ever be allocated for the
segment.
Default: dependent on the data block size and operating system
Minimum: 1 (extent)
Maximum: unlimited
MINEXTENTS
The total number of extents to be allocated when the segment is created. This allows
for a large allocation of space at creation time, even if contiguous space is not
available.
Default: 1 (extent)
Minimum: 1 (extent)
Maximum: unlimited
PCTINCREASE
The percentage by which each incremental extent grows over the last incremental
extent allocated for a segment. If PCTINCREASE is 0, then all incremental extents
are the same size. If PCTINCREASE is greater than zero, then each time NEXT is
calculated, it grows by PCTINCREASE. PCTINCREASE cannot be negative.
The new NEXT equals 1 + PCTINCREASE/100, multiplied by the size of the last
incremental extent (the old NEXT) and rounded up to the next multiple of a block
size.
Default: 50 (%)
Minimum: 0 (%)
Maximum: operating system specific
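The NEXT calculation above can be sketched as a small PL/SQL loop. This is an illustration of the arithmetic only, not something Oracle runs; the 2K block size is an assumption, and the server may round further (for example, to operating system-specific multiples):

```sql
-- Sketch of how successive NEXT values grow with PCTINCREASE = 50
-- (in SQL*Plus, SET SERVEROUTPUT ON to see the output)
DECLARE
  blk    CONSTANT NUMBER := 2048;    -- assumed DB_BLOCK_SIZE of 2K
  nxt    NUMBER := 102400;           -- NEXT starts at 50 blocks
  pctinc CONSTANT NUMBER := 50;      -- PCTINCREASE
BEGIN
  FOR i IN 1..3 LOOP
    -- grow NEXT and round up to the next multiple of the block size
    nxt := CEIL(nxt * (1 + pctinc/100) / blk) * blk;
    DBMS_OUTPUT.PUT_LINE('NEXT = ' || nxt || ' bytes (' || nxt/blk || ' blocks)');
  END LOOP;
END;
/
```

With these inputs the successive values are 153600 bytes (75 blocks), 231424 bytes (113 blocks), and 348160 bytes (170 blocks), matching the NEXT column of Table 12–1 below.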
INITRANS
Specifies the number of DML transaction entries for which space should be initially
reserved in the data block header. Space is reserved in the headers of all data blocks
in the associated data or index segment.
The default value is 1 for tables and 2 for clusters and indexes.
MAXTRANS
As multiple transactions concurrently access the rows of the same data block, space
is allocated for each DML transaction’s entry in the block. Once the space reserved
by INITRANS is depleted, space for additional transaction entries is allocated out of
the free space in a block, if available. Once allocated, this space effectively becomes
a permanent part of the block header. The MAXTRANS parameter limits the
number of transaction entries that can concurrently use data in a data block.
Therefore, you can limit the amount of free space that can be allocated for
transaction entries in a data block using MAXTRANS.
The default value is an operating system-specific function of block size, not
exceeding 255.
See Also: For specific details about storage parameters, see the Oracle8i SQL
Reference.
Some defaults are operating system specific; see your operating system-specific
Oracle documentation.
a high INITRANS (to eliminate the overhead of having to allocate transaction entry
space, as required when the object is in use) and allowing a higher MAXTRANS so
that no user has to wait to access necessary data blocks.
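For example, a segment expected to sustain many concurrent transactions per block might be created as follows; the table name and values are illustrative:

```sql
-- Reserve space for four transaction entries in each block header up
-- front, and allow at most sixteen concurrent entries per block.
CREATE TABLE ticket_queue
  ( ticket_id NUMBER,
    status    VARCHAR2(10) )
  INITRANS 4
  MAXTRANS 16;
```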
A PCTFREE setting for an index only has an effect when the index is created. You
cannot specify PCTUSED for an index segment.
Any storage parameter specified at the object level overrides the corresponding
option set at the tablespace level. When storage parameters are not explicitly set at
the object level, they default to those at the tablespace level. When storage
parameters are not set at the tablespace level, Oracle system defaults apply. If
storage parameters are altered, the new options apply only to the extents not yet
allocated.
Also assume that the initialization parameter DB_BLOCK_SIZE is set to 2K. The
following table shows how extents are allocated for the TEST_STORAGE table. Also
shown is the value for the incremental extent, as can be seen in the NEXT column of
the USER_SEGMENTS or DBA_SEGMENTS data dictionary views:
Table 12–1 Extent Allocations
Extent#   Extent Size                   Value for NEXT
1         50 blocks or 102400 bytes     50 blocks or 102400 bytes
2         50 blocks or 102400 bytes     75 blocks or 153600 bytes
3         75 blocks or 153600 bytes     113 blocks or 231424 bytes
4         115 blocks or 235520 bytes    170 blocks or 348160 bytes
5         170 blocks or 348160 bytes    255 blocks or 522240 bytes
As a result, the third extent is 500K when allocated, the fourth is (500K*1.5)=750K,
and so on.
Deallocating Space
This section describes aspects of deallocating unused space, and includes the
following topics:
■ Viewing the High Water Mark
■ Issuing Space Deallocation Statements
It is not uncommon to allocate space to a segment, only to find out later that it is not
being used. For example, you may set PCTINCREASE to a high value, which could
create a large extent that is only partially used. Or you could explicitly overallocate
space by issuing the ALTER TABLE...ALLOCATE EXTENT statement. If you find
that you have unused or overallocated space, you can release it so that the unused
space can be used by other segments.
When you explicitly identify an amount of unused space to KEEP, this space is
retained while the remaining unused space is deallocated. If the remaining number
of extents becomes smaller than MINEXTENTS, the MINEXTENTS value changes
to reflect the new number. If the initial extent becomes smaller, the INITIAL value
changes to reflect the new size of the initial extent.
If you do not specify the KEEP clause, all unused space (everything above the high
water mark) is deallocated, as long as the size of the initial extent and
MINEXTENTS are preserved. Thus, even if the high water mark occurs within the
MINEXTENTS boundary, MINEXTENTS remains and the initial extent size is not
reduced.
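For example, using the DQUON table from the examples that follow, the statements below release unused space above the high water mark; the KEEP variant retains 10K of the unused space:

```sql
-- Deallocate all unused space above the high water mark
ALTER TABLE dquon DEALLOCATE UNUSED;

-- Deallocate unused space, but retain 10K of it in the segment
ALTER TABLE dquon DEALLOCATE UNUSED KEEP 10K;
```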
See Also: For details on the syntax and options associated with deallocating unused
space, see the Oracle8i SQL Reference.
You can verify that deallocated space is freed by looking at the DBA_FREE_SPACE
view. For more information on this view, see the Oracle8i Reference.
[Figure: table DQUON with Extent 1 (10K) and Extent 2 (10K)]
If you deallocate all unused space from DQUON and KEEP 10K (see Figure 12–4),
the third extent is deallocated and the second extent remains intact.
[Figure: table DQUON with Extent 1 (10K) and Extent 2 (20K)]
Example 2
When you issue the ALTER TABLE DQUON DEALLOCATE UNUSED statement,
you completely deallocate the third extent, and the second extent is left with 10K.
Note that the size of the next allocated extent defaults to the size of the last
completely deallocated extent, which in this example is 30K. However, you can
explicitly set the size of the next extent using the ALTER...STORAGE (NEXT)
statement.
Example 3
To preserve the MINEXTENTS number of extents, DEALLOCATE can retain extents
that were originally allocated to an instance (added below the high water mark),
while deallocating extents that were originally allocated to the segment.
For example, table DQUON has a MINEXTENTS value of 2. Examples 1 and 2 still
yield the same results. However, if the MINEXTENTS value is 3, then the ALTER
TABLE DQUON DEALLOCATE UNUSED statement has no effect, while the
ALTER TABLE DQUON DEALLOCATE UNUSED KEEP 10K statement removes
the third extent and changes the value of MINEXTENTS to 2.
NUMBER Datatype: The NUMBER datatype stores fixed and floating point
numbers: positive numbers in the range 1 x 10^-130 to 9.99...9 x 10^125 (with up to
38 significant digits) and negative numbers in the range -1 x 10^-130 to
-9.99...9 x 10^125.
DATE Datatype: The DATE datatype stores point-in-time values such as dates and
times. Date data is stored in fixed-length fields of seven bytes each.
See Also: For more information about NLS and support for different character sets,
see the Oracle8i National Language Support Guide.
For more information about datatypes, see the Oracle8i SQL Reference.
This chapter describes various aspects of managing partitioned tables and indexes,
and includes the following sections:
■ What Are Partitioned Tables and Indexes?
■ Partitioning Methods
■ Creating Partitions
■ Maintaining Partitions
Partitioning Methods
There are three partitioning methods:
■ Range Partitioning
■ Hash Partitioning
■ Composite Partitioning
You can store hash partitions in specific tablespaces, as shown in the following
statement:
CREATE TABLE scubagear (...)
STORAGE (INITIAL 10k)
PARTITION BY HASH (id) PARTITIONS 16
STORE IN (h1to4, h4to8, h8to12, h12to16);
Or, you can name and store each hash partition in a specific tablespace:
CREATE TABLE product(...)
STORAGE (INITIAL 10k)
PARTITION BY HASH (id)
(PARTITION p1 TABLESPACE h1,
PARTITION p2 TABLESPACE h2);
Coalescing Hash Partitions To remove a single hash partition and redistribute the
data, use the following statement:
ALTER TABLE scubagear COALESCE PARTITION;
Note that the partition being coalesced is determined by the hash function. Also,
when you coalesce a hash partition and redistribute the data, local indexes are not
maintained. You can coalesce the hash partition in parallel. Local index partitions
corresponding to partitions that absorbed rows must be rebuilt from existing
partitions.
Adding Hash Partitions To add a single hash partition and redistribute the data, use
one of the following statements:
ALTER TABLE scubagear ADD PARTITION;
ALTER TABLE scubagear
ADD PARTITION p3 TABLESPACE t3;
Local indexes are not maintained when you add a hash partition. You can also add
the hash partition in parallel.
See Also: For detailed syntax information about the CREATE TABLE
PARTITION...BY HASH and ALTER TABLE statements, see the Oracle8i SQL
Reference.
For more details about hash partitioning, see Oracle8i Concepts.
The following statement shows how you can specify subpartition names and the
tablespaces in which subpartitions should be placed.
CREATE TABLE scubagear (equipno NUMBER, equipname VARCHAR(32), price NUMBER)
PARTITION BY RANGE (equipno) SUBPARTITION BY HASH (equipname)
SUBPARTITIONS 8 STORE IN (ts1, ts3, ts5, ts7)
(PARTITION p1 VALUES LESS THAN (1000) PCTFREE 40,
PARTITION p2 VALUES LESS THAN (2000) STORE IN (ts2, ts4, ts6, ts8),
PARTITION p3 VALUES LESS THAN (MAXVALUE)
(SUBPARTITION p3_s1 TABLESPACE ts4,
SUBPARTITION p3_s2 TABLESPACE ts5));
You can also allocate or deallocate storage for a subpartition of a table or index
using the MODIFY SUBPARTITION clause.
This next statement simply shows how to rename a subpartition that has a system-
generated name that was a consequence of adding a partition to an underlying
table:
ALTER INDEX scuba RENAME SUBPARTITION sys_subp3254 TO bcd_types;
See Also: For more details about the syntax of statements in this section, see the
Oracle8i SQL Reference.
Creating Partitions
Creating a partitioned table is very similar to creating a table or index: you use
the CREATE TABLE statement with the PARTITION BY clause. Also, you must
specify the tablespace name for each partition.
The following example shows a CREATE TABLE statement that contains four
partitions, one for each quarter’s worth of sales. A row with SALE_YEAR=1998,
SALE_MONTH=7, and SALE_DAY=18 has the partitioning key (1998, 7, 18), and is
in the third partition, in the tablespace TSC. A row with SALE_YEAR=1998,
SALE_MONTH=7, and SALE_DAY=1 has the partitioning key (1998, 7, 1), and also
is in the third partition.
CREATE TABLE sales
( invoice_no NUMBER,
sale_year INT NOT NULL,
sale_month INT NOT NULL,
sale_day INT NOT NULL )
PARTITION BY RANGE ( sale_year, sale_month, sale_day)
( PARTITION sales_q1 VALUES LESS THAN ( 1998, 04, 01 )
TABLESPACE tsa,
PARTITION sales_q2 VALUES LESS THAN ( 1998, 07, 01 )
TABLESPACE tsb,
PARTITION sales_q3 VALUES LESS THAN ( 1998, 10, 01 )
TABLESPACE tsc,
PARTITION sales_q4 VALUES LESS THAN ( 1999, 01, 01 )
TABLESPACE tsd);
See Also: For more information about the CREATE TABLE statement and
PARTITION clause, see Oracle8i SQL Reference.
For information about partition keys, partition names, bounds, and equipartitioned
tables and indexes, see Oracle8i Concepts.
Maintaining Partitions
This section describes how to perform the following specific partition maintenance
operations:
■ Moving Partitions
■ Adding Partitions
■ Dropping Partitions
■ Coalescing Partitions
Moving Partitions
You can use the MOVE PARTITION clause of the ALTER TABLE statement to:
■ re-cluster data and reduce fragmentation
■ move a partition to another tablespace
■ modify create-time attributes
Typically, you can change the physical storage attributes of a partition in a single
step via an ALTER TABLE/INDEX...MODIFY PARTITION statement. However,
there are some physical attributes, such as TABLESPACE, that you cannot modify
via MODIFY PARTITION. In these cases you can use the MOVE PARTITION
clause.
This statement always drops the partition’s old segment and creates a new segment,
even if you don’t specify a new tablespace.
When the partition you are moving contains data, MOVE PARTITION marks the
matching partition in each local index, and all global index partitions as unusable.
You must rebuild these index partitions after issuing MOVE PARTITION.
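For example, the following statements sketch a typical move; the table, partition, index, and tablespace names are hypothetical:

```sql
-- Move the partition into a different tablespace; the old segment is
-- dropped and a new one created even if the tablespace were unchanged.
ALTER TABLE parts MOVE PARTITION depot2 TABLESPACE ts094;

-- The move marks the matching local index partition (and all global
-- index partitions) unusable, so rebuild afterwards.
ALTER INDEX parts_ix REBUILD PARTITION depot2;
```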
Adding Partitions
This section describes how to add new partitions to a partitioned table and how
partitions are added to local indexes.
When there are local indexes defined on the table and you issue the ALTER
TABLE...ADD PARTITION statement, a matching partition is also added to each
local index. Since Oracle assigns names and default physical storage attributes to
the new index partitions, you may wish to rename or alter them after the ADD
operation is complete.
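As an illustration, the following statements add a partition to the SALES table shown earlier in this chapter and then rename the partition that Oracle added to a local index; the index name and the system-generated partition name are placeholders:

```sql
-- Add a new range partition above the existing high bound of SALES
ALTER TABLE sales ADD PARTITION sales_q5
  VALUES LESS THAN ( 1999, 04, 01 )
  TABLESPACE tse;

-- Rename the system-generated local index partition, if desired
-- (index and generated names are hypothetical)
ALTER INDEX sales_ix RENAME PARTITION sys_p00123 TO sales_q5;
```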
Dropping Partitions
This section describes how to use the ALTER TABLE DROP PARTITION statement
to drop table and index partitions and their data.
Dropping Table Partitions Containing Data and Global Indexes If the partition
contains data and one or more global indexes are defined on the table, use either of
the following methods to drop the table partition:
1. Leave the global indexes in place during the ALTER TABLE...DROP
PARTITION statement. In this situation DROP PARTITION marks all global
index partitions unusable, so you must rebuild them afterwards.
This method is most appropriate for large tables where the partition being
dropped contains a significant percentage of the total data in the table.
2. Issue the DELETE command to delete all rows from the partition before you
issue the ALTER TABLE...DROP PARTITION statement. The DELETE
command updates the global indexes, and also fires triggers and generates redo
and undo logs.
For example, a DBA wishes to drop the first partition, which has a partition
bound of 10000. The DBA issues the following statements:
DELETE FROM sales WHERE TRANSID < 10000;
ALTER TABLE sales DROP PARTITION dec94;
This method is most appropriate for small tables, or for large tables when the
partition being dropped contains a small percentage of the total data in the
table.
Dropping Table Partitions Containing Data and Referential Integrity Constraints If the
partition contains data and the table has referential integrity constraints, use either
of the following methods to drop the table partition:
1. Disable the integrity constraints, issue the ALTER TABLE...DROP
PARTITION statement, then re-enable the integrity constraints.
This method is most appropriate for large tables where the partition being
dropped contains a significant percentage of the total data in the table.
2. Issue the DELETE command to delete all rows from the partition before you
issue the ALTER TABLE...DROP PARTITION statement. The DELETE
command enforces referential integrity constraints, and also fires triggers and
generates redo and undo log.
DELETE FROM sales WHERE TRANSID < 10000;
ALTER TABLE sales DROP PARTITION dec94;
This method is most appropriate for small tables or for large tables when the
partition being dropped contains a small percentage of the total data in the
table.
Coalescing Partitions
You can distribute the contents of a partition (selected by the RDBMS) of a
hash-partitioned table into one or more remaining partitions determined by the
hash function, and then destroy the selected partition.
The following statement reduces by one the number of partitions in a table by
coalescing its last partition:
ALTER TABLE ouu1
COALESCE PARTITION;
Truncating Partitions
Use the ALTER TABLE...TRUNCATE PARTITION statement when you wish to
remove all rows from a table partition. You cannot truncate an index partition;
however, the ALTER TABLE TRUNCATE PARTITION statement truncates the
matching partition in each local index.
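For example, assuming the SALES table shown earlier in this chapter, the following statement removes all rows from its first partition; the optional DROP STORAGE clause also deallocates the freed space:

```sql
-- Remove all rows from partition sales_q1 and deallocate the freed
-- space; matching local index partitions are truncated as well.
ALTER TABLE sales TRUNCATE PARTITION sales_q1 DROP STORAGE;
```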
Truncating Table Partitions Containing Data and Global Indexes If the partition
contains data and global indexes, use either of the following methods to truncate
the table partition:
1. Leave the global indexes in place during the ALTER TABLE TRUNCATE
PARTITION statement.
This method is most appropriate for large tables where the partition being
truncated contains a significant percentage of the total data in the table.
2. Issue the DELETE command to delete all rows from the partition before you
issue the ALTER TABLE...TRUNCATE PARTITION statement. The DELETE
command updates the global indexes, and also fires triggers and generates redo
and undo log.
This method is most appropriate for small tables, or for large tables when the
partition being truncated contains a small percentage of the total data in the
table.
Truncating Table Partitions Containing Data and Referential Integrity Constraints If the
partition contains data and the table has referential integrity constraints, use either
of the following methods to truncate the table partition:
1. Disable the integrity constraints, issue the ALTER TABLE...TRUNCATE
PARTITION statement, then re-enable the integrity constraints.
This method is most appropriate for large tables where the partition being
truncated contains a significant percentage of the total data in the table.
2. Issue the DELETE command to delete all rows from the partition before you
issue the ALTER TABLE...TRUNCATE PARTITION statement. The DELETE
command enforces referential integrity constraints, and also fires triggers and
generates redo and undo log.
This method is most appropriate for small tables, or for large tables when the
partition being truncated contains a small percentage of the total data in the
table.
Splitting Partitions
This form of ALTER TABLE/INDEX divides a partition into two partitions. You can
use the SPLIT PARTITION clause when a partition becomes too large and causes
backup, recovery or maintenance operations to take a long time. You can also use
the SPLIT PARTITION clause to redistribute the I/O load; note that you cannot use
this clause for hash partitions.
Splitting a Table Partition: Scenario In this scenario "fee_katy" is a partition in the table
"VET_cats," which has a local index, JAF1. There is also a global index, VET, on
the table. VET contains two partitions, VET_parta and VET_partb.
To split the partition "fee_katy", and rebuild the index partitions, the DBA issues the
following statements:
ALTER TABLE vet_cats SPLIT PARTITION
fee_katy at (100) INTO ( PARTITION
fee_katy1 ..., PARTITION fee_katy2 ...);
ALTER INDEX JAF1 REBUILD PARTITION SYS_P00067;
ALTER INDEX JAF1 REBUILD PARTITION SYS_P00068;
ALTER INDEX VET REBUILD PARTITION VET_parta;
ALTER INDEX VET REBUILD PARTITION VET_partb;
Note: You must examine the data dictionary to locate the names
assigned to the new local index partitions. In this particular
scenario, they are SYS_P00067 and SYS_P00068. If you wish, you
can rename them. Also, unless JAF1 already contained partitions
fee_katy1 and fee_katy2, names assigned to local index partitions
produced by this split will match those of corresponding base table
partitions.
You only need to rebuild if the index partition that you split was unusable.
Merging Partitions
You can merge the contents of two adjacent partitions of a range or composite
partitioned table into one. The resulting partition inherits the higher upper bound
of the two merged partitions.
The following statement merges two adjacent partitions of a range partitioned table:
ALTER TABLE diving
MERGE PARTITIONS bcd1, bcd2 INTO PARTITION bcd1bcd2;
So now the placeholder data segments associated with the NOV98 and DEC98
partitions have been exchanged with the data segments associated with the
ACCOUNTS_NOV98 and ACCOUNTS_DEC98 tables.
3. Redefine the ACCOUNTS view.
CREATE OR REPLACE VIEW accounts AS
SELECT * FROM accounts_jan98
UNION ALL
SELECT * FROM accounts_feb98
UNION ALL
...
UNION ALL
SELECT * FROM accounts_new PARTITION (nov98)
UNION ALL
SELECT * FROM accounts_new PARTITION (dec98);
See Also: For more information about the syntax and usage of the statements in this
section, see Oracle8i SQL Reference.
To Move the Time Window in a Historical Table Now consider a specific example. You
have a table, ORDER, which contains 13 months of transactions: a year of historical
data in addition to orders for the current month. There is one partition for each
month; the partitions are named ORDER_yymm, as are the tablespaces in which
they reside.
The ORDER table contains two local indexes, ORDER_IX_ONUM, which is a local,
prefixed, unique index on the order number, and ORDER_IX_SUPP, which is a
local, non-prefixed index on the supplier number. The local index partitions are
named with suffixes that match the underlying table. There is also a global unique
index, ORDER_IX_CUST, for the customer name. ORDER_IX_CUST contains three
partitions, one for each third of the alphabet. So on October 31, 1994, change the
time window on ORDER as follows:
1. Back up the data for the oldest time interval.
ALTER TABLESPACE ORDER_9310 BEGIN BACKUP;
-- copy the tablespace's datafiles using operating system commands
ALTER TABLESPACE ORDER_9310 END BACKUP;
You can ensure that no one inserts new rows into ORDER between the DELETE
step and the DROP PARTITION steps by revoking access privileges from an
APPLICATION role, which is used in all applications. You can also bring down all
user-level applications during a well-defined batch window each night or weekend.
This chapter describes the various aspects of managing tables, and includes the
following topics:
■ Guidelines for Managing Tables
■ Creating Tables
■ Altering Tables
■ Manually Allocating Storage for a Table
■ Dropping Tables
■ Index-Organized Tables
Before attempting tasks described in this chapter, familiarize yourself with the
concepts in Chapter 12, "Guidelines for Managing Schema Objects".
See Also: For information about specifying tablespaces, see "Assigning Tablespace
Quotas to Users" on page 9-3.
If you have such tables in your database, consider the following recommendations:
Separate the Table from Its Indexes Place indexes in separate tablespaces from
other objects, and on separate disks if possible. If you ever need to drop and re-
create an index on a very large table (such as when disabling and enabling a
constraint, or re-creating the table), indexes isolated into separate tablespaces can
often find contiguous space more easily than those in tablespaces that contain other
objects.
Allocate Sufficient Temporary Space If applications that access the data in a very
large table perform large sorts, ensure that enough space is available for large
temporary segments and that users have access to this space (temporary segments
always use the default STORAGE settings for their tablespaces).
Table Restrictions
Before creating tables, make sure you are aware of the following restrictions:
■ Tables containing new object types cannot be imported into a pre-Oracle8
database
■ You cannot move types and extent tables to a different schema when the
original data still exists in the database.
■ You cannot merge an exported table into a pre-existing table having the same
name in a different schema.
Oracle has a limit on the total number of columns that a table (or attributes that an
object type) can have (see the Oracle8i SQL Reference for this limit). When you create a
table that contains user-defined type data, Oracle maps columns of user-defined
type to relational columns for storing the user-defined type data. These "hidden"
relational columns are not visible in a DESCRIBE table statement and are not
returned by a SELECT * statement. Therefore, when you create an object table, or a
relational table with columns of REF, varray, nested table, or object type, the total
number of columns that Oracle actually creates for the table may be more than
those you specify, because Oracle creates hidden columns to store the user-defined
type data. The following formulas determine the total number of columns created
for a table with user-defined type data:
num_columns(object identifier) = 1
num_columns(row_type) = 1
num_columns(REF) = 1, if REF is unscoped
= 1, if the REF is scoped and the object identifier
is system generated and the REF has no
referential constraint
= 2, if the REF is scoped and the object identifier
is system generated and the REF has a
referential constraint
= 1 + number of columns in the primary key,
if the object identifier is primary key based
num_columns(nested_table) = 2
num_columns(varray) = 1
num_columns(object_type) = number of scalar attributes in the object type
+ SUM[num_columns(object_type(i))] i= 1 -> n
+ SUM[num_columns(nested_table(j))] j= 1 -> m
+ SUM[num_columns(varray(k))] k= 1 -> p
+ SUM[num_columns(REF(l))] l= 1 -> q
Example 1
CREATE TYPE physical_address_type AS OBJECT
(no CHAR(4), street CHAR(31), city CHAR(5), state CHAR(3));
CREATE TYPE phone_type AS VARRAY(5) OF CHAR(15);
CREATE TYPE electronic_address_type AS OBJECT
(phones phone_type, fax CHAR(12), email CHAR(31));
CREATE TYPE contact_info_type AS OBJECT
(physical_address physical_address_type,
electronic_address electronic_address_type);
CREATE TYPE employee_type AS OBJECT
(eno NUMBER, ename CHAR(60),
contact_info contact_info_type);
num_columns (employee_object_table) =
num_columns(object_identifier)
+ num_columns(row_type)
+ number of top level object columns in employee_type
+ num_columns(employee_type)
= 1 + 1 + 1 + 9 = 12
Example 2:
CREATE TABLE employee_relational_table (einfo employee_type);
num_columns (employee_relational_table) =
number of top level object columns in employee_type
+ num_columns(employee_type)
= 1 + 9 = 10
Example 3:
CREATE TYPE project_type AS OBJECT (pno NUMBER, pname CHAR(30), budget NUMBER);
Assume a DEPARTMENT table (its definition is not shown here) with two scalar
columns, a REF column MGR that is scoped and has a referential constraint, and a
nested table column PROJECT_SET of project_type. Then:
num_columns(department) =
number of scalar columns
+ num_columns(mgr)
+ num_columns(project_set)
= 2 + 2 + 2 = 6
Creating Tables
To create a new table in your schema, you must have the CREATE TABLE system
privilege. To create a table in another user’s schema, you must have the CREATE
ANY TABLE system privilege. Additionally, the owner of the table must have a
quota for the tablespace that contains the table, or the UNLIMITED TABLESPACE
system privilege.
Create tables using the SQL statement CREATE TABLE. When user SCOTT issues
the following statement, he creates a nonclustered table named EMP in his schema
and stores it in the USERS tablespace:
CREATE TABLE emp (
empno NUMBER(5) PRIMARY KEY,
ename VARCHAR2(15) NOT NULL,
job VARCHAR2(10),
mgr NUMBER(5),
hiredate DATE DEFAULT (sysdate),
sal NUMBER(7,2),
comm NUMBER(7,2),
deptno NUMBER(3) NOT NULL
CONSTRAINT dept_fkey REFERENCES dept)
PCTFREE 10
PCTUSED 40
TABLESPACE users;
Notice that integrity constraints are defined on several columns of the table and that
several storage settings are explicitly specified for the table.
See Also: For more information about system privileges, see Chapter 24, "Managing
User Privileges and Roles". For more information about tablespace quotas, see
Chapter 23, "Managing Users and Resources".
Altering Tables
To alter a table, the table must be contained in your schema, or you must have
either the ALTER object privilege for the table or the ALTER ANY TABLE system
privilege.
A table in an Oracle database can be altered for the following reasons:
■ to add or drop one or more columns to or from the table
■ to add or modify an integrity constraint on a table
■ to modify an existing column’s definition (datatype, length, default value, and
NOT NULL integrity constraint)
■ to modify data block space usage parameters (PCTFREE, PCTUSED)
■ to modify transaction entry settings (INITRANS, MAXTRANS)
■ to modify storage parameters (NEXT, PCTINCREASE)
■ to enable or disable integrity constraints or triggers associated with the table
■ to drop integrity constraints associated with the table
You can increase the length of an existing column. However, you cannot decrease it
unless there are no rows in the table. Furthermore, if you are modifying a table to
increase the length of a column of datatype CHAR, realize that this may be a time
consuming operation and may require substantial additional storage, especially if
the table contains many rows. This is because the CHAR value in each row must be
blank-padded to satisfy the new column length.
When altering the data block space usage parameters (PCTFREE and PCTUSED) of
a table, note that new settings apply to all data blocks used by the table, including
blocks already allocated for the table are not reorganized immediately when the
space usage parameters are altered; they are reorganized as necessary after the
change.
When altering the transaction entry settings (INITRANS, MAXTRANS) of a table,
note that a new setting for INITRANS applies only to data blocks subsequently
allocated for the table, while a new setting for MAXTRANS applies to all blocks
(already and subsequently allocated blocks) of a table.
The storage parameters INITIAL and MINEXTENTS cannot be altered. All new
settings for the other storage parameters (for example, NEXT, PCTINCREASE)
affect only extents subsequently allocated for the table. The size of the next extent
allocated is determined by the current values of NEXT and PCTINCREASE, and is
not based on previous values of these parameters.
You can alter a table using the SQL command ALTER TABLE. The following
statement alters the EMP table:
ALTER TABLE emp
PCTFREE 30
PCTUSED 60;
See Also: See "Managing Object Dependencies" on page 20-23 for information about
how Oracle manages dependencies.
See Also: For information about the ALLOCATE EXTENT option, see Oracle8i
Parallel Server Concepts and Administration.
Dropping Tables
To drop a table, the table must be contained in your schema or you must have the
DROP ANY TABLE system privilege.
To drop a table that is no longer needed, use the SQL command DROP TABLE. The
following statement drops the EMP table:
DROP TABLE emp;
Dropping Columns
Oracle enables you to drop columns from a table, thereby cleaning up unused and
potentially space-consuming columns without having to export/import data and
re-create indexes and constraints.
You can drop columns you no longer need or mark columns to be dropped at a
future time when there is less demand on your system’s resources.
The following statement drops unused columns from table t1:
ALTER TABLE t1 DROP UNUSED COLUMNS;
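To defer the expense of physically removing the data, you can first mark a column
as unused and reclaim the space later. A sketch of this two-step approach (the
column name col1 is hypothetical):

```sql
-- Mark the column unused; this is fast because no data is touched.
ALTER TABLE t1 SET UNUSED (col1);

-- Later, during a period of low demand, physically reclaim the space.
ALTER TABLE t1 DROP UNUSED COLUMNS;
```

A column can also be removed in a single step with ALTER TABLE t1 DROP
COLUMN col1;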
Restrictions
The following restrictions apply to drop column operations:
■ You cannot drop a column from an object type table.
■ You cannot drop columns from nested tables.
■ You cannot drop all columns in a table.
■ You cannot drop a partitioning key column.
■ You cannot drop a column from tables owned by SYS.
■ You cannot drop a parent key column.
See Also: For more information about the syntax used for dropping columns from
tables, see the Oracle8i SQL Reference.
Index-Organized Tables
This section describes aspects of managing index-organized tables, and includes the
following topics:
■ What Are Index-Organized Tables?
■ Creating Index-Organized Tables
■ Maintaining Index-Organized Tables
■ Analyzing Index-Organized Tables
■ Using the ORDER BY Clause with Index-Organized Tables
■ Converting Index-Organized Tables to Regular Tables
[Figure: comparison of a regular table, where a separate index stores key values
and ROWIDs pointing to rows in the table, with an index-organized table, where
the indexed data itself is stored in the index.]
Index-organized tables are suitable for accessing data by way of primary key or any
key that is a valid prefix of the primary key. Also, there is no duplication of key
values because a separate index structure containing the key values and ROWID is
not created. Table 14–1 summarizes the difference between an index-organized table
and a regular table.
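The example discussed below creates an index-organized table named docindex. A
sketch of such a statement (the tablespace names ind_tbs and ovf_tbs match those
used later in this section):

```sql
CREATE TABLE docindex(
  token           CHAR(20),
  doc_id          NUMBER,
  token_frequency NUMBER,
  token_offsets   VARCHAR2(512),
  CONSTRAINT pk_docindex PRIMARY KEY (token, doc_id))
ORGANIZATION INDEX TABLESPACE ind_tbs
PCTTHRESHOLD 20
OVERFLOW TABLESPACE ovf_tbs;
```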
This example shows that the ORGANIZATION INDEX qualifier specifies an index-
organized table, where the key columns and non-key columns reside in an index
defined on columns that designate the primary key (token,doc_id) for the table.
Index-organized tables can store object types. For example, you can create an index-
organized table containing a column of object type mytype (for the purpose of this
example) as follows:
CREATE TABLE iot (c1 NUMBER primary key, c2 mytype)
ORGANIZATION INDEX;
However, you cannot create an index-organized table of object types. For example,
the following statement would not be valid:
CREATE TABLE iot of mytype ORGANIZATION INDEX;
See Also: For more details about the CREATE INDEX statement, see the Oracle8i
SQL Reference.
See Also: For details about the syntax for creating index-organized tables, see the
Oracle8i SQL Reference.
Choosing and Monitoring a Threshold Value You should choose a threshold value that
can accommodate your key columns, as well as the first few non-key columns (if
they are frequently accessed).
After choosing a threshold value, you can monitor tables to verify that the value
you specified is appropriate. You can use the ANALYZE TABLE...LIST CHAINED
ROWS statement to determine the number and identity of rows exceeding the
threshold value.
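For example, assuming the CHAINED_ROWS table has been created (typically by
running the utlchain.sql script shipped with Oracle), you might check the docindex
table as follows:

```sql
-- Identify rows whose head piece exceeds the threshold
ANALYZE TABLE docindex LIST CHAINED ROWS INTO chained_rows;

-- Examine the rows that were reported
SELECT * FROM chained_rows WHERE table_name = 'DOCINDEX';
```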
See Also: For more information about the ANALYZE statement see the Oracle8i
SQL Reference.
Using the INCLUDING clause In addition to specifying PCTTHRESHOLD, you can use
the INCLUDING <column_name> clause to control which non-key columns are
stored with the key columns. Oracle accommodates all non-key columns up to the
column specified in the INCLUDING clause in the index leaf block, provided it
does not exceed the specified threshold. All non-key columns beyond the column
specified in the INCLUDING clause are stored in the overflow area.
For example, you can modify the previous example where an index-organized table
was created so that it always has the token_offsets column value stored in the
overflow area:
CREATE TABLE docindex(
token CHAR(20),
doc_id NUMBER,
token_frequency NUMBER,
token_offsets VARCHAR2(512),
CONSTRAINT pk_docindex PRIMARY KEY (token, doc_id))
ORGANIZATION INDEX TABLESPACE ind_tbs
PCTTHRESHOLD 20
INCLUDING token_frequency
OVERFLOW TABLESPACE ovf_tbs;
Here, only non-key columns up to token_frequency (in this case a single column
only) are stored with the key column values in the index leaf block.
You can enable key compression using the COMPRESS clause while:
■ creating an index-organized table
■ moving an index-organized table
You can also specify the prefix length (as the number of key columns), which
identifies how the key columns are broken into a prefix and suffix entry.
CREATE TABLE iot(i INT, j INT, k INT, l INT, PRIMARY KEY (i, j, k)) ORGANIZATION INDEX
COMPRESS;
For the list of values (1,2,3), (1,2,4), (1,2,7), (1,3,5), (1,3,4), (1,4,4), the repeated
occurrences of (1,2), (1,3) are compressed away.
You can also override the default prefix length used for compression as follows:
CREATE TABLE iot(i INT, j INT, k INT, l INT, PRIMARY KEY (i, j, k)) ORGANIZATION INDEX
COMPRESS 1;
For the list of values (1,2,3), (1,2,4), (1,2,7), (1,3,5), (1,3,4), (1,4,4), the repeated
occurrences of 1 are compressed away.
You can disable compression as follows:
ALTER TABLE A MOVE NOCOMPRESS;
See Also: For more details about key compression, see Oracle8i Concepts and the
Oracle8i SQL Reference.
overflow data segment. For example, you can set the INITRANS of the primary key
index segment to 4 and the INITRANS of the overflow data segment to 6 as follows:
ALTER TABLE docindex INITRANS 4 OVERFLOW INITRANS 6;
You can also alter PCTTHRESHOLD and INCLUDING column values. A new
setting is used to break the row into head and overflow tail pieces during
subsequent operations. For example, the PCTTHRESHOLD and INCLUDING
column values can be altered for the docindex table as follows:
ALTER TABLE docindex PCTTHRESHOLD 15 INCLUDING doc_id;
By setting the INCLUDING column to doc_id, all the columns that follow
doc_id, namely, token_frequency and token_offsets, are stored in the
overflow data segment.
For index-organized tables created without an overflow data segment, you can add
an overflow data segment by using the ADD OVERFLOW clause. For example, if
the docindex table did not have an overflow segment, then you can add an
overflow segment as follows:
ALTER TABLE docindex ADD OVERFLOW TABLESPACE ovf_tbs;
See Also: For details about the ALTER TABLE statement, see the Oracle8i SQL
Reference.
You can move index-organized tables with no overflow data segment online using
the ONLINE option. For example, if the docindex table does not have an overflow
data segment, then you can perform the move online as follows:
ALTER TABLE docindex MOVE ONLINE INITRANS 10;
The following statement rebuilds the index-organized table docindex along with
its overflow data segment:
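A statement of the following form performs the rebuild (the tablespace names are
assumptions carried over from the earlier examples):

```sql
ALTER TABLE docindex MOVE TABLESPACE ind_tbs
  OVERFLOW TABLESPACE ovf_tbs;
```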
And in this last statement, index-organized table iot is moved while the LOB index
and data segment for C2 are rebuilt:
ALTER TABLE iot MOVE LOB (C2) STORE AS (TABLESPACE lob_ts);
See Also: For more information about the MOVE option, see the Oracle8i SQL
Reference.
The ANALYZE statement analyzes both the primary key index segment and the
overflow data segment, and computes logical as well as physical statistics for the
table.
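For example, you might gather statistics for the docindex table as follows:

```sql
-- Compute exact statistics for both segments of the table
ANALYZE TABLE docindex COMPUTE STATISTICS;
```

For very large tables, ANALYZE TABLE docindex ESTIMATE STATISTICS can be
used instead to reduce the cost of gathering statistics.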
■ The logical statistics can be queried using USER_TABLES, ALL_TABLES or
DBA_TABLES.
■ You can query the physical statistics of the primary key index segment using
USER_INDEXES, ALL_INDEXES or DBA_INDEXES (and using the primary
key index name). For example, you can obtain the primary key index segment’s
physical statistics for the table docindex as follows:
SELECT * FROM DBA_INDEXES WHERE INDEX_NAME= ’PK_DOCINDEX’;
■ You can query the physical statistics for the overflow data segment using the
USER_TABLES, ALL_TABLES or DBA_TABLES. You can identify the overflow
entry by searching for IOT_TYPE = ’IOT_OVERFLOW’. For example, you can
obtain overflow data segment physical attributes associated with the docindex
table as follows:
SELECT * FROM DBA_TABLES WHERE IOT_TYPE=’IOT_OVERFLOW’ AND IOT_NAME= ’DOCINDEX’;
The following queries avoid sorting overhead because the data is already sorted on
the primary key:
SELECT * FROM employees ORDER BY dept_id, e_id;
SELECT * FROM employees ORDER BY dept_id;
If, however, you have an ORDER BY clause on a suffix of the primary key column
or non-primary key columns, additional sorting is required (assuming no other
secondary indexes are defined).
SELECT * FROM employees ORDER BY e_id;
SELECT * FROM employees ORDER BY e_name;
■ Import the index-organized table data, making sure IGNORE=y is specified
(this ensures that the "object already exists" error is ignored)
See Also: For more details about using IMPORT/EXPORT, see Oracle8i Utilities.
This chapter describes aspects of view management, and includes the following
topics:
■ Managing Views
■ Managing Sequences
■ Managing Synonyms
Before attempting tasks described in this chapter, familiarize yourself with the
concepts in Chapter 12, "Guidelines for Managing Schema Objects".
Managing Views
A view is a tailored presentation of the data contained in one or more tables (or
other views), and takes the output of a query and treats it as a table. You can think
of a view as a "stored query" or a "virtual table." You can use views in most places
where a table can be used.
This section describes aspects of managing views, and includes the following topics:
■ Creating Views
■ Modifying a Join View
■ Replacing Views
■ Dropping Views
Creating Views
To create a view, you must fulfill the requirements listed below:
■ To create a view in your schema, you must have the CREATE VIEW privilege;
to create a view in another user’s schema, you must have the CREATE ANY
VIEW system privilege. You may acquire these privileges explicitly or via a role.
■ The owner of the view (whether it is you or another user) must have been
explicitly granted privileges to access all objects referenced in the view
definition; the owner cannot have obtained these privileges through roles. Also,
the functionality of the view is dependent on the privileges of the view’s owner.
For example, if the owner of the view has only the INSERT privilege for Scott’s
EMP table, the view can only be used to insert new rows into the EMP table, not
to SELECT, UPDATE, or DELETE rows from it.
■ If the owner of the view intends to grant access to the view to other users, the
owner must have received the object privileges to the base objects with the
GRANT OPTION or the system privileges with the ADMIN OPTION.
You can create views using the SQL command CREATE VIEW. Each view is defined
by a query that references tables, snapshots, or other views. The query that defines a
view cannot contain the FOR UPDATE clause. For example, the following statement
creates a view on a subset of data in the EMP table:
CREATE VIEW sales_staff AS
SELECT empno, ename, deptno
FROM emp
WHERE deptno = 10
WITH CHECK OPTION
CONSTRAINT sales_staff_cnst;
The query that defines the SALES_STAFF view references only rows in department
10. Furthermore, the CHECK OPTION creates the view with the constraint that
INSERT and UPDATE statements issued against the view cannot result in rows that
the view cannot select. For example, the following INSERT statement successfully
inserts a row into the EMP table by means of the SALES_STAFF view, which
contains all rows with department number 10:
INSERT INTO sales_staff VALUES (7584, ’OSTER’, 10);
However, the following INSERT statement is rolled back and returns an error
because it attempts to insert a row for department number 30, which could not be
selected using the SALES_STAFF view:
INSERT INTO sales_staff VALUES (7591, ’WILLIAMS’, 30);
The following statement creates a view that joins data from the EMP and DEPT
tables:
CREATE VIEW division1_staff AS
SELECT ename, empno, job, dname
FROM emp, dept
WHERE emp.deptno IN (10, 30)
AND emp.deptno = dept.deptno;
The DIVISION1_STAFF view joins information from the EMP and DEPT tables. The
CHECK OPTION is not specified in the CREATE VIEW statement for this view.
Views created with errors do not have wildcards expanded. However, if the view is
eventually compiled without errors, wildcards in the defining query are expanded.
By default, a view is not created if its defining query contains errors; to create the
view anyway, specify the FORCE option of the CREATE VIEW statement. When a
view is created with errors, Oracle returns a message indicating the view was
created with errors, and the status of the view is INVALID. If conditions later
change so that the query of an invalid view can be executed, the view can be
recompiled and become valid (usable).
See Also: For information about changing conditions and their impact on views, see
"Managing Object Dependencies" on page 20-23.
With some restrictions, you can modify views that involve joins. If a view is a join
on other nested views, then the other nested views must be mergeable into the top
level view.
The examples in following sections use the EMP and DEPT tables. These examples
work only if you explicitly define the primary and foreign keys in these tables, or
define unique indexes. Following are the appropriately constrained table definitions
for EMP and DEPT:
CREATE TABLE dept (
deptno NUMBER(4) PRIMARY KEY,
dname VARCHAR2(14),
loc VARCHAR2(13));
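A matching EMP definition, with the primary and foreign keys declared explicitly
(a sketch consistent with the EMP table created earlier in this guide):

```sql
CREATE TABLE emp (
  empno    NUMBER(5) PRIMARY KEY,
  ename    VARCHAR2(15) NOT NULL,
  job      VARCHAR2(10),
  mgr      NUMBER(5),
  hiredate DATE,
  sal      NUMBER(7,2),
  comm     NUMBER(7,2),
  deptno   NUMBER(3) NOT NULL
           CONSTRAINT dept_fkey REFERENCES dept);
```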
You could also omit the primary and foreign key constraints listed above, and create
a UNIQUE INDEX on DEPT (DEPTNO) to make the following examples work.
See Also: For more information about mergeable views see Oracle8i Tuning.
Key-Preserved Tables
The concept of a key-preserved table is fundamental to understanding the restrictions
on modifying join views. A table is key preserved if every key of the table can also
be a key of the result of the join. So, a key-preserved table has its keys preserved
through a join.
The key-preserving property of a table is a property of the table’s schema, not of
the actual data in the table. For example, if in the EMP table there was at most one
employee in each department, then DEPT.DEPTNO would be unique in the result
of a join of EMP and DEPT, but DEPT would still not be a key-preserved table.
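The discussion below refers to an EMP_DEPT join view over EMP and DEPT. A
representative definition (a sketch; the exact column list is an assumption) is:

```sql
CREATE VIEW emp_dept AS
  SELECT emp.empno, emp.ename, emp.sal, emp.deptno, dept.dname, dept.loc
  FROM emp, dept
  WHERE emp.deptno = dept.deptno;
```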
In this view, EMP is a key-preserved table, because EMPNO is a key of the EMP
table, and also a key of the result of the join. DEPT is not a key-preserved table,
because although DEPTNO is a key of the DEPT table, it is not a key of the join.
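Consider, for example, an UPDATE that modifies a DEPT column through the
view (a hypothetical statement):

```sql
UPDATE emp_dept
  SET loc = 'BOSTON'
  WHERE ename = 'SMITH';
```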
This statement fails with an ORA-01779 error (’’cannot modify a column which
maps to a non key-preserved table’’), because it attempts to modify the underlying
DEPT table, and the DEPT table is not key preserved in the EMP_DEPT view.
In general, all modifiable columns of a join view must map to columns of a key-
preserved table. If the view is defined using the WITH CHECK OPTION clause,
then all join columns and all columns of repeated tables are not modifiable.
So, for example, if the EMP_DEPT view were defined using WITH CHECK
OPTION, the following UPDATE statement would fail:
UPDATE emp_dept
SET deptno = 10
WHERE ename = ’SMITH’;
DELETE Statements You can delete from a join view provided there is one and only
one key-preserved table in the join.
The following DELETE statement works on the EMP_DEPT view:
DELETE FROM emp_dept
WHERE ename = ’SMITH’;
This DELETE statement on the EMP_DEPT view is legal because it can be translated
to a DELETE operation on the base EMP table, and because the EMP table is the
only key-preserved table in the join.
In the following view, a DELETE operation cannot be performed on the view
because both E1 and E2 are key-preserved tables:
CREATE VIEW emp_emp AS
SELECT e1.ename, e2.empno, deptno
FROM emp e1, emp e2
WHERE e1.empno = e2.empno;
If a view is defined using the WITH CHECK OPTION clause and the key-preserved
table is repeated, then rows cannot be deleted from such a view:
CREATE VIEW emp_mgr AS
SELECT e1.ename, e2.ename mname
FROM emp e1, emp e2
WHERE e1.mgr = e2.empno
WITH CHECK OPTION;
No deletion can be performed on this view because the view involves a self-join of
the table that is key preserved.
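INSERT statements on a join view must likewise target only the key-preserved
table. For example (a sketch paralleling the failing statements shown below,
assuming department 40 exists in DEPT):

```sql
INSERT INTO emp_dept (ename, empno, deptno)
  VALUES ('KURODA', 9010, 40);
```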
This statement works because only one key-preserved base table is being modified
(EMP), and 40 is a valid DEPTNO in the DEPT table (thus satisfying the FOREIGN
KEY integrity constraint on the EMP table).
An INSERT statement like the following would fail for the same reason that such an
UPDATE on the base EMP table would fail: the FOREIGN KEY integrity constraint
on the EMP table is violated.
INSERT INTO emp_dept (ename, empno, deptno)
VALUES (’KURODA’, 9010, 77);
The following INSERT statement would fail with an ORA-01776 error (’’cannot
modify more than one base table through a view’’).
INSERT INTO emp_dept (empno, ename, loc)
VALUES (9010, ’KURODA’, ’BOSTON’);
Replacing Views
To replace a view, you must have all the privileges required to drop and create a
view. If the definition of a view must change, the view must be replaced; you cannot
alter the definition of a view. You can replace views in the following ways:
■ You can drop and re-create the view.
■ You can redefine the view with a CREATE VIEW statement that contains the OR
REPLACE option. The OR REPLACE option replaces the current definition of a
view and preserves the current security authorizations. For example, assume
that you create the SALES_STAFF view as given in the previous example, and
grant several object privileges to roles and other users. However, now you need
to redefine the SALES_STAFF view to change the department number specified
in the WHERE clause. You can replace the current version of the SALES_STAFF
view with the following statement:
CREATE OR REPLACE VIEW sales_staff AS
SELECT empno, ename, deptno
FROM emp
WHERE deptno = 30
WITH CHECK OPTION CONSTRAINT sales_staff_cnst;
Dropping Views
You can drop any view contained in your schema. To drop a view in another user’s
schema, you must have the DROP ANY VIEW system privilege. Drop a view using
the SQL command DROP VIEW. For example, the following statement drops a view
named SALES_STAFF:
DROP VIEW sales_staff;
Managing Sequences
This section describes various aspects of managing sequences, and includes the
following topics:
■ Creating Sequences
■ Altering Sequences
■ Initialization Parameters Affecting Sequences
■ Dropping Sequences
Creating Sequences
To create a sequence in your schema, you must have the CREATE SEQUENCE
system privilege; to create a sequence in another user’s schema, you must have the
CREATE ANY SEQUENCE privilege. Create a sequence using the SQL command
CREATE SEQUENCE. For example, the following statement creates a sequence
used to generate employee numbers for the EMPNO column of the EMP table:
CREATE SEQUENCE emp_sequence
INCREMENT BY 1
START WITH 1
NOMAXVALUE
NOCYCLE
CACHE 10;
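Applications obtain sequence values through the NEXTVAL and CURRVAL
pseudocolumns. For example, using the EMP table created earlier (the ename and
deptno values are illustrative):

```sql
-- Generate the next employee number and use it in an insert
INSERT INTO emp (empno, ename, deptno)
  VALUES (emp_sequence.NEXTVAL, 'SMITH', 10);

-- CURRVAL returns the value most recently generated in this session
SELECT emp_sequence.CURRVAL FROM dual;
```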
The CACHE option pre-allocates a set of sequence numbers and keeps them in
memory so that sequence numbers can be accessed faster. When the last of the
sequence numbers in the cache has been used, Oracle reads another set of numbers
into the cache.
Oracle might skip sequence numbers if you choose to cache a set of sequence
numbers. For example, when an instance abnormally shuts down (for example,
when an instance failure occurs or a SHUTDOWN ABORT statement is issued),
sequence numbers that have been cached but not used are lost. Also, sequence
numbers that have been used but not saved are lost as well. Oracle might also skip
cached sequence numbers after an export and import; see Oracle8i Utilities for
details.
See Also: For information about how the Oracle Parallel Server affects cached
sequence numbers, see Oracle8i Parallel Server Concepts and Administration.
For performance information on caching sequence numbers, see Oracle8i Tuning.
Altering Sequences
To alter a sequence, your schema must contain the sequence, or you must have the
ALTER ANY SEQUENCE system privilege. You can alter a sequence to change any
of the parameters that define how it generates sequence numbers except the
sequence’s starting number. To change the starting point of a sequence, drop the
sequence and then re-create it. Note that when you perform DDL on a sequence, its
cached values are lost.
Alter a sequence using the SQL command ALTER SEQUENCE. For example, the
following statement alters the EMP_SEQUENCE:
ALTER SEQUENCE emp_sequence
INCREMENT BY 10
MAXVALUE 10000
CYCLE
CACHE 20;
Dropping Sequences
You can drop any sequence in your schema. To drop a sequence in another schema,
you must have the DROP ANY SEQUENCE system privilege. If a sequence is no
longer required, you can drop the sequence using the SQL command DROP
SEQUENCE. For example, the following statement drops the ORDER_SEQ
sequence:
DROP SEQUENCE order_seq;
When a sequence is dropped, its definition is removed from the data dictionary.
Any synonyms for the sequence remain, but return an error when referenced.
Managing Synonyms
You can create both public and private synonyms. A public synonym is owned by
the special user group named PUBLIC and is accessible to every user in a database.
A private synonym is contained in the schema of a specific user and available only
to the user and the user’s grantees.
This section includes the following synonym management information:
■ Creating Synonyms
■ Dropping Synonyms
Creating Synonyms
To create a private synonym in your own schema, you must have the CREATE
SYNONYM privilege; to create a private synonym in another user’s schema, you
must have the CREATE ANY SYNONYM privilege. To create a public synonym,
you must have the CREATE PUBLIC SYNONYM system privilege.
Create a synonym using the SQL command CREATE SYNONYM. For example, the
following statement creates a public synonym named PUBLIC_EMP on the EMP
table contained in the schema of JWARD:
CREATE PUBLIC SYNONYM public_emp FOR jward.emp;
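Any user can then reference JWARD’s table through the synonym, for example:

```sql
SELECT * FROM public_emp WHERE deptno = 10;
```

The synonym provides only an alternative name; users still need the appropriate
object privileges on JWARD.EMP for the query to succeed.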
Dropping Synonyms
You can drop any private synonym in your own schema. To drop a private
synonym in another user’s schema, you must have the DROP ANY SYNONYM
system privilege. To drop a public synonym, you must have the DROP PUBLIC
SYNONYM system privilege.
Drop a synonym that is no longer required using the SQL command DROP
SYNONYM. To drop a private synonym, omit the PUBLIC keyword; to drop a
public synonym, include the PUBLIC keyword.
For example, the following statement drops the private synonym named EMP:
DROP SYNONYM emp;
When you drop a synonym, its definition is removed from the data dictionary. All
objects that reference a dropped synonym remain; however, they become invalid
(not usable).
See Also: For more information about how dropping synonyms can affect other
schema objects, see "Managing Object Dependencies" on page 20-23.
This chapter describes various aspects of index management, and includes the
following topics:
■ Guidelines for Managing Indexes
■ Creating Indexes
■ Altering Indexes
■ Monitoring Space Use of Indexes
■ Dropping Indexes
Before attempting tasks described in this chapter, familiarize yourself with the
concepts in Chapter 12, "Guidelines for Managing Schema Objects".
■ You can use the estimated size of an individual index to better manage the disk
space that the index will use. When an index is created, you can set appropriate
storage parameters and improve I/O performance of applications that use the
index.
For example, assume that you estimate the maximum size of an index before
creating it. If you then set the storage parameters when you create the index,
fewer extents will be allocated for the index’s data segment, and all of the
index’s data will be stored in a relatively contiguous section of disk space. This
decreases the time necessary for disk I/O operations involving this index.
The maximum size of a single index entry is approximately one-half the data block
size. As with tables, you can explicitly set storage parameters when creating an
index.
See Also: For specific information about storage parameters, see "Setting Storage
Parameters" on page 12-7.
Coalescing Indexes
When you encounter index fragmentation (due to improper sizing or increased
growth), you can rebuild or coalesce the index. Before you perform either task,
though, weigh the costs and benefits of each option and choose the one that works
best for your situation. Table 16–1 describes costs and benefits associated with
rebuilding and coalescing indexes.
Table 16–1 Costs and Benefits of Rebuilding and Coalescing Indexes

Rebuilding an index:
■ Quickly moves the index to another tablespace.
■ Higher costs; requires more disk space.
■ Creates a new tree, and shrinks its height if applicable.
■ Enables you to quickly change storage and tablespace parameters without
having to drop the original index.

Coalescing an index:
■ Cannot move the index to another tablespace.
■ Lower costs; does not require more disk space.
■ Coalesces leaf blocks within the same branch of the tree.
■ Quickly frees up index leaf blocks for reuse.
In situations where you have B-tree index leaf blocks that can be freed up for reuse,
you can merge those leaf blocks using the following statement:
ALTER INDEX vmoore COALESCE;
Figure 16–1 illustrates the effect of an ALTER INDEX COALESCE on the index
VMOORE. Before performing the operation, the first two leaf blocks are 50% full,
which means you have an opportunity to reduce fragmentation and completely fill
the first block while freeing up the second (in this example, assume that
PCTFREE=0).
Creating Indexes
This section describes how to create an index, and includes the following topics:
■ Creating an Index Associated with a Constraint
■ Creating an Index Explicitly
■ Creating a Function-Based Index
■ Re-creating an Existing Index
■ Creating a Key-Compressed Index
To enable a UNIQUE key or PRIMARY KEY (which creates an associated index), the
owner of the table needs a quota for the tablespace intended to contain the index, or
the UNLIMITED TABLESPACE system privilege.
LOB, LONG, and LONG RAW columns cannot be indexed.
Oracle enforces a UNIQUE key or PRIMARY KEY integrity constraint by creating a
unique index on the unique key or primary key. This index is automatically created
by Oracle when the constraint is enabled; no action is required by the issuer of the
CREATE TABLE or ALTER TABLE statement to create the index. This includes both
when a constraint is defined and enabled, and when a defined but disabled
constraint is enabled.
In general, it is better to create constraints to enforce uniqueness than it is to use the
CREATE UNIQUE INDEX syntax. A constraint’s associated index always assumes
the name of the constraint; you cannot specify a specific name for a constraint
index.
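For example, enabling a primary key in a CREATE TABLE statement implicitly
creates its unique index; a statement of the following general form (the table
definition and tablespace are illustrative) also places that index in a chosen
tablespace:
CREATE TABLE emp (
    empno NUMBER(5) PRIMARY KEY,
    ename VARCHAR2(15))
ENABLE PRIMARY KEY USING INDEX
    TABLESPACE users;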
If you do not specify storage options (such as INITIAL and NEXT) for an index, the
default storage options of the host tablespace are automatically used.
CREATE INDEX emp_ename ON emp(ename)
TABLESPACE users
STORAGE (INITIAL 20K
NEXT 20K
PCTINCREASE 75)
PCTFREE 0;
Notice that several storage settings are explicitly specified for the index.
See Also: For more information about the syntax for creating online indexes, see the
Oracle8i SQL Reference.
In this SQL statement, when Area(geo) is referenced in the WHERE clause, the
optimizer considers using the index area_index.
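The statements being discussed might look like the following sketch (the table,
columns, and the user-defined Area function are illustrative):
CREATE INDEX area_index ON rivers (Area(geo));

SELECT id, geo, Area(geo)
FROM rivers
WHERE Area(geo) > 5000;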
See Also: For more conceptual information about function-based indexes, see
Oracle8i Concepts.
For information about function-based indexing and application development, see
the Oracle8i Application Developer’s Guide - Fundamentals.
Example 1
The following statement creates a function-based index, idx on table emp:
CREATE INDEX idx ON emp (UPPER(emp_name));
SELECT statements can use either an index range scan (the expression is a prefix of
the index) or index full scan (preferable when the index specifies a high degree of
parallelism).
CREATE INDEX idx ON t (a + b * (c - 1), a, b);
SELECT a FROM t WHERE a + b * (c - 1) < 100;
Example 2
You can also use function-based indexes to support linguistic (NLS) sorting.
NLSSORT is a function that returns the sort key for a given string. Thus, if
you want to build an index on name using NLSSORT, issue the following statement:
CREATE INDEX nls_index ON t_table (NLSSORT(name, ’NLS_SORT = German’));
This statement creates the index nls_index on table t_table with the collation
sequence German.
The following query can then return rows ordered by the German collation:
SELECT * FROM t_table ORDER BY name;
Example 3
Another use for function-based indexing is to perform non-case-sensitive searches:
CREATE INDEX case_insensitive_idx ON emp_table (UPPER(empname));
Example 4
This example also illustrates the most common uses of function-based indexing: a
case-insensitive sort and language sort.
CREATE INDEX empi ON emp
(UPPER(ename), NLSSORT(ename, ’NLS_SORT = German’));
The NLSSORT expression directs the sort to use a German linguistic sort key.
See Also: For more information about function-based indexing, see Oracle8i
Concepts and Oracle8i SQL Reference.
The REBUILD clause must immediately follow the index name, and precede any
other options. Also, the REBUILD clause cannot be used in conjunction with the
DEALLOCATE UNUSED clause.
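For example, a rebuild that also moves the index might look like this (the
index and tablespace names are illustrative):
ALTER INDEX emp_ename REBUILD TABLESPACE users;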
See Also: For more information on the ALTER INDEX command and optional
clauses, see the Oracle8i SQL Reference.
Key compression breaks an index key into a prefix and a suffix entry. Compression
is achieved by sharing the prefix entries among all the suffix entries in an index
block. This sharing can lead to huge savings in space, allowing you to store more
keys per index block while improving performance.
Key compression can be useful in the following situations:
■ You have a non-unique index where ROWID is appended to make the key
unique. If you use key compression here, the duplicate key is stored as a
prefix entry on the index block without the ROWID. The remaining rows become
suffix entries consisting of only the ROWID.
■ You have a unique multicolumn index.
You can enable key compression using the COMPRESS clause. You can also specify
the prefix length (as the number of key columns), which identifies how the key
columns are broken into a prefix and suffix entry. For example, the following
statement compresses away duplicate occurrences of a key in the index leaf block:
CREATE INDEX emp_ename ON emp(ename)
TABLESPACE users
COMPRESS 1;
The COMPRESS clause can also be specified during rebuild. For example, during
rebuild you can disable compression as follows:
ALTER INDEX emp_ename REBUILD NOCOMPRESS;
See Also: For more details about the CREATE INDEX statement, see the Oracle8i
SQL Reference.
Altering Indexes
To alter an index, your schema must contain the index or you must have the ALTER
ANY INDEX system privilege. You can alter an index only to change the transaction
entry parameters or to change the storage parameters; you cannot change its
column structure.
Alter the storage parameters of any index, including those created by Oracle to
enforce primary and unique key integrity constraints, using the SQL command
ALTER INDEX. For example, the following statement alters the EMP_ENAME
index:
ALTER INDEX emp_ename
INITRANS 5
MAXTRANS 10;
The percentage of an index’s space usage will vary according to how often index
keys are inserted, updated, or deleted. Develop a history of an index’s average
efficiency of space usage by performing the following sequence of operations
several times:
■ analyzing statistics
■ validating the index
■ checking PCT_USED
■ dropping and re-creating (or coalescing) the index
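One pass of this sequence might be performed as follows (the index name is
illustrative; the INDEX_STATS view holds results only for the most recently
validated index in your session):
ANALYZE INDEX emp_ename VALIDATE STRUCTURE;

SELECT pct_used
FROM index_stats
WHERE name = ’EMP_ENAME’;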
When you find that an index’s space usage drops below its average, you can
condense the index’s space by dropping the index and rebuilding it, or coalescing it.
See Also: For information about analyzing an index’s structure, see "Analyzing
Tables, Indexes, and Clusters" on page 20-3.
Dropping Indexes
To drop an index, the index must be contained in your schema, or you must have
the DROP ANY INDEX system privilege.
You might want to drop an index for any of the following reasons:
■ The index is no longer required.
■ The index is not providing anticipated performance improvements for queries
issued against the associated table. (For example, the table might be very small,
or there might be many rows in the table but very few index entries.)
■ Applications do not use the index to query the data.
■ The index has become invalid and must be dropped before being rebuilt.
■ The index has become too fragmented and must be dropped before being
rebuilt.
When you drop an index, all extents of the index’s segment are returned to the
containing tablespace and become available for other objects in the tablespace.
How you drop an index depends on whether you created the index explicitly with a
CREATE INDEX statement, or implicitly by defining a key constraint on a table.
You cannot drop only the index associated with an enabled UNIQUE key or
PRIMARY KEY constraint. To drop a constraint’s associated index, you must
disable or drop the constraint itself.
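For example, to remove the index enforcing a table’s primary key, disable the
constraint itself (the table name is illustrative):
ALTER TABLE emp DISABLE PRIMARY KEY;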
An explicitly created index, in contrast, can be dropped directly with DROP
INDEX:
DROP INDEX emp_ename;
See Also: For information about analyzing indexes, see "Analyzing Tables, Indexes,
and Clusters" on page 20-3.
For more information about dropping a constraint’s associated index, see
"Managing Integrity Constraints" on page 20-13.
This chapter describes aspects of managing clusters (including clustered tables and
indexes), and includes the following topics:
■ Guidelines for Managing Clusters
■ Creating Clusters
■ Altering Clusters
■ Dropping Clusters
Before attempting tasks described in this chapter, familiarize yourself with the
concepts in Chapter 12, "Guidelines for Managing Schema Objects".
An illustration at this point shows the EMP and DEPT tables stored in a
cluster: the row for each department (its DEPTNO, DNAME, and LOC values, such
as 20, ADMIN, NEW YORK) is stored in the same data blocks as that
department’s employee rows (EMPNO, ENAME, and so on).
The following sections describe guidelines to consider when managing clusters,
and include the following topics:
■ Choose Appropriate Tables for the Cluster
■ Choose Appropriate Columns for the Cluster Key
■ Specify Data Block Space Use
■ Specify the Space Required by an Average Cluster Key and Its Associated Rows
■ Specify the Location of Each Cluster and Cluster Index Rows
■ Estimate Cluster Size and Set Storage Parameters
See Also: For more information about clusters, see Oracle8i Concepts.
Specify the Space Required by an Average Cluster Key and Its Associated Rows
The CREATE CLUSTER command has an optional argument, SIZE, which is the
estimated number of bytes required by an average cluster key and its associated
rows. Oracle uses the SIZE parameter when performing the following tasks:
■ estimating the number of cluster keys (and associated rows) that can fit in a
clustered data block
■ limiting the number of cluster keys placed in a clustered data block; this
maximizes the storage efficiency of keys within a cluster
SIZE does not limit the space that can be used by a given cluster key. For example, if
SIZE is set such that two cluster keys can fit in one data block, any amount of the
available data block space can still be used by either of the cluster keys.
By default, Oracle stores only one cluster key and its associated rows in each data
block of the cluster’s data segment. Although block size can vary from one
operating system to the next, the rule of one key per block is maintained as
clustered tables are imported to other databases on other machines.
If all the rows for a given cluster key value cannot fit in one block, the blocks are
chained together to speed access to all the values with the given key. The cluster
index points to the beginning of the chain of blocks, each of which contains the
cluster key value and associated rows. If the cluster SIZE is such that more than one
key fits in a block, blocks can belong to more than one chain.
The cluster and its cluster index can be created in different tablespaces. In fact,
creating a cluster and its index in different tablespaces that are stored on different
storage devices allows table data and index data to be retrieved simultaneously
with minimal disk contention.
Creating Clusters
This section describes how to create clusters, and includes the following topics:
■ Creating Clustered Tables
■ Creating Cluster Indexes
To create a cluster in your schema, you must have the CREATE CLUSTER system
privilege and a quota for the tablespace intended to contain the cluster or the
UNLIMITED TABLESPACE system privilege.
To create a cluster in another user’s schema, you must have the CREATE ANY
CLUSTER system privilege and the owner must have a quota for the tablespace
intended to contain the cluster or the UNLIMITED TABLESPACE system privilege.
You can create a cluster using the SQL CREATE CLUSTER statement. The following
statement creates a cluster named EMP_DEPT, which stores the EMP and DEPT
tables, clustered by the DEPTNO column:
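A statement of the following general form accomplishes this (the SIZE and
storage values shown are illustrative):
CREATE CLUSTER emp_dept (deptno NUMBER(3))
SIZE 600
TABLESPACE users
STORAGE (INITIAL 200K
NEXT 300K
MINEXTENTS 2
MAXEXTENTS 20
PCTINCREASE 33);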
Note: You can specify the schema for a clustered table in the
CREATE TABLE statement. A clustered table can be in a different
schema than the schema containing the cluster. Also, the names of
the columns need not match, but their structure must.
In either case, you must also have either a quota for the tablespace intended to
contain the cluster index, or the UNLIMITED TABLESPACE system privilege.
A cluster index must be created before any rows can be inserted into any clustered
table. The following statement creates a cluster index for the EMP_DEPT cluster:
CREATE INDEX emp_dept_index
ON CLUSTER emp_dept
INITRANS 2
MAXTRANS 5
TABLESPACE users
STORAGE (INITIAL 50K
NEXT 50K
MINEXTENTS 2
MAXEXTENTS 10
PCTINCREASE 33)
PCTFREE 5;
Several storage settings are explicitly specified for the cluster and cluster index.
See Also: See Chapter 24, "Managing User Privileges and Roles" for more
information about system privileges, and Chapter 23, "Managing Users and
Resources" for information about tablespace quotas.
Altering Clusters
You can alter an existing cluster to change the following settings:
■ data block space usage parameters (PCTFREE, PCTUSED)
■ the average cluster key size (SIZE)
■ transaction entry settings (INITRANS, MAXTRANS)
■ storage parameters (NEXT, PCTINCREASE)
To alter a cluster, your schema must contain the cluster or you must have the
ALTER ANY CLUSTER system privilege.
When you alter data block space usage parameters (PCTFREE and PCTUSED) or
the cluster size parameter (SIZE) of a cluster, the new settings apply to all data
blocks used by the cluster, including blocks already allocated and blocks
subsequently allocated for the cluster. Blocks already allocated for the table are
reorganized when necessary (not immediately).
When you alter the transaction entry settings (INITRANS, MAXTRANS) of a
cluster, a new setting for INITRANS applies only to data blocks subsequently
allocated for the cluster, while a new setting for MAXTRANS applies to all blocks
(already and subsequently allocated blocks) of a cluster.
The storage parameters INITIAL and MINEXTENTS cannot be altered. All new
settings for the other storage parameters affect only extents subsequently allocated
for the cluster.
To alter a cluster, use the ALTER CLUSTER statement. The following statement
alters the EMP_DEPT cluster:
ALTER CLUSTER emp_dept
PCTFREE 30
PCTUSED 60;
See Also: For more information about the CLUSTER parameter in the ALTER
CLUSTER statement, see Oracle8i Parallel Server Concepts and Administration.
Dropping Clusters
This section describes aspects of dropping clusters, and includes the following
topics:
■ Dropping Clustered Tables
■ Dropping Cluster Indexes
A cluster can be dropped if the tables within the cluster are no longer necessary.
When a cluster is dropped, so are the tables within the cluster and the
corresponding cluster index; all extents belonging to both the cluster’s data segment
and the index segment of the cluster index are returned to the containing tablespace
and become available for other segments within the tablespace.
Note: When you drop a single table from a cluster, Oracle deletes
each row of the table individually. To maximize efficiency when
you intend to drop an entire cluster, drop the cluster including all
tables by using the DROP CLUSTER statement with the
INCLUDING TABLES option. Drop an individual table from a
cluster (using the DROP TABLE statement) only if you want the
rest of the cluster to remain.
See Also: For information about dropping a table, see "Dropping Tables" on
page 14-12.
If the cluster contains one or more clustered tables and you intend to drop the tables
as well, add the INCLUDING TABLES option of the DROP CLUSTER statement, as
follows:
DROP CLUSTER emp_dept INCLUDING TABLES;
If the INCLUDING TABLES option is not included and the cluster contains tables,
an error is returned.
If one or more tables in a cluster contain primary or unique keys that are referenced
by FOREIGN KEY constraints of tables outside the cluster, the cluster cannot be
dropped unless the dependent FOREIGN KEY constraints are also dropped. This
can be easily done using the CASCADE CONSTRAINTS option of the DROP
CLUSTER statement, as shown in the following example:
DROP CLUSTER emp_dept INCLUDING TABLES CASCADE CONSTRAINTS;
Oracle returns an error if you do not use the CASCADE CONSTRAINTS option and
constraints exist.
See Also: For information about dropping an index, see "Dropping Indexes" on
page 16-15.
This chapter describes how to manage hash clusters, and includes the following
topics:
■ Guidelines for Managing Hash Clusters
■ Altering Hash Clusters
■ Dropping Hash Clusters
Before attempting tasks described in this chapter, familiarize yourself
with the concepts in Chapter 12, "Guidelines for Managing Schema Objects".
Advantages of Hashing
If you opt to use indexing rather than hashing, consider whether to store a table
individually or as part of a cluster.
Hashing is most advantageous when you have the following conditions:
■ Most queries are equality queries on the cluster key:
SELECT . . . WHERE cluster_key = . . . ;
In such cases, the cluster key in the equality condition is hashed, and the
corresponding hash key is usually found with a single read. In comparison, for
an indexed table the key value must first be found in the index (usually several
reads), and then the row is read from the table (another read).
■ The tables in the hash cluster are primarily static in size so that you can
determine the number of rows and amount of space required for the tables in
the cluster. If tables in a hash cluster require more space than the initial
allocation for the cluster, performance degradation can be substantial because
overflow blocks are required.
Disadvantages of Hashing
Hashing is not advantageous in the following situations:
■ Most queries on the table retrieve rows over a range of cluster key values. For
example, in full table scans or queries like the following, a hash function cannot
be used to determine the location of specific hash keys; instead, the equivalent
of a full table scan must be done to fetch the rows for the query:
SELECT . . . WHERE cluster_key < . . . ;
With an index, key values are ordered in the index, so cluster key values that
satisfy the WHERE clause of a query can be found with relatively few I/Os.
■ The table is not static, but is continually growing. If a table grows without
limit, the space required over the life of the table (its cluster) cannot be
predetermined.
■ Applications frequently perform full-table scans on the table and the table is
sparsely populated. A full-table scan in this situation takes longer under
hashing.
■ You cannot afford to pre-allocate the space that the hash cluster will eventually
need.
See Also: For more information about creating hash clusters and specifying hash
functions see the Oracle8i SQL Reference.
For information about hash functions and specifying user-defined hash functions,
see Oracle8i Concepts.
Even if you decide to use hashing, a table can still have separate indexes on any
columns, including the cluster key. See the Oracle8i Application Developer’s Guide -
Fundamentals for additional recommendations.
In this example, only one hash key can be assigned per data block. Therefore, the
initial space required for the hash cluster is at least 100*2K or 200K. The settings for
the storage parameters do not account for this requirement. Therefore, an initial
extent of 100K and a second extent of 150K are allocated to the hash cluster.
Alternatively, assume the HASH parameters are specified as follows:
SIZE 500 HASHKEYS 100
In this case, three hash keys are assigned to each data block. Therefore, the initial
space required for the hash cluster is at least 34*2K or 68K. The initial settings for
the storage parameters are sufficient for this requirement (an initial extent of 100K is
allocated to the hash cluster).
The following sections explain setting the parameters of the CREATE CLUSTER
command specific to hash clusters.
See Also: For additional information about creating tables in a cluster, guidelines
for setting other parameters of the CREATE CLUSTER command, and the privileges
required to create a hash cluster, see "Creating Clusters" on page 17-6.
Oracle rounds the HASHKEYS value up to the nearest prime number, so this cluster
has a maximum of 503 hash key values, each of size 512 bytes:
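The single-table hash cluster being described might be created with a statement
of this general form (the cluster and column names are illustrative):
CREATE CLUSTER personnel (deptno NUMBER)
SIZE 512
SINGLE TABLE
HASHKEYS 500;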
Note: The single table option is valid only for hash clusters.
HASHKEYS must also be specified.
See Also: For more information about the CREATE CLUSTER statement, see the
Oracle8i SQL Reference.
Setting HASH IS
Specify the HASH IS parameter only if the cluster key is a single column of the
NUMBER datatype, and contains uniformly distributed integers. If the above
conditions apply, you can distribute rows in the cluster so that each unique cluster
key value hashes, with no collisions, to a unique hash value. If these conditions
do not apply, omit this option so that the internal hash function is used.
Setting SIZE
SIZE should be set to the average amount of space required to hold all rows for any
given hash key. Therefore, to properly determine SIZE, you must be aware of the
characteristics of your data:
■ If the hash cluster is to contain only a single table and the hash key values of the
rows in that table are unique (one row per value), SIZE can be set to the average
row size in the cluster.
■ If the hash cluster is to contain multiple tables, SIZE can be set to the average
amount of space required to hold all rows associated with a representative hash
value.
■ If the hash cluster does not use the internal hash function (you specified
HASH IS) and you expect few or no collisions, you can set SIZE to the estimated
value; no collisions occur and space is used as efficiently as possible.
Overestimating the value of SIZE increases the amount of unused space in the
cluster. If space efficiency is more important than the performance of data retrieval,
disregard the above adjustments and use the estimated value for SIZE.
Setting HASHKEYS
For maximum distribution of rows in a hash cluster, Oracle rounds the HASHKEYS
value up to the nearest prime number.
Example 1 You decide to load the EMP table into a hash cluster. Most queries
retrieve employee records by their employee number. You estimate
that the maximum number of rows in the EMP table at any given
time is 10000 and that the average row size is 55 bytes.
In this case, EMPNO should be the cluster key. Since this column
contains integers that are unique, the internal hash function can be
bypassed. SIZE can be set to the average row size, 55 bytes; note
that 34 hash keys are assigned per data block. HASHKEYS can be
set to the number of rows in the table, 10000, rounded up to the
next highest prime number, 10007:
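The resulting cluster definition might look like the following sketch (the
cluster name and column precision are illustrative, and other clauses are
omitted):
CREATE CLUSTER emp_cluster (empno NUMBER(5))
SIZE 55
HASH IS empno
HASHKEYS 10000;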
Example 2 You decide to load the EMP table into a hash cluster, clustering
employees by department, with an average of 10 employees (of
average row size 55 bytes) per department.
In this case, DEPTNO should be the cluster key. Since this column
contains integers that are uniformly distributed, the internal hash
function can be bypassed. A pre-estimated SIZE (the average
amount of space required to hold all rows per department) is 55
bytes * 10, or 550 bytes. Using this value for SIZE, only three hash
keys can be assigned per data block. If you expect some collisions
and want maximum performance of data retrieval, slightly alter
your estimated SIZE to prevent collisions from requiring overflow
blocks. By adjusting SIZE by 12%, to 620 bytes (see previous section
about setting SIZE for clarification), only three hash keys are
assigned per data block, leaving more space for rows from expected
collisions.
The implications for altering a hash cluster are identical to those for altering an
index cluster. However, note that the SIZE, HASHKEYS, and HASH IS parameters
cannot be specified in an ALTER CLUSTER statement. You must re-create the
cluster to change these parameters and then copy the data from the original cluster.
See Also: For more information about altering an index cluster, see "Altering
Clusters" on page 17-8.
A table in a hash cluster is dropped using the SQL DROP TABLE statement. The
implications of dropping hash clusters and tables in hash clusters are the same
as for index clusters.
See Also: For more information about dropping clusters, see "Dropping Clusters"
on page 17-10.
Oracle provides different methods for detecting and correcting data block
corruption. One method is to drop and re-create an object after the corruption is
detected; however, this is not always possible or desirable. If data block corruption
is limited to a subset of rows, another option is to rebuild the table by selecting all
data except for the corrupt rows.
Yet another way to manage data block corruption is to use the DBMS_REPAIR
package. You can use DBMS_REPAIR to detect and repair corrupt blocks in tables
and indexes. Using this approach, you can address corruptions where possible, and
also continue to use objects while you attempt to rebuild or repair them.
DBMS_REPAIR uses the following approach to address corruptions:
■ Step 1: Detect and Report Corruptions
■ Step 2: Evaluate the Costs and Benefits of Using DBMS_REPAIR
■ Step 3: Make Objects Usable
■ Step 4: Repair Corruptions and Rebuild Lost Data
Free blocks may exist that should be on a freelist but are not. You can address
this by running the rebuild_freelists procedure.
Indexes and tables may be out of sync. You can address this by first executing
the dump_orphan_keys procedure (to obtain information from the keys that
might be useful in rebuilding corrupted data). Then issue the ALTER INDEX
REBUILD ONLINE statement to get the table and its indexes back in sync.
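For example (the index name is illustrative):
ALTER INDEX emp_ename REBUILD ONLINE;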
4. If repair involves loss of data, can this data be retrieved?
You can retrieve data from the index when a data block is marked corrupt. The
dump_orphan_keys procedures can help you retrieve this information. Of
course, retrieving data in this manner depends on the amount of redundancy
between the indexes and the table.
A similar issue occurs when selecting rows that are chained. Essentially, a query of
the same row may or may not access the corruption—thereby giving different
results.
DBMS_REPAIR Procedures
This section contains detailed descriptions of the DBMS_REPAIR procedures.
check_object
The check_object procedure checks the specified object and populates the
repair table with information about corruptions and repair directives.
Validation consists of block-checking all blocks in the object. You can
optionally specify a range, partition name, or subpartition name to check only
a portion of an object.
procedure check_object(schema_name IN varchar2,
object_name IN varchar2,
partition_name IN varchar2 DEFAULT NULL,
object_type IN binary_integer DEFAULT TABLE_OBJECT,
repair_table_name IN varchar2 DEFAULT ’REPAIR_TABLE’,
flags IN binary_integer DEFAULT NULL,
relative_fno IN binary_integer DEFAULT NULL,
block_start IN binary_integer DEFAULT NULL,
block_end IN binary_integer DEFAULT NULL,
corrupt_count OUT binary_integer)
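A call to this procedure might be issued from PL/SQL as follows (the schema,
object, and repair table names are illustrative, and the repair table must
already exist):
DECLARE
num_corrupt BINARY_INTEGER;
BEGIN
DBMS_REPAIR.CHECK_OBJECT(
schema_name => ’SCOTT’,
object_name => ’EMP’,
repair_table_name => ’REPAIR_TABLE’,
corrupt_count => num_corrupt);
DBMS_OUTPUT.PUT_LINE(’corrupt blocks: ’ || TO_CHAR(num_corrupt));
END;
/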
fix_corrupt_blocks
Use this procedure to fix the corrupt blocks in specified objects based on
information in the repair table that was previously generated by the
check_object procedure. Prior to effecting any change to a block, the block is
checked to ensure the block is still corrupt. Corrupt blocks are repaired by marking
the block software corrupt. When a repair is effected, the associated row in the
repair table is updated with a fix timestamp.
procedure fix_corrupt_blocks(
schema_name IN varchar2,
object_name IN varchar2,
partition_name IN varchar2 DEFAULT NULL,
object_type IN binary_integer DEFAULT TABLE_OBJECT,
repair_table_name IN varchar2 DEFAULT ’REPAIR_TABLE’,
flags IN binary_integer DEFAULT NULL,
fix_count OUT binary_integer)
dump_orphan_keys
Reports on index entries that point to rows in corrupt data blocks. For each such
index entry encountered, a row is inserted into the specified orphan table.
If the repair table is specified, then any corrupt blocks associated with the base table
are handled in addition to all data blocks that are marked software corrupt.
Otherwise, only blocks that are marked corrupt are handled.
This information may be useful for rebuilding lost rows in the table and for
diagnostic purposes.
procedure dump_orphan_keys(
schema_name IN varchar2,
object_name IN varchar2,
rebuild_freelists
Rebuilds the freelists for the specified object. All free blocks are placed on the
master freelist. All other freelists are zeroed. If the object has multiple freelist
groups, then the free blocks are distributed among all freelists, allocating to the
different groups in round-robin fashion.
procedure rebuild_freelists(
schema_name IN varchar2,
object_name IN varchar2,
partition_name IN varchar2 DEFAULT NULL,
skip_corrupt_blocks
Enables or disables the skipping of corrupt blocks during index and table scans of
the specified object. When the object is a table, skip applies to the table and its
indexes. When the object is a cluster, it applies to all of the tables in the cluster, and
their respective indexes.
procedure skip_corrupt_blocks(
schema_name IN varchar2,
object_name IN varchar2,
partition_name IN varchar2 DEFAULT NULL,
object_type IN binary_integer DEFAULT TABLE_OBJECT,
flags IN binary_integer DEFAULT SKIP_FLAG);
admin_tables
Provides administrative functions for repair and orphan key tables.
procedure admin_tables(
table_name IN varchar2,
table_type IN binary_integer,
action IN binary_integer,
tablespace IN varchar2 DEFAULT NULL);
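For example, a call of the following form creates a repair table (the
tablespace name is illustrative; the DBMS_REPAIR constants shown are supplied
by the package):
BEGIN
DBMS_REPAIR.ADMIN_TABLES(
table_name => ’REPAIR_TABLE’,
table_type => DBMS_REPAIR.REPAIR_TABLE_OBJECT,
action => DBMS_REPAIR.CREATE_ACTION,
tablespace => ’USERS’);
END;
/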
DBMS_REPAIR Exceptions
This chapter describes general schema object management issues that fall outside
the scope of Chapters 11 through 19, and includes the following topics:
■ Creating Multiple Tables and Views in a Single Operation
■ Renaming Schema Objects
■ Analyzing Tables, Indexes, and Clusters
■ Truncating Tables and Clusters
■ Enabling and Disabling Triggers
■ Managing Integrity Constraints
■ Managing Object Dependencies
■ Managing Object Name Resolution
■ Changing Storage Parameters for the Data Dictionary
■ Displaying Information About Schema Objects
The CREATE SCHEMA statement does not support Oracle extensions to the ANSI
CREATE TABLE and CREATE VIEW commands; this includes the STORAGE
clause.
If you drop and re-create an object, all privileges granted for that object are lost.
Privileges must be re-granted when the object is re-created. Alternatively, a table,
view, sequence, or a private synonym of a table, view, or sequence can be renamed
using the RENAME statement. When using the RENAME statement, grants made
for the object are carried forward for the new name. For example, the following
statement renames the SALES_STAFF view:
RENAME sales_staff TO dept_30;
A table, index, or cluster can be analyzed to validate the structure of the object. For
example, in rare cases such as hardware or other system failures, an index can
become corrupted and not perform correctly. When validating the index, you can
confirm that every entry in the index points to the correct row of the associated
table. If a schema object is corrupt, you can drop and re-create it.
A table or cluster can be analyzed to collect information about chained rows of the
table or cluster. These results are useful in determining whether you have enough
room for updates to rows. For example, this information can show whether
PCTFREE is set appropriately for the table or cluster.
See Also: For more information about analyzing tables, indexes, and clusters for
performance statistics and the optimizer, see Oracle8i Tuning.
For information about analyzing index-organized tables, see Chapter 14, "Managing
Tables".
See Also: For more information about the SQL statement ANALYZE, see the
Oracle8i SQL Reference.
For more information about the data dictionary views containing statistics, see the
Oracle8i Reference.
■ number of rows
■ number of blocks that have been used *
■ number of blocks never used
■ average available free space
■ number of chained rows
■ average row length
■ number of distinct values per column
■ the second smallest value per column *
■ the second largest value per column *
Cluster Statistics The only statistic that can be gathered for a cluster is the average
cluster key chain length; this statistic can be estimated or computed. Statistics for
tables in a cluster and all indexes associated with the cluster’s tables (including the
cluster key index) are automatically gathered when the cluster is analyzed for
statistics.
Computing Statistics
The following statement computes statistics for the EMP table:
ANALYZE TABLE emp COMPUTE STATISTICS;
The following statement estimates statistics for the EMP table, using the default
statistical sample of 1064 rows:
ANALYZE TABLE emp ESTIMATE STATISTICS;
To specify the statistical sample that Oracle should use, include the SAMPLE option
with the ESTIMATE STATISTICS option. You can specify an integer that indicates
either a number of rows or index values, or a percentage of the rows or index values
in the table. The following statements show examples of each option:
ANALYZE TABLE emp
ESTIMATE STATISTICS
SAMPLE 2000 ROWS;
ANALYZE TABLE emp
ESTIMATE STATISTICS
SAMPLE 33 PERCENT;
In either case, if you specify a percentage greater than 50, or a number of rows or
index values that is greater than 50% of those in the object, Oracle computes the
exact statistics, rather than estimating.
You can validate an object and all related objects by including the CASCADE
option. The following statement validates the EMP table and all associated indexes:
ANALYZE TABLE emp VALIDATE STRUCTURE CASCADE;
See Also: The name and location of the UTLCHAIN.SQL script are operating
system-dependent; see your operating system-specific Oracle documentation.
For more information about reducing the number of chained and migrated rows in
a table or cluster, see Oracle8i Tuning.
3. Using TRUNCATE
You can delete all rows of the table using the SQL statement TRUNCATE. For
example, the following statement truncates the EMP table:
TRUNCATE TABLE emp;
Using DELETE
If there are many rows present in a table or cluster when using the DELETE
command, significant system resources are consumed as the rows are deleted. For
example, CPU time, redo log space, and rollback segment space from the table and
any associated indexes require resources. Also, as each row is deleted, triggers can
be fired. The space previously allocated to the resulting empty table or cluster
remains associated with that object. With DELETE you can choose which rows to
delete, whereas TRUNCATE and DROP wipe out the entire object.
Using TRUNCATE
Using the TRUNCATE statement provides a fast, efficient method for deleting all
rows from a table or cluster. A TRUNCATE statement does not generate any
rollback information and it commits immediately; it is a DDL statement and cannot
be rolled back. A TRUNCATE statement does not affect any structures associated
with the table being truncated (constraints and triggers) or authorizations. A
TRUNCATE statement also lets you specify whether the space currently allocated
for the table is returned to the containing tablespace after truncation.
You can truncate any table or cluster in your own schema. Also, any user
that has the DROP ANY TABLE system privilege can truncate a table or cluster in
any schema.
Before truncating a table or clustered table containing a parent key, all referencing
foreign keys in different tables must be disabled. A self-referential constraint does
not have to be disabled.
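For example, before truncating the DEPT table you might disable the referencing foreign key in the EMP table and re-enable it afterward; the constraint name EFK is illustrative:

```sql
ALTER TABLE emp DISABLE CONSTRAINT efk;
TRUNCATE TABLE dept;
ALTER TABLE emp ENABLE CONSTRAINT efk;
```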
As a TRUNCATE statement deletes rows from a table, triggers associated with the
table are not fired. Also, a TRUNCATE statement does not generate any audit
information corresponding to DELETE statements if auditing is enabled. Instead, a
single audit record is generated for the TRUNCATE statement being issued.
A hash cluster cannot be truncated. Also, tables within a hash or index cluster
cannot be individually truncated; truncation of an index cluster deletes all rows
from all tables in the cluster. If all the rows must be deleted from an individual
clustered table, use the DELETE command or drop and re-create the table.
The REUSE STORAGE or DROP STORAGE options of the TRUNCATE command
control whether space currently allocated for a table or cluster is returned to the
containing tablespace after truncation. The default option, DROP STORAGE,
reduces the number of extents allocated to the resulting table to the original setting
for MINEXTENTS. Freed extents are then returned to the system and can be used
by other objects.
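For example, the following statement truncates the EMP table and, because DROP STORAGE is the default, returns all extents beyond MINEXTENTS to the system (the keyword is shown explicitly for clarity):

```sql
TRUNCATE TABLE emp DROP STORAGE;
```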
Alternatively, the REUSE STORAGE option specifies that all space currently
allocated for the table or cluster remains allocated to it. For example, the following
statement truncates the EMP_DEPT cluster, leaving all extents previously allocated
for the cluster available for subsequent inserts and deletes:
TRUNCATE CLUSTER emp_dept REUSE STORAGE;
The REUSE or DROP STORAGE option also applies to any associated indexes.
When a table or cluster is truncated, all associated indexes are also truncated. Also
note that the storage parameters for a truncated table, cluster, or associated indexes
are not changed as a result of the truncation.
See Also: See Chapter 25, "Auditing Database Use", for information about auditing.
disabled: A disabled trigger does not execute its trigger body, even if a
triggering statement is issued and the trigger restriction (if
any) evaluates to TRUE.
To enable or disable triggers using the ALTER TABLE statement, you must own the
table, have the ALTER object privilege for the table, or have the ALTER ANY
TABLE system privilege. To enable or disable an individual trigger using the
ALTER TRIGGER statement, you must own the trigger or have the ALTER ANY
TRIGGER system privilege.
See Also: For more details about triggers, see Oracle8i Concepts.
For details about creating triggers, see Oracle8i SQL Reference.
Enabling Triggers
You enable a disabled trigger using the ALTER TRIGGER statement with the
ENABLE option. To enable the disabled trigger named REORDER on the
INVENTORY table, enter the following statement:
ALTER TRIGGER reorder ENABLE;
To enable all triggers defined for a specific table, use the ALTER TABLE statement
with the ENABLE ALL TRIGGERS option. To enable all triggers defined for the
INVENTORY table, enter the following statement:
ALTER TABLE inventory
ENABLE ALL TRIGGERS;
Disabling Triggers
You may want to temporarily disable a trigger if one of the following conditions is
true:
■ An object that the trigger references is not available.
■ You have to perform a large data load and want it to proceed quickly without
firing triggers.
■ You are loading data into the table to which the trigger applies.
You disable a trigger using the ALTER TRIGGER statement with the DISABLE
option. To disable the trigger REORDER on the INVENTORY table, enter the
following statement:
ALTER TRIGGER reorder DISABLE;
You can disable all triggers associated with a table at the same time using the
ALTER TABLE statement with the DISABLE ALL TRIGGERS option. For example,
to disable all triggers defined for the INVENTORY table, enter the following
statement:
ALTER TABLE inventory
DISABLE ALL TRIGGERS;
enable novalidate: A table with enable novalidate constraints can contain invalid
data, but it is not possible to add new invalid data to it.
This state is useful as an intermediate state before validating
the data in the table using enable validate: it ensures that no
new data can violate the constraint, and no locks are held
when taking a constraint from enable novalidate to enable
validate.
This state is also useful when you want to enable the
constraint without checking existing rows for exceptions, for
example, after a data warehouse load.
Disabling Constraints
To enforce the rules defined by integrity constraints, the constraints should always
be enabled. However, you may wish to temporarily disable the integrity constraints
of a table for the following performance reasons:
■ when loading large amounts of data into a table
■ when performing batch operations that make massive changes to a table (for
example, changing every employee’s number by adding 1000 to the existing
number)
■ when importing or exporting one table at a time
In all three cases, temporarily disabling integrity constraints can improve the
performance of the operation, especially in data warehouse configurations.
It is possible to enter data that violates a constraint while that constraint is disabled.
Thus, you should always enable the constraint after completing any of the
operations listed above.
Enabling Constraints
While a constraint is enabled, no row violating the constraint can be inserted into
the table. However, while the constraint is disabled such a row can be inserted; this
row is known as an exception to the constraint. If the constraint is in the enable
novalidate state, violations resulting from data entered while the constraint was
disabled remain. The rows that violate the constraint must be either updated or
deleted in order for the constraint to be put in the validated state.
You can examine all rows violating constraints in the EXCEPTIONS table.
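For example, a disabled constraint can be moved to the validated state in two steps; the constraint name EFK is illustrative:

```sql
-- Stop new violations immediately; existing rows are not yet checked
ALTER TABLE emp ENABLE NOVALIDATE CONSTRAINT efk;
-- After correcting any exceptions, check all existing rows
ALTER TABLE emp ENABLE VALIDATE CONSTRAINT efk;
```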
See Also: For details about the EXCEPTIONS table, see Oracle8i Reference.
See Also: For more details about the SET CONSTRAINTS statement, see the
Oracle8i SQL Reference.
For general information about constraints, see Oracle8i Concepts.
Select Appropriate Data You may wish to defer constraint checks on UNIQUE and
FOREIGN keys if the data you are working with has any of the following
characteristics:
■ tables are snapshots
Ensure Constraints Are Created Deferrable After you have identified and selected the
appropriate tables, make sure the tables’ FOREIGN, UNIQUE and PRIMARY key
constraints are created deferrable. You can do so by issuing a statement similar to
the following:
CREATE TABLE dept (
deptno NUMBER PRIMARY KEY,
dname VARCHAR2 (30)
);
CREATE TABLE emp (
empno NUMBER,
ename VARCHAR2 (30),
deptno NUMBER,
CONSTRAINT epk PRIMARY KEY (empno) DEFERRABLE,
CONSTRAINT efk FOREIGN KEY (deptno)
REFERENCES dept (deptno) DEFERRABLE);
INSERT INTO dept VALUES (10, ’Accounting’);
INSERT INTO dept VALUES (20, ’SALES’);
INSERT INTO emp VALUES (1, ’Corleone’, 10);
INSERT INTO emp VALUES (2, ’Costanza’, 20);
COMMIT;
Set All Constraints Deferred Within the application being used to manipulate the data,
you must set all constraints deferred before you actually begin processing any data.
Use the following statement to set all deferrable constraints deferred:
SET CONSTRAINTS ALL DEFERRED;
Check the Commit (Optional) You can check for constraint violations before committing
by issuing the SET CONSTRAINTS ALL IMMEDIATE statement just before issuing
the COMMIT. If there are any problems with a constraint, this statement will fail
and the constraint causing the error will be identified. If you commit while
constraints are violated, the transaction will be rolled back and you will receive an
error message.
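The check can be sketched as follows:

```sql
SET CONSTRAINTS ALL IMMEDIATE; -- fails here if any deferred constraint is violated
COMMIT;
```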
Note: Deferrable UNIQUE and PRIMARY keys must all use
nonunique indexes.
■ ENABLE [VALIDATE]
■ DISABLE [NOVALIDATE]
■ ENABLE NOVALIDATE
■ DISABLE VALIDATE
If none of these clauses is specified in a constraint’s definition, Oracle
automatically enables and validates the constraint.
An ALTER TABLE statement that defines and disables an integrity constraint never
fails because of rows of the table that violate the integrity constraint. The definition
of the constraint is allowed because its rule is not enforced.
See Also: For more information about constraint exceptions, see "Reporting
Constraint Exceptions" on page 20-21.
To enable a UNIQUE key or PRIMARY KEY, which creates an associated index, the
owner of the table also needs a quota for the tablespace intended to contain the
index, or the UNLIMITED TABLESPACE system privilege.
To disable or drop a UNIQUE key or PRIMARY KEY constraint and all dependent
FOREIGN KEY constraints in a single step, use the CASCADE option of the
DISABLE or DROP clauses. For example, the following statement disables a
PRIMARY KEY constraint and any FOREIGN KEY constraints that depend on it:
ALTER TABLE dept
DISABLE PRIMARY KEY CASCADE;
Dropping UNIQUE key and PRIMARY KEY constraints drops the associated
unique indexes. Also, if FOREIGN KEYs reference a UNIQUE or PRIMARY KEY,
you must include the CASCADE CONSTRAINTS clause in the DROP statement, or
you cannot drop the constraint.
The following statement attempts to validate the PRIMARY KEY of the DEPT table,
and if exceptions exist, information is inserted into a table named EXCEPTIONS:
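The statement might take the following form, assuming the EXCEPTIONS table has been created by running the UTLEXCPT.SQL script:

```sql
ALTER TABLE dept
ENABLE PRIMARY KEY
EXCEPTIONS INTO exceptions;
```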
If duplicate primary key values exist in the DEPT table and the name of the
PRIMARY KEY constraint on DEPT is SYS_C00610, the following rows might be
placed in the table EXCEPTIONS by the previous statement:
A more informative query would be to join the rows in an exception report table
and the master table to list the actual rows that violate a specific constraint, as
shown in the following example:
SELECT deptno, dname, loc FROM dept, exceptions
WHERE exceptions.constraint = ’SYS_C00610’
AND dept.rowid = exceptions.row_id;
All rows that violate a constraint must be either updated or deleted from the table
containing the constraint. When updating exceptions, you must change the value
violating the constraint to a value consistent with the constraint or a null. After the
row in the master table is updated or deleted, the corresponding rows for the
exception in the exception report table should be deleted to avoid confusion with
later exception reports. The statements that update the master table and the
exception report table should be in the same transaction to ensure transaction
consistency.
To correct the exceptions in the previous examples, you might issue the following
transaction:
UPDATE dept SET deptno = 20 WHERE dname = ’RESEARCH’;
DELETE FROM exceptions WHERE constraint = ’SYS_C00610’;
COMMIT;
When managing exceptions, the goal is to eliminate all exceptions in your exception
report table.
Note: While you are correcting current exceptions for a table with
the constraint disabled, other users may issue statements creating
new exceptions. You can avoid this by placing the constraint in the enable
novalidate state before you start eliminating exceptions.
See Also: The exact name and location of the UTLEXCPT.SQL script is operating
system specific. For more information, see your operating system-specific Oracle
documentation.
Oracle automatically recompiles an invalid view or PL/SQL program unit the next
time it is used. In addition, a user can force Oracle to recompile a view or program
unit using the appropriate SQL command with the COMPILE parameter. Forced
compilations are most often used to test for errors when a dependent view or
program unit is invalid, but is not currently being used. In these cases, automatic
recompilation would not otherwise occur until the view or program unit was
a. In the current schema, Oracle searches for an object whose name matches
the first piece of the object name. If it does not find such an object, it
continues with Step b.
b. If no schema object is found in the current schema, Oracle searches for a
public synonym that matches the first piece of the name. If it does not find
one, it continues with Step c.
c. If no public synonym is found, Oracle searches for a schema whose name
matches the first piece of the object name. If it finds one, it returns to Step b,
now using the second piece of the name as the object to find in the qualified
schema. If the second piece does not correspond to an object in the
previously qualified schema, or there is no second piece, Oracle returns
an error.
If no schema is found in Step c, the object cannot be qualified and Oracle
returns an error.
2. A schema object has been qualified. Any remaining pieces of the name must
match a valid part of the found object. For example, if SCOTT.EMP.DEPTNO is
the name, SCOTT is qualified as a schema, EMP is qualified as a table, and
DEPTNO must correspond to a column (because EMP is a table). If EMP is
qualified as a package, DEPTNO must correspond to a public constant,
variable, procedure, or function of that package.
When global object names are used in a distributed database, either explicitly or
indirectly within a synonym, the local Oracle resolves the reference locally. For
example, it resolves a synonym to a remote table’s global object name. The partially
resolved statement is shipped to the remote database, and the remote Oracle
completes the resolution of the object as described here.
you cannot create new objects, even though the tablespace intended to hold the
objects seems to have sufficient space. To remedy this situation, you can change the
storage parameters of the underlying data dictionary tables to allow them to be
allocated more extents, in the same way that you can change the storage settings for
user-created segments. For example, you can adjust the values of NEXT or
PCTINCREASE for the data dictionary table.
Of all of the data dictionary segments, the following are the most likely to require
change:
For the clustered tables, you must change the storage settings for the cluster, not for
the table.
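As an illustrative sketch only (AUD$, the audit trail table, is one dictionary table that commonly grows; change only the segments you have identified as problematic):

```sql
ALTER TABLE sys.aud$
STORAGE (NEXT 100K PCTINCREASE 0);
```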
OBJECT_NAME OBJECT_TYPE
------------------------- -------------------
EMP_DEPT CLUSTER
EMP TABLE
DEPT TABLE
EMP_DEPT_INDEX INDEX
PUBLIC_EMP SYNONYM
EMP_MGR VIEW
Notice that not all columns have user-specified defaults. These columns
automatically have NULL as the default.
Notice that the RS1 rollback segment consists of two extents, both 10K, while the
SYSTEM rollback segment consists of three equally sized extents of 50K.
■ The segment has the maximum number of extents allowed by the data block
size, which is operating system specific.
The following query returns the names, owners, and tablespaces of all segments
that fit any of the above criteria:
SELECT seg.owner, seg.segment_name,
seg.segment_type, seg.tablespace_name,
DECODE(seg.segment_type,
’TABLE’, t.next_extent,
’CLUSTER’, c.next_extent,
’INDEX’, i.next_extent,
’ROLLBACK’, r.next_extent)
FROM sys.dba_segments seg,
sys.dba_tables t,
sys.dba_clusters c,
sys.dba_indexes i,
sys.dba_rollback_segs r
Note: When you use this query, replace data_block_size with the
data block size for your system.
Once you have identified a segment that cannot allocate additional extents, you can
solve the problem in either of two ways, depending on its cause:
■ If the tablespace is full, add datafiles to the tablespace.
■ If the segment has too many extents, and you cannot increase MAXEXTENTS
for the segment, perform the following steps: first, export the data in the
segment; second, drop and re-create the segment, giving it a larger INITIAL
setting so that it does not need to allocate so many extents; and third, import
the data back into the segment.
This chapter describes how to manage rollback segments, and includes the
following topics:
■ Guidelines for Managing Rollback Segments
■ Creating Rollback Segments
■ Specifying Storage Parameters for Rollback Segments
■ Taking Rollback Segments Online and Offline
■ Explicitly Assigning a Transaction to a Rollback Segment
■ Dropping Rollback Segments
■ Monitoring Rollback Segment Information
See Also: If you are using Oracle with the Parallel Server option, see Oracle8i
Parallel Server Concepts and Administration.
The instance acquires all the rollback segments listed in this parameter, even if more
than TRANSACTIONS/TRANSACTIONS_PER_ROLLBACK_SEGMENT segments
are specified. The rollback segments can be either private or public.
You should tell users about the different sets of rollback segments that correspond
to the different types of transactions. Often, it is not beneficial to assign a transaction
explicitly to a specific rollback segment; however, you can assign an atypical
transaction to an appropriate rollback segment created for such transactions. For
example, you can assign a transaction that contains a large batch job to a large
rollback segment.
When a mix of transactions is not prevalent, each rollback segment should be 10%
of the size of the database’s largest table because most SQL statements affect 10% or
less of a table; therefore, a rollback segment of this size should be sufficient to store
the actions performed by most SQL statements.
Generally speaking, you should set a high MAXEXTENTS for rollback segments;
this allows a rollback segment to allocate subsequent extents as it needs them.
where:
T = total initial rollback segment size, in bytes
n = number of extents initially allocated
s = calculated size, in bytes, of each extent initially allocated
After s is calculated, create the rollback segment and specify the storage parameters
INITIAL and NEXT as s, and MINEXTENTS as n. PCTINCREASE cannot be
specified for rollback segments and therefore defaults to 0. Also, if the size s of an
extent is not an exact multiple of the data block size, it is rounded up to the next
multiple. For example, if T is 750K and n is 15, then s is 750K/15 = 50K, which gives
INITIAL 50K, NEXT 50K, and MINEXTENTS 15.
Size, High Water: the most space ever allocated for the rollback
segment, in bytes
Size, Optimal: the OPTIMAL size of the rollback segment, in
bytes
Average Size, Shrunk: the average size of the space Oracle truncated
from the rollback segment, in bytes
Assuming that an instance has equally sized rollback segments with comparably
sized extents, the OPTIMAL parameter for a given rollback segment should be set
slightly higher than Average Size, Active. Table 21–1 provides additional
information on how to interpret the statistics given in this monitor.
See Also: Once a rollback segment is created, it is not available for use by
transactions of any instance until it is brought online. See "Taking Rollback
Segments Online and Offline" on page 21-10 for more information.
■ The minimum number of extents and the number of extents initially allocated
when the segment is created is 15.
■ The maximum number of extents that the rollback segment can allocate,
including the initial extent, is 100.
The following statement creates a rollback segment with these characteristics:
CREATE PUBLIC ROLLBACK SEGMENT data1_rs
TABLESPACE users
STORAGE (
INITIAL 50K
NEXT 50K
OPTIMAL 750K
MINEXTENTS 15
MAXEXTENTS 100);
You can alter the settings for the SYSTEM rollback segment, including the
OPTIMAL parameter, just as you can alter those of any rollback segment.
See Also: For guidance on setting sizes and storage parameters (including
OPTIMAL) for rollback segments, see "Guidelines for Managing Rollback
Segments" on page 21-2.
identify unlimited format for rollback segments, extents for that segment must have
a minimum of 4 data blocks. Thus, a limited format rollback segment cannot be
converted to unlimited format if it has less than 4 data blocks in any extent. If you
want to convert from limited to unlimited format and have less than 4 data blocks
in an extent, your only choice is to drop and re-create the rollback segment.
■ You want to drop a rollback segment, but cannot because transactions are
currently using it. To prevent the rollback segment from being used, you can
take it offline before dropping it.
You might later want to bring an offline rollback segment back online so that
transactions can use it. When a rollback segment is created, it is initially offline, and
you must explicitly bring a newly created rollback segment online before it can be
used by an instance’s transactions. You can bring an offline rollback segment online
via any instance accessing the database that contains the rollback segment.
After you bring a rollback segment online, its status in the data dictionary view
DBA_ROLLBACK_SEGS is ONLINE.
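For example, assuming the DATA1_RS rollback segment created earlier, the following statements bring it online and take it offline:

```sql
ALTER ROLLBACK SEGMENT data1_rs ONLINE;
ALTER ROLLBACK SEGMENT data1_rs OFFLINE;
```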
See Also: For information about the ROLLBACK_SEGMENTS and
DBA_ROLLBACK_SEGS parameters, see the Oracle8i Reference.
To see a query for checking rollback segment state, see "Displaying Rollback
Segment Information" on page 21-14.
If you take offline a rollback segment that does not contain active rollback entries,
Oracle immediately takes the segment offline and changes its status to OFFLINE.
In contrast, if you try to take a rollback segment that contains rollback data for
active transactions (local, remote, or distributed) offline, Oracle makes the rollback
segment unavailable to future transactions and takes it offline after all the active
transactions using the rollback segment complete. Until the transactions complete,
the rollback segment cannot be brought online by any instance other than the one
that was trying to take it offline. During this period, the rollback segment’s status in
the view DBA_ROLLBACK_SEGS remains ONLINE; however, the rollback
segment’s status in the view V$ROLLSTAT is PENDING OFFLINE.
The instance that tried to take a rollback segment offline and caused it to change to
PENDING OFFLINE can bring it back online at any time; if the rollback segment is
brought back online, it will function normally.
See Also: For information on viewing rollback segment status, see "Displaying
Rollback Segment Information" on page 21-14.
For information about the views DBA_ROLLBACK_SEGS and V$ROLLSTAT, see
the Oracle8i Reference.
After the transaction is committed, Oracle will automatically assign the next
transaction to any available rollback segment unless the new transaction is
explicitly assigned to a specific rollback segment by the user.
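For example, to assign a large batch transaction to the DATA1_RS rollback segment created earlier, issue the following as the first statement of the transaction:

```sql
SET TRANSACTION USE ROLLBACK SEGMENT data1_rs;
```

The assignment lasts only for the current transaction; after the next COMMIT or ROLLBACK, Oracle again chooses a rollback segment automatically.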
To drop a rollback segment, you must have the DROP ROLLBACK SEGMENT
system privilege.
If a rollback segment is offline, you can drop it using the SQL statement DROP
ROLLBACK SEGMENT.
The following statement drops the DATA1_RS rollback segment:
DROP PUBLIC ROLLBACK SEGMENT data1_rs;
If you use the DROP ROLLBACK SEGMENT statement, indicate the correct type of
rollback segment to drop, public or private, by including or omitting the PUBLIC
keyword.
After a rollback segment is dropped, its status changes to INVALID. The next time a
rollback segment is created, it takes the row vacated by a dropped rollback segment,
if one is available, and the dropped rollback segment’s row no longer appears in the
DBA_ROLLBACK_SEGS view.
See Also: For more information about the view DBA_ROLLBACK_SEGS, see the
Oracle8i Reference.
In addition, the following data dictionary views contain information about the
segments of a database, including rollback segments:
■ USER_SEGMENTS
■ DBA_SEGMENTS
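For example, the following query (a sketch) lists the rollback segments recorded in DBA_SEGMENTS:

```sql
SELECT segment_name, tablespace_name, bytes, extents
FROM sys.dba_segments
WHERE segment_type = 'ROLLBACK';
```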
This chapter provides guidelines for developing security policies for database
operation, and includes the following topics:
■ System Security Policy
■ Data Security Policy
■ User Security Policy
■ Password Management Policy
■ Auditing Policy
User Authentication
Database users can be authenticated (verified as the correct person) by Oracle using
the host operating system, network services, or the database. Generally, user
authentication via the host operating system is preferred for the following reasons:
■ Users can connect to Oracle faster and more conveniently without specifying a
username or password.
■ Control over user authorization is centralized in the operating system: Oracle
need not store or manage user passwords and usernames if the operating
system and database usernames correspond.
■ User entries in the database and operating system audit trails correspond.
User authentication by the database is normally used when the host operating
system cannot support user authentication.
See Also: For more information about network authentication, see Oracle8i
Distributed Database Systems.
For more information about user authentication, see "Creating Users" on page 23-11.
Overall data security should be based on the sensitivity of data. If information is not
sensitive, then the data security policy can be more lax. However, if data is
sensitive, a security policy should be developed to maintain tight control over
access to objects.
Password Security
If user authentication is managed by the database, security administrators should
develop a password security policy to maintain database access security. For
example, database users should be required to change their passwords at regular
intervals, and of course, when their passwords are revealed to others. By forcing a
user to modify passwords in such situations, unauthorized database access can be
reduced.
Privilege Management
Security administrators should consider issues related to privilege management for
all types of users. For example, in a database with many usernames, it may be
beneficial to use roles (which are named groups of related privileges that you grant
to users or other roles) to manage the privileges available to users. Alternatively, in
a database with a handful of usernames, it may be easier to grant privileges
explicitly to users and avoid the use of roles.
Security administrators managing a database with many users, applications, or
objects should take advantage of the benefits offered by roles. Roles greatly simplify
the task of privilege management in complicated environments.
End-User Security
Security administrators must also define a policy for end-user security. If a database
is large with many users, the security administrator can decide which groups of
users to categorize, create user roles for these user groups, grant the necessary
privileges or application roles to each user role, and assign the user roles to the
users. To account for exceptions, the security administrator must also decide what
privileges must be explicitly granted to individual users.
(Figure: users are granted the ACCTS_PAY and ACCTS_REC roles; each role
contains the application role and the privileges needed to execute the ACCTS_PAY
or ACCTS_REC application, respectively.)
Administrator Security
Security administrators should have a policy addressing administrator security. For
example, when the database is large and there are several types of database
administrators, the security administrator may decide to group related
administrative privileges into several administrative roles. The administrative roles
can then be granted to appropriate administrator users. Alternatively, when the
database is small and has only a few administrators, it may be more convenient to
create one administrative role and grant it to all administrators.
Although some database systems use only one of these options, other systems could
mix them. For example, application developers can be allowed to create new stored
procedures and packages, but not allowed to create tables or indexes. A security
administrator’s decision regarding this issue should be based on the following:
■ the control desired over a database’s space usage
■ the control desired over the access paths to schema objects
■ the database used to develop applications—if a test database is being used for
application development, a more liberal development policy would be in order
Account Locking
When a particular user exceeds a designated number of failed login attempts, the
server automatically locks that user’s account. DBAs specify the permissible
number of failed login attempts using the CREATE PROFILE statement. DBAs also
specify the amount of time accounts remain locked.
In the following example, the maximum number of failed login attempts for the
user ASHWINI is 4, and the amount of time the account will remain locked is 30
days; the account will unlock automatically after the passage of 30 days.
CREATE PROFILE prof LIMIT
FAILED_LOGIN_ATTEMPTS 4
PASSWORD_LOCK_TIME 30;
ALTER USER ashwini PROFILE prof;
If the DBA does not specify a time interval for unlocking the account,
PASSWORD_LOCK_TIME reverts to a default value. If the DBA specifies
PASSWORD_LOCK_TIME as UNLIMITED, then the system security officer must
explicitly unlock the account. Thus, the amount of time an account remains locked
depends upon how the DBA configures the resource profile assigned to the user.
After a user successfully logs into an account, that user’s unsuccessful login attempt
count, if there is one, is reset to 0.
The security officer can also explicitly lock user accounts. When this occurs, the
account is not unlocked automatically; only the security officer can unlock
the account.
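The security officer locks and unlocks accounts with the ALTER USER statement. A sketch, using the ASHWINI account from the example above:

```sql
-- Explicitly lock the account; it will not unlock automatically.
ALTER USER ashwini ACCOUNT LOCK;

-- Only the security officer unlocks an explicitly locked account.
ALTER USER ashwini ACCOUNT UNLOCK;
```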
See Also: For more information about the CREATE PROFILE statement, see the
Oracle8i SQL Reference.
Password Aging and Expiration
DBAs use the CREATE PROFILE statement to specify a maximum lifetime for
passwords. When the specified amount of time passes and the password expires,
the user or DBA must change the password. The following statement indicates that
ASHWINI can use the same password for 90 days before it expires:
CREATE PROFILE prof LIMIT
FAILED_LOGIN_ATTEMPTS 4
PASSWORD_LOCK_TIME 30
PASSWORD_LIFE_TIME 90;
ALTER USER ashwini PROFILE prof;
DBAs can also specify a grace period using the CREATE PROFILE statement. Users
enter the grace period upon the first attempt to log in to a database account after
their password has expired. During the grace period, a warning message appears
each time users try to log in to their accounts, and continues to appear until the
grace period expires. Users must change the password within the grace period. If
the password is not changed within the grace period, the account expires and no
further logins to that account are allowed until the password is changed.
Figure 22–2 shows the chronology of the password lifetime and grace period.
For example, the lifetime of a password is 60 days, and the grace period is 3 days. If
the user tries to log in on any day after the 60th day (this could be the 70th day,
100th day, or another; the point here is that it is the first login attempt after the
password lifetime), that user receives a warning message indicating that the
password is about to expire in 3 days. If the user does not change the password
within three days from the first day of the grace period, the user’s account expires.
The following statement indicates that the user must change the password within 3
days of its expiration:
CREATE PROFILE prof LIMIT
FAILED_LOGIN_ATTEMPTS 4
PASSWORD_LOCK_TIME 30
PASSWORD_GRACE_TIME 3;
ALTER USER ashwini PROFILE prof;
The security officer can also explicitly expire the account. This is particularly useful
for new accounts.
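Explicitly expiring a password is also done with the ALTER USER statement. A sketch, assuming a newly created account SCOTT:

```sql
-- Force the user to choose a new password at the next login.
ALTER USER scott PASSWORD EXPIRE;
```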
See Also: For more information about the CREATE PROFILE statement, see
Oracle8i SQL Reference.
Password History
DBAs use the CREATE PROFILE statement to specify a time interval during which
users cannot reuse a password.
In the following statement, the DBA indicates that the user cannot reuse her
password for 60 days.
CREATE PROFILE prof LIMIT
PASSWORD_REUSE_TIME 60
PASSWORD_REUSE_MAX UNLIMITED;
The next statement shows that the number of password changes the user must
make before her current password can be used again is 3.
CREATE PROFILE prof LIMIT
PASSWORD_REUSE_MAX 3
PASSWORD_REUSE_TIME UNLIMITED;
A password complexity verification routine must use the following specification:
routine_name (
userid_parameter IN VARCHAR(30),
password_parameter IN VARCHAR(30),
old_password_parameter IN VARCHAR(30)
)
RETURN BOOLEAN
Password Verification Routine: Sample Script The following sample script sets default
password resource limits and provides minimum checking of password complexity.
You can use this sample script as a model when developing your own complexity
checks for a new password.
This script sets the default password resource parameters, and must be run to
enable the password features. However, you can change the default resource
parameters if necessary.
The default password complexity function performs the following minimum
complexity checks:
■ The password satisfies minimum length requirements.
■ The password is not the username. You can modify this function based on your
requirements.
This function must be created in SYS schema, and you must connect sys/
<password> as sysdba before running the script.
CREATE OR REPLACE FUNCTION verify_function
(username varchar2,
password varchar2,
old_password varchar2)
RETURN boolean IS
n boolean;
m integer;
differ integer;
isdigit boolean;
ischar boolean;
ispunct boolean;
digitarray varchar2(20);
punctarray varchar2(25);
chararray varchar2(52);
BEGIN
digitarray:= '0123456789';
chararray:= 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
punctarray:='!"#$%&()''*+,-/:;<=>?_';
--Check if the password contains at least one letter, one digit and one
--punctuation mark.
--1. Check for the digit
--You may delete 1. and replace with 2. or 3.
isdigit:=FALSE;
m := length(password);
FOR i IN 1..10 LOOP
FOR j IN 1..m LOOP
IF substr(password,j,1) = substr(digitarray,i,1) THEN
isdigit:=TRUE;
GOTO findchar;
END IF;
END LOOP;
END LOOP;
IF isdigit = FALSE THEN
raise_application_error(-20003, 'Password should contain at least one
digit, one character and one punctuation');
END IF;
--2. Check for the character
<<findchar>>
ischar:=FALSE;
FOR i IN 1..length(chararray) LOOP
FOR j IN 1..m LOOP
IF substr(password,j,1) = substr(chararray,i,1) THEN
ischar:=TRUE;
GOTO findpunct;
END IF;
END LOOP;
END LOOP;
IF ischar = FALSE THEN
raise_application_error(-20003, 'Password should contain at least one digit, one
character and one punctuation');
END IF;
--3. Check for the punctuation
<<findpunct>>
ispunct:=FALSE;
FOR i IN 1..length(punctarray) LOOP
FOR j IN 1..m LOOP
IF substr(password,j,1) = substr(punctarray,i,1) THEN
ispunct:=TRUE;
GOTO endsearch;
END IF;
END LOOP;
END LOOP;
IF ispunct = FALSE THEN
raise_application_error(-20003, 'Password should contain at least one
digit, one character and one punctuation');
END IF;
<<endsearch>>
--Check if the password differs from the previous password by at least
--3 letters
IF old_password = '' THEN
raise_application_error(-20004, 'Old password is null');
END IF;
differ := length(old_password) - length(password);
IF abs(differ) < 3 THEN
IF length(password) < length(old_password) THEN
m := length(password);
ELSE
m := length(old_password);
END IF;
differ := abs(differ);
FOR i IN 1..m LOOP
IF substr(password,i,1) != substr(old_password,i,1) THEN
differ := differ + 1;
END IF;
END LOOP;
IF differ < 3 THEN
raise_application_error(-20004, 'Password should differ by at least 3 characters');
END IF;
END IF;
--Everything is fine; return TRUE
RETURN(TRUE);
END;
Auditing Policy
Security administrators should define a policy for the auditing procedures of each
database. You may, for example, decide to have database auditing disabled unless
questionable activities are suspected. When auditing is required, the security
administrator must decide at what level of detail to audit the database; usually,
general system auditing is followed by more specific types of auditing after the
origins of suspicious activity are determined.
This chapter describes how to control access to an Oracle database, and includes the
following topics:
■ Session and User Licensing
■ User Authentication
■ Oracle Users
■ Managing Resources with Profiles
■ Listing Information About Database Users and Profiles
See Also: For guidelines on establishing security policies for users and profiles, see
Chapter 22, "Establishing Security Policies".
Privileges and roles control the access a user has to a database and the schema
objects within the database. For information on privileges and roles, see Chapter 24,
"Managing User Privileges and Roles".
See Also: For information about the initial installation procedure, see Chapter 2,
"Creating an Oracle Database".
Connecting Privileges
After your instance’s session limit is reached, only users with RESTRICTED
SESSION privilege (usually DBAs) can connect to the database. When a user with
RESTRICTED SESSION privileges connects, Oracle sends the user a message
indicating that the maximum limit has been reached, and writes a message to the
ALERT file. When the maximum is reached, you should connect only to terminate
unneeded processes. Do not raise the licensing limits unless you have upgraded
your Oracle license agreement.
In addition to setting a maximum concurrent session limit, you can set a warning
limit on the number of concurrent sessions. After this limit is reached, additional
users can continue to connect (up to the maximum limit); however, Oracle writes an
appropriate message to the ALERT file with each connection, and sends each
connecting user who has the RESTRICTED SESSION privilege a warning indicating
that the maximum is about to be reached.
If a user is connecting with administrator privileges, the limits still apply; however,
Oracle enforces the limit after the first statement the user executes.
In addition to enforcing the concurrent usage limits, Oracle tracks the highest
number of concurrent sessions for each instance. You can use this "high water
mark" to help assess whether your licensed session limits are appropriate.
See Also: For information about terminating sessions, see "Terminating Sessions" on
page 4-15.
For information about Oracle licensing limit upgrades, see "Viewing Licensing
Limits and Current Values" on page 23-6.
See Also: For more information about setting and changing limits in a parallel
server environment, see Oracle8i Parallel Server Concepts and Administration.
If you set the maximum limit (LICENSE_MAX_SESSIONS), you are not required to set a warning limit
(LICENSE_SESSIONS_WARNING). However, using the warning limit makes the
maximum limit easier to manage, because it gives you advance notice that your site
is nearing maximum use.
To change either limit while the database is running, use the ALTER SYSTEM
statement with the SET option. The following statement changes the maximum
limit to 100 concurrent sessions:
ALTER SYSTEM SET LICENSE_MAX_SESSIONS = 100;
The following statement changes both the warning limit and the maximum limit:
ALTER SYSTEM
SET LICENSE_MAX_SESSIONS = 64
LICENSE_SESSIONS_WARNING = 54;
If you change either limit to a value lower than the current number of sessions, the
current sessions remain; however, the new limit is enforced for all future
connections until the instance is shut down. To change the limit permanently,
change the value of the appropriate parameter in the parameter file.
To change the concurrent usage limits while the database is running, you must have
the ALTER SYSTEM privilege. Also, to connect to an instance after the instance’s
maximum limit has been reached, you must have the RESTRICTED SESSION
privilege.
If the database contains more than LICENSE_MAX_USERS when you start it,
Oracle returns a warning and writes an appropriate message in the ALERT file. You
cannot create additional users until you drop enough users to fall below the limit
or until you upgrade your Oracle license.
If you try to change the limit to a value lower than the current number of users,
Oracle returns an error and continues to use the old limit. If you successfully
change the limit, the new limit remains in effect until you shut down the instance;
to change the limit permanently, change the value of LICENSE_MAX_USERS in the
parameter file.
To change the maximum named users limit, you must have the ALTER SYSTEM
privilege.
WARNING: Do not raise the named user limit unless you have
appropriately upgraded your Oracle license. Contact your Oracle
representative for more information.
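The named user limit can be changed while the database is running, much like the session limits; the value below is illustrative:

```sql
-- Requires the ALTER SYSTEM privilege. The new value lasts until
-- shutdown unless LICENSE_MAX_USERS is also changed in the parameter file.
ALTER SYSTEM SET LICENSE_MAX_USERS = 200;
```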
To see the current session counts, the session high water mark, and the named user
limit, query the V$LICENSE view:
SELECT sessions_max s_max,
sessions_warning s_warning,
sessions_current s_current,
sessions_highwater s_high,
users_max
FROM v$license;
In addition, Oracle writes the session high water mark to the database’s ALERT file
when the database shuts down, so you can check for it there.
To see the current number of named users defined in the database, use the
following query:
SELECT COUNT(*) FROM dba_users;
COUNT(*)
----------
174
User Authentication
This section describes aspects of authenticating users, and includes the following
topics:
■ Database Authentication
■ External Authentication
■ Enterprise Authentication
Depending on how you want user identities to be authenticated, there are three
ways to define users before they are allowed to create a database session:
1. You can configure Oracle so that it performs both identification and
authentication of users. This is called database authentication.
2. You can configure Oracle so that it performs only the identification of users
(leaving authentication up to the operating system or network service). This is
called external authentication.
3. You can configure Oracle so that it performs only the identification of users,
with authentication performed centrally by the Oracle Security Service. This is
called enterprise authentication.
Database Authentication
If you choose database authentication for a user, administration of the user account,
password, and authentication of that user is performed entirely by Oracle. To have
Oracle authenticate a user, specify a password for the user when you create or alter
the user. Users can change their password at any time. Passwords are stored in an
encrypted format. Each password must be made up of single-byte characters, even
if your database uses a multi-byte character set.
To enhance security when using database authentication, Oracle recommends the
use of password management, including account locking, password aging and
expiration, password history, and password complexity verification.
The following statement creates a user who is identified and authenticated by
Oracle:
CREATE USER scott IDENTIFIED BY tiger;
See Also: For more information about the CREATE USER and ALTER USER
statements, see Oracle8i SQL Reference.
For more information about valid passwords, see Oracle8i SQL Reference.
For more information about Oracle password management, see Chapter 22,
"Establishing Security Policies".
External Authentication
When you choose external authentication for a user, the user account is maintained
by Oracle, but password administration and user authentication is performed by an
external service. This external service can be the operating system or a network
service, such as Net8.
With external authentication, your database relies on the underlying operating
system or network authentication service to restrict access to database accounts. A
database password is not used for this type of login. If your operating system or
network service permits, you can have it authenticate users. If you do so, set the
parameter OS_AUTHENT_PREFIX, and use this prefix in Oracle usernames. This
parameter defines a prefix that Oracle adds to the beginning of every user’s
operating system account name. Oracle compares the prefixed username with the
Oracle usernames in the database when a user attempts to connect.
For example, assume that OS_AUTHENT_PREFIX is set as follows:
OS_AUTHENT_PREFIX=OPS$
By default, Oracle allows operating system-authenticated logins only over secure
connections, so such a user cannot connect using a multi-threaded server, since that
connection uses Net8. This default restriction prevents a remote user from
impersonating another operating system user over a network connection.
If you are not concerned about remote users impersonating another operating
system user over a network connection, and you want to use operating system user
authentication with network clients, set the parameter REMOTE_OS_AUTHENT
(default is FALSE) to TRUE in the database’s parameter file. Setting the initialization
parameter REMOTE_OS_AUTHENT to TRUE allows the RDBMS to accept the
client operating system username received over a non-secure connection and use it
for account access. The change will take effect the next time you start the instance
and mount the database.
Network Authentication
Network authentication is performed via Net8, which may be configured to use a
third party service such as Kerberos. If you are using Net8 as the only external
authentication service, the setting of the parameter REMOTE_OS_AUTHENT is
irrelevant, since Net8 only allows secure connections.
See Also: For information about network authentication, see Oracle8i Distributed
Database Systems.
Enterprise Authentication
If you choose enterprise authentication for a user, the user account is maintained by
Oracle, but password administration and user authentication is performed by the
Oracle Security Service (OSS). This authentication service can be shared among
multiple Oracle database servers and allows users’ authentication and
authorization information to be managed centrally.
Use the following command to create a user (known as a global user) who is
identified by Oracle and authenticated by the Oracle Security Service:
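A sketch of such a statement; the username and external name shown are placeholders (see the reference below for the actual contents of the external name string):

```sql
-- The string after IDENTIFIED GLOBALLY AS is the user's external name.
CREATE USER scott IDENTIFIED GLOBALLY AS 'CN=scott,O=example';
```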
See Also: For information about the contents of the <EXTERNAL NAME> string,
see Oracle8i Distributed Database Systems.
Oracle Users
Each Oracle database has a list of valid database users. To access a database, a user
must run a database application and connect to the database instance using a valid
username defined in the database. This section explains how to manage users for a
database, and includes the following topics:
■ Creating Users
■ Altering Users
■ Dropping Users
Creating Users
To create a database user, you must have the CREATE USER system privilege.
When creating a new user, tablespace quotas can be specified for any tablespace in
the database, even if the creator does not have a quota on a specified tablespace.
Because it is a powerful privilege, a security administrator is normally the only user
who has the CREATE USER system privilege.
You create a user with the SQL statement CREATE USER. In this statement, you
can also specify the new user’s default and temporary tablespaces, tablespace
quotas, and profile.
CREATE USER OPS$jward
IDENTIFIED EXTERNALLY
DEFAULT TABLESPACE data_ts
TEMPORARY TABLESPACE temp_ts
Note: A newly created user cannot connect to the database until granted the
CREATE SESSION system privilege; see "Granting System Privileges and Roles" on
page 24-9.
Specifying a Name
Within each database a username must be unique with respect to other usernames
and roles; a user and role cannot have the same name. Furthermore, each user has
an associated schema. Within a schema, each schema object must have a unique
name.
In this case, the connecting user must supply the correct password to the database
to connect successfully.
The default setting for every user’s default tablespace is the SYSTEM tablespace. If
a user does not create objects, this default setting is fine. However, if a user creates
any type of object, consider specifically setting the user’s default tablespace. You
can set a user’s default tablespace during user creation, and change it later.
Changing the user’s default tablespace affects only objects created after the setting
is changed.
Consider the following issues when deciding which tablespace to specify:
■ Set a user’s default tablespace only if the user has the privileges to create objects
(such as tables, views, and clusters).
■ Set a user’s default tablespace to a tablespace for which the user has a quota.
■ If possible, set a user’s default tablespace to a tablespace other than the
SYSTEM tablespace to reduce contention between data dictionary objects and
user objects for the same datafiles.
In the previous CREATE USER statement, JWARD’s default tablespace is DATA_TS.
Assigning Tablespace Quotas
A user needs a quota in a tablespace before the user’s objects can consume space
there. Minimally, assign users a quota for the default tablespace, and
additional quotas for other tablespaces in which they will create objects.
You can assign a user either individual quotas for a specific amount of disk space in
each tablespace or an unlimited amount of disk space in all tablespaces. Specific
quotas prevent a user’s objects from consuming too much space in the database.
You can assign a user’s tablespace quotas when you create the user, or add or
change quotas later. If a new quota is less than the old one, then the following
conditions hold true:
■ If a user has already exceeded a new tablespace quota, the user’s objects in the
tablespace cannot be allocated more space until the combined space of these
objects falls below the new quota.
■ If a user has not exceeded a new tablespace quota, or if the space used by the
user’s objects in the tablespace falls under a new tablespace quota, the user’s
objects can be allocated space up to the new quota.
Revoking Tablespace Access You can revoke a user’s tablespace access by changing
the user’s current quota to zero. After a quota of zero is assigned, the user’s objects
in the revoked tablespace remain, but the objects cannot be allocated any new space.
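A zero quota is assigned with the ALTER USER statement; the names below reuse earlier examples:

```sql
-- JWARD's existing objects in DATA_TS remain, but can be allocated
-- no new space.
ALTER USER jward QUOTA 0 ON data_ts;
```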
Altering Users
Users can change their own passwords. However, to change any other option of a
user’s security domain, you must have the ALTER USER system privilege. Security
administrators are normally the only users that have this system privilege, as it
allows a modification of any user’s security domain. This privilege includes the
ability to set tablespace quotas for a user on any tablespace in the database, even if
the user performing the modification does not have a quota for a specified
tablespace.
You can alter a user’s security settings with the SQL statement ALTER USER.
Changing a user’s security settings affects the user’s future sessions, not current
sessions.
The following statement alters the security settings for user AVYRROS:
ALTER USER avyrros
IDENTIFIED EXTERNALLY
DEFAULT TABLESPACE data_ts
TEMPORARY TABLESPACE temp_ts
QUOTA 100M ON data_ts
QUOTA 0 ON test_ts
PROFILE clerk;
The ALTER USER statement here changes AVYRROS’s security settings as follows:
■ Authentication is changed to use AVYRROS’s operating system account.
■ AVYRROS’s default and temporary tablespaces are explicitly set.
■ AVYRROS is given a 100M quota for the DATA_TS tablespace.
Users can change their own passwords this way, without any special privileges
(other than those to connect to the database). Users should be encouraged to change
their passwords frequently.
Users must have the ALTER USER privilege to switch between Oracle database
authentication, external authentication, and enterprise authentication; usually, only
DBAs should have this privilege.
Dropping Users
When a user is dropped, the user and associated schema are removed from the data
dictionary and all schema objects contained in the user’s schema, if any, are
immediately dropped.
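You drop a user with the DROP USER statement. A sketch, with an illustrative username; the CASCADE option is needed when the user's schema contains objects:

```sql
-- Drops the user JONES and all objects in the JONES schema.
DROP USER jones CASCADE;
```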
See Also: For more information about terminating sessions, see "Terminating
Sessions" on page 4-15.
Creating Profiles
To create a profile, you must have the CREATE PROFILE system privilege. You can
create profiles using the SQL statement CREATE PROFILE. At the same time, you
can explicitly set particular resource limits.
The following statement creates the profile CLERK:
CREATE PROFILE clerk LIMIT
SESSIONS_PER_USER 2
CPU_PER_SESSION unlimited
CPU_PER_CALL 6000
LOGICAL_READS_PER_SESSION unlimited
LOGICAL_READS_PER_CALL 100
IDLE_TIME 30
CONNECT_TIME 480;
All unspecified resource limits for a new profile take the limit set by the DEFAULT
profile. You can also specify limits for the DEFAULT profile.
Any user with the ALTER PROFILE system privilege can adjust the limits in the
DEFAULT profile. The DEFAULT profile cannot be dropped.
Assigning Profiles
After a profile has been created, you can assign it to database users. Each user can
be assigned only one profile at any given time. If a profile is assigned to a user who
already has a profile, the new profile assignment overrides the previously assigned
profile. Profile assignments do not affect current sessions. Profiles can be assigned
only to users and not to roles or other profiles.
Profiles can be assigned to users using the SQL statements CREATE USER or
ALTER USER.
See Also: For more information about assigning a profile to a user, see "Creating
Users" on page 23-11 and "Altering Users" on page 23-15.
Altering Profiles
You can alter the resource limit settings of any profile using the SQL statement
ALTER PROFILE. To alter a profile, you must have the ALTER PROFILE system
privilege.
Any adjusted profile limit overrides the previous setting for that limit. Adjusting a
limit to the value DEFAULT reverts it to the default limit set for the database. All
limits not adjusted when altering a profile retain their previous settings. Changes
to a profile do not affect current sessions; new profile settings are used only for
sessions created after the profile is modified.
The following statement alters the CLERK profile:
ALTER PROFILE clerk LIMIT
CPU_PER_CALL default
LOGICAL_READS_PER_SESSION 20000;
See Also: For information about default profiles, see "Using the DEFAULT Profile"
on page 23-18.
Notice that both explicit resource limits and a composite limit can exist concurrently
for a profile. The limit that is reached first stops the activity in a session. Composite
limits allow additional flexibility when limiting the use of system resources.
A large cost means that the resource is very expensive, while a small cost means
that the resource is not expensive. By default, each resource is initially given a cost
of 0. A cost of 0 means that the resource should not be considered in the composite
limit (that is, it does not cost anything to use this resource). No resource can be
given a cost of NULL.
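Resource costs are assigned with the ALTER RESOURCE COST statement, and a composite limit is then set in a profile; the weights and limit below are illustrative:

```sql
-- Weight the resources that matter most at this site; unlisted
-- resources keep their current cost (initially 0).
ALTER RESOURCE COST
  CPU_PER_SESSION 1
  CONNECT_TIME 2;

-- A session is limited when the weighted sum of its resource usage
-- exceeds this number of service units.
ALTER PROFILE clerk LIMIT
  COMPOSITE_LIMIT 5000000;
```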
See Also: For additional information and recommendations on setting resource
costs, see your operating system-specific Oracle documentation and the Oracle8i
SQL Reference.
Dropping Profiles
To drop a profile, you must have the DROP PROFILE system privilege. You can
drop a profile using the SQL statement DROP PROFILE. To successfully drop a
profile currently assigned to a user, use the CASCADE option.
The following statement drops the profile CLERK, even though it is assigned to a
user:
DROP PROFILE clerk CASCADE;
■ DBA_PROFILES
■ RESOURCE_COST
■ V$SESSION
■ V$SESSTAT
■ V$STATNAME
See Also: See the Oracle8i Reference for detailed information about each view.
When specific quotas are assigned, the exact number is indicated in the
MAX_BYTES column. Unlimited quotas are indicated by "-1".
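Quotas can be listed, for example, by querying the DBA_TS_QUOTAS view:

```sql
-- MAX_BYTES is -1 for an unlimited quota.
SELECT username, tablespace_name, bytes, max_bytes
FROM dba_ts_quotas;
```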
Examples
This section contains examples that use functions described throughout this
chapter.
1. The following statement creates the profile prof:
CREATE PROFILE prof limit
FAILED_LOGIN_ATTEMPTS 5
PASSWORD_LIFE_TIME 60
PASSWORD_REUSE_MAX 60
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_VERIFY_FUNCTION verify_function
PASSWORD_LOCK_TIME 1
PASSWORD_GRACE_TIME 10;
2. The following statement attempts to create a user whose password is the same
as the username, with profile prof, and fails the complexity check:
CREATE USER userscott IDENTIFIED BY userscott PROFILE prof;
ORA-28003: Password verification for the specified password failed
ORA-20001: Password same as user
3. The following statement changes the user's password to "scott%":
ALTER USER userscott IDENTIFIED BY "scott%";
4. The following statement changes the user's password to "scott%" again and
returns an error:
ALTER USER userscott IDENTIFIED BY "scott%";
ORA-28007: The password cannot be reused
This chapter explains how to control the ability to execute system operations and
access to schema objects using privileges and roles. The following topics are
included:
■ Identifying User Privileges
■ Managing User Roles
■ Granting User Privileges and Roles
■ Revoking User Privileges and Roles
■ Granting Roles Using the Operating System or Network
■ Listing Privilege and Role Information
See Also: For information about controlling access to a database, see Chapter 23.
For suggested general database security policies, see Chapter 22.
System Privileges
There are over 100 distinct system privileges. Each system privilege allows a user to
perform a particular database operation or class of database operations.
For security reasons, system privileges do not allow users to access the data
dictionary. Hence, users with ANY privileges (such as UPDATE ANY TABLE,
SELECT ANY TABLE or CREATE ANY INDEX) cannot access dictionary tables and
views that have not been granted to PUBLIC.
See Also: For a complete list and description of system privileges, see the Oracle8i SQL
Reference.
Note: SYSDBA should not grant any user the object privileges for
nonexported objects in the dictionary; doing so may compromise
the integrity of the database.
See Also: For details about any exported table or view, see the Oracle8i Reference.
Object Privileges
Each type of object has different privileges associated with it. For a detailed list of
objects and associated privileges, see the Oracle8i SQL Reference.
You grant and revoke object privileges using the GRANT and REVOKE
statements. Note that if all object privileges are granted using the ALL shortcut,
individual privileges can still be revoked.
Likewise, all individually granted privileges can be revoked using the ALL
shortcut. However, if you REVOKE ALL, and revoking causes integrity constraints
to be deleted (because they depend on a REFERENCES privilege that you are
revoking), you must include the CASCADE CONSTRAINTS option in the REVOKE
statement.
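A sketch of such a statement; the table and user names are illustrative:

```sql
-- Revokes every object privilege on EMP granted to JFEE, dropping any
-- integrity constraints that depend on a revoked REFERENCES privilege.
REVOKE ALL ON emp FROM jfee CASCADE CONSTRAINTS;
```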
Creating a Role
You can create a role using the SQL statement CREATE ROLE.
You must have the CREATE ROLE system privilege to create a role. Typically, only
security administrators have this system privilege.
The following statement creates the CLERK role, which is authorized by the
database using the password BICENTENNIAL:
CREATE ROLE clerk
IDENTIFIED BY bicentennial;
Role Names
You must give each role you create a unique name among existing usernames and
role names of the database. Roles are not contained in the schema of any user.
Predefined Roles
The roles listed in Table 24–1 are automatically defined for Oracle databases. These
roles are provided for backward compatibility to earlier versions of Oracle. You can
grant and revoke privileges and roles to these predefined roles, much the way you
do with any role you define.
Role Authorization
A database role can optionally require authorization when a user attempts to enable
the role. Role authorization can be maintained by the database (using passwords),
by the operating system, or by a network service.
To alter the authorization method for a role, you must have the ALTER ANY ROLE
system privilege or have been granted the role with the ADMIN OPTION.
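For example, the authorization method of a role can be changed with the ALTER ROLE statement (the password shown is illustrative):

```sql
-- Change the CLERK role to require a new database password.
ALTER ROLE clerk IDENTIFIED BY commerce;
```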
See Also: For more information about network roles, see Oracle8i Distributed
Database Systems.
See Also: For more information about valid passwords, see the Oracle8i Reference.
Role authentication via the operating system is useful only when the operating
system is able to dynamically link operating system privileges with applications.
When a user starts an application, the operating system grants an operating system
privilege to the user. The granted operating system privilege corresponds to the role
associated with the application. At this point, the application can enable the
application role. When the application is terminated, the previously granted
operating system privilege is revoked from the user’s operating system account.
If a role is authorized by the operating system, you must configure information for
each user at the operating system level. This operation is operating system
dependent.
If roles are granted by the operating system, you do not need to have the operating
system authorize them also; this is redundant.
See Also: For more information about roles granted by the operating system, see
"Granting Roles Using the Operating System or Network" on page 24-16.
because a remote user could impersonate another operating system user over a
network connection.
If you are not concerned with this security risk and want to use operating system
role authentication for network clients, set the parameter REMOTE_OS_ROLES in
the database’s parameter file to TRUE. The change will take effect the next time you
start the instance and mount the database. (The parameter is FALSE by default.)
Withholding Authorization
A role can also be created without authorization. If a role is created without any
protection, the role can be enabled or disabled by any grantee.
Using the MAX_ENABLED_ROLES Parameter A user can enable as many roles as specified
by the initialization parameter MAX_ENABLED_ROLES. All indirectly granted roles
enabled as a result of enabling a primary role are included in this count. The database
administrator can alter this limitation by modifying the value for this parameter. Higher
values permit each user session to have more concurrently enabled roles. However, the
larger the value for this parameter, the more memory space is required on behalf of each
user session; this is because the PGA size is affected for each user session, and requires 4
bytes per role. Determine the highest number of roles that will be concurrently enabled
by any one user and use this value for the MAX_ENABLED_ROLES parameter.
Dropping Roles
In some cases, it may be appropriate to drop a role from the database. The security
domains of all users and roles granted a dropped role are immediately changed to
reflect the absence of the dropped role’s privileges. All indirectly granted roles of
the dropped role are also removed from affected security domains. Dropping a role
automatically removes the role from all users’ default role lists.
Because the creation of objects is not dependent on the privileges received via a role,
tables and other objects are not dropped when a role is dropped.
To drop a role, you must have the DROP ANY ROLE system privilege or have been
granted the role with the ADMIN OPTION.
You can drop a role using the SQL statement DROP ROLE.
The following statement drops the role CLERK:
DROP ROLE clerk;
The user MICHAEL can not only use all of the privileges implicit in the NEW_DBA
role, but can grant, revoke, or drop the NEW_DBA role as deemed necessary.
Because of these powerful capabilities, exercise caution when granting system
privileges or roles with the ADMIN OPTION. Such privileges are usually reserved
for a security administrator and rarely granted to other administrators or users of
the system.
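For reference, a grant of this kind, presumably the statement that put MICHAEL in this position, takes the following form:

GRANT new_dba TO michael WITH ADMIN OPTION;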
To grant the INSERT object privilege for only the ENAME and JOB columns of the
EMP table to the users JFEE and TSMITH, issue the following statement:
GRANT insert(ename, job) ON emp TO jfee, tsmith;
To grant all object privileges on the SALARY view to the user JFEE, use the ALL
shortcut, as shown in the following example:
GRANT ALL ON salary TO jfee;
The following statement revokes all privileges on the table DEPT from the role
HUMAN_RESOURCES:
REVOKE ALL ON dept FROM human_resources;
Note: The statement above revokes only the privileges that
the grantor authorized, not the grants made by other users. The
GRANT OPTION for an object privilege cannot be selectively
revoked. The object privilege must be revoked and then re-granted
without the GRANT OPTION. Users cannot revoke object
privileges from themselves.
To revoke the UPDATE privilege on just the DEPTNO column, issue the
following two statements:
REVOKE UPDATE ON dept FROM human_resources;
GRANT UPDATE (dname) ON dept TO human_resources;
The REVOKE statement revokes UPDATE privilege on all columns of the DEPT table
from the role HUMAN_RESOURCES. The GRANT statement re-grants UPDATE
privilege on the DNAME column to the role HUMAN_RESOURCES.
Any foreign key constraints currently defined that use the revoked REFERENCES
privilege are dropped when the CASCADE CONSTRAINTS option is specified.
System Privileges
There are no cascading effects when revoking a system privilege related to DDL
operations, regardless of whether the privilege was granted with or without the
ADMIN OPTION. For example, assume the following:
1. The security administrator grants the CREATE TABLE system privilege to JFEE
with the ADMIN OPTION.
2. JFEE creates a table.
3. JFEE grants the CREATE TABLE system privilege to TSMITH.
4. TSMITH creates a table.
5. The security administrator revokes the CREATE TABLE system privilege from
JFEE.
6. JFEE’s table continues to exist. TSMITH still has the table and the CREATE
TABLE system privilege.
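The numbered steps above can be sketched as SQL, issued by the security administrator and by JFEE in turn (table names are illustrative):

GRANT CREATE TABLE TO jfee WITH ADMIN OPTION;   -- step 1
CREATE TABLE jfee_data (id NUMBER);             -- step 2, as JFEE
GRANT CREATE TABLE TO tsmith;                   -- step 3, as JFEE
REVOKE CREATE TABLE FROM jfee;                  -- step 5

Both tables survive the revoke, and TSMITH retains CREATE TABLE: revoking a DDL system privilege does not cascade.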
Object Privileges
Revoking an object privilege may have cascading effects that should be investigated
before issuing a REVOKE statement.
■ Object definitions that depend on a DML object privilege can be affected if the
DML object privilege is revoked. For example, assume the procedure body of
the TEST procedure includes a SQL statement that queries data from the EMP
table. If the SELECT privilege on the EMP table is revoked from the owner of
the TEST procedure, the procedure can no longer be executed successfully.
■ Object definitions that require the ALTER and INDEX DDL object privileges are
not affected if the ALTER or INDEX object privilege is revoked. For example, if
the INDEX privilege is revoked from a user that created an index on someone
else’s table, the index continues to exist after the privilege is revoked.
■ When a REFERENCES privilege for a table is revoked from a user, any foreign
key integrity constraints defined by the user that require the dropped
REFERENCES privilege are automatically dropped. For example, assume that
the user JWARD is granted the REFERENCES privilege for the DEPTNO
column of the DEPT table and creates a foreign key on the DEPTNO column in
the EMP table that references the DEPTNO column. If the REFERENCES
privilege on the DEPTNO column of the DEPT table is revoked, the foreign key
constraint on the DEPTNO column of the EMP table is dropped in the same
operation.
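The JWARD example above corresponds to the following sequence; the constraint name and the schema of the EMP table are assumed for illustration:

GRANT REFERENCES (deptno) ON dept TO jward;
ALTER TABLE jward.emp
   ADD CONSTRAINT fk_deptno FOREIGN KEY (deptno)
   REFERENCES dept (deptno);
REVOKE REFERENCES ON dept FROM jward CASCADE CONSTRAINTS;

The REVOKE drops the FK_DEPTNO constraint along with the privilege.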
■ Object privilege grants propagated using the GRANT OPTION are revoked
if the grantor’s object privilege is revoked. For example, assume that USER1 is
granted the SELECT object privilege with the GRANT OPTION, and grants the
SELECT privilege on EMP to USER2. Subsequently, the SELECT privilege is
revoked from USER1. This revoke is cascaded to USER2 as well. Any objects
that depended on USER1’s and USER2’s revoked SELECT privilege can also be
affected, as described in previous bullet items.
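In SQL, the USER1/USER2 scenario looks like this; the grantor of the original privilege is the table owner, here assumed to be SCOTT:

GRANT SELECT ON emp TO user1 WITH GRANT OPTION;   -- as SCOTT
GRANT SELECT ON scott.emp TO user2;               -- as USER1
REVOKE SELECT ON emp FROM user1;                  -- as SCOTT

The revoke cascades: USER2 loses the SELECT privilege on SCOTT.EMP as well.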
administered using the operating system and passed to Oracle when a user creates
a session. As part of this mechanism, each user’s default roles and the roles granted
to a user with the ADMIN OPTION can be identified. Even if the operating system
is used to authorize users for roles, all roles must be created in the database and
privileges assigned to the role with GRANT statements.
Roles can also be granted through a network service. For information about
network roles, see Oracle8i Distributed Database Systems.
The advantage of using the operating system to identify a user’s database roles is
that privilege management for an Oracle database can be externalized. The security
facilities offered by the operating system control a user’s privileges. This option
may offer advantages of centralizing security for a number of system activities. For
example, MVS Oracle administrators may want RACF groups to identify a database
user’s roles, UNIX Oracle administrators may want UNIX groups to identify a
database user’s roles, or VMS Oracle administrators may want to use rights
identifiers to identify a database user’s roles.
The main disadvantage of using the operating system to identify a user’s database
roles is that privilege management can only be performed at the role level.
Individual privileges cannot be granted using the operating system, but can still be
granted inside the database using GRANT statements.
A secondary disadvantage of using this feature is that by default users cannot
connect to the database through the multi-threaded server, or any other network
connection, if the operating system is managing roles. However, you can change
this default; see "Using Network Connections with Operating System Role
Management" on page 24-19.
See Also: The features described in this section are available only on some
operating systems. This information is operating system-dependent; see your
operating system-specific Oracle documentation.
available for the user. Role specification can also indicate which roles are the default
roles of a user and which roles are available with the ADMIN OPTION. No matter
which operating system is used, the role specification at the operating system level
follows the format:
ORA_<ID>_<ROLE>[_[D][A]]
where:
ID
The definition of ID varies on different operating systems. For example, on VMS, ID
is the instance identifier of the database; on MVS, it is the machine type; on UNIX, it
is the system ID.
D
This optional character indicates that this role is to be a default role of the database
user.
A
This optional character indicates that this role is to be granted to the user with the
ADMIN OPTION. This allows the user to grant the role to other roles only. (Roles
cannot be granted to users if the operating system is used to manage roles.)
For example, an operating system account might have the following roles identified
in its profile:
ORA_PAYROLL_ROLE1
ORA_PAYROLL_ROLE2_A
ORA_PAYROLL_ROLE3_D
ORA_PAYROLL_ROLE4_DA
When the corresponding user connects to the PAYROLL instance of Oracle, ROLE3
and ROLE4 are defaults, while ROLE2 and ROLE4 are available with the ADMIN
OPTION.
EXTERNALLY if you are using OS_ROLES = TRUE, so that the database accounts
are tied to the OS account that was granted privileges.
EMP DELETE NO
To list all the column-specific privileges that have been granted, use the following
query:
SELECT grantee, table_name, column_name, privilege
FROM sys.dba_col_privs;
If SWILLIAMS has enabled the SECURITY_ADMIN role and issues this query,
Oracle returns the following information:
ROLE
------------------------------
SECURITY_ADMIN
The following query lists all system privileges currently available in the issuer’s
security domain, both from explicit privilege grants and from enabled roles:
SELECT * FROM session_privs;
If SWILLIAMS has the SECURITY_ADMIN role enabled and issues this query,
Oracle returns the following results:
PRIVILEGE
----------------------------------------
AUDIT SYSTEM
CREATE SESSION
CREATE USER
BECOME USER
ALTER USER
DROP USER
CREATE ROLE
DROP ANY ROLE
GRANT ANY ROLE
AUDIT ANY
CREATE PROFILE
ALTER PROFILE
DROP PROFILE
If the SECURITY_ADMIN role is disabled for SWILLIAMS, the first query would
have returned no rows, while the second query would only return a row for the
CREATE SESSION privilege grant.
ROLE PASSWORD
---------------- --------
CONNECT NO
RESOURCE NO
DBA NO
SECURITY_ADMIN YES
The following query lists all the system privileges granted to the
SECURITY_ADMIN role:
SELECT * FROM role_sys_privs WHERE role = 'SECURITY_ADMIN';
The following query lists all the object privileges granted to the
SECURITY_ADMIN role:
SELECT table_name, privilege FROM role_tab_privs
WHERE role = 'SECURITY_ADMIN';
TABLE_NAME PRIVILEGE
--------------------------- ----------------
AUD$ DELETE
AUD$ SELECT
This chapter describes how to use the Oracle auditing facilities, and includes the
following topics:
■ Guidelines for Auditing
■ Creating and Deleting the Database Audit Trail Views
■ Managing Audit Trail Information
■ Viewing Database Audit Trail Information
■ Auditing Through Database Triggers
After you have a clear understanding of the reasons for auditing, you can
devise an appropriate auditing strategy and avoid unnecessary auditing.
For example, suppose you are auditing to investigate suspicious database
activity. This information by itself is not specific enough. What types of
suspicious database activity do you suspect or have you noticed? A more
focused auditing purpose might be to audit unauthorized deletions from
arbitrary tables in the database. This purpose narrows the type of action being
audited and the type of object being affected by the suspicious activity.
■ Audit knowledgeably.
Audit the minimum number of statements, users, or objects required to get the
targeted information. This prevents unnecessary audit information from
cluttering the meaningful information and consuming valuable space in the
SYSTEM tablespace. Balance your need to gather sufficient security information
with your ability to store and process it.
For example, if you are auditing to gather information about database activity,
determine exactly what types of activities you are tracking, audit only the
activities of interest, and audit only for the amount of time necessary to gather
the information you desire. Do not audit objects if you are only interested in
each session’s logical I/O information.
■ USER_AUDIT_SESSION, DBA_AUDIT_SESSION
■ USER_AUDIT_STATEMENT, DBA_AUDIT_STATEMENT
■ USER_AUDIT_OBJECT, DBA_AUDIT_OBJECT
■ DBA_AUDIT_EXISTS
■ USER_AUDIT_SESSION, DBA_AUDIT_SESSION
■ USER_TAB_AUDIT_OPTS
See Also: For information about these views, see the Oracle8i Reference.
For examples of audit information interpretations, see "Viewing Database Audit
Trail Information" on page 25-17.
Action Code
This describes the operation performed or attempted. The AUDIT_ACTIONS data
dictionary table contains a list of these codes and their descriptions.
Privileges Used
This describes any system privileges used to perform the operation. The
SYSTEM_PRIVILEGE_MAP table lists all of these codes and their descriptions.
Completion Code
This describes the result of the attempted operation. Successful operations return a
value of zero, while unsuccessful operations return the Oracle error code describing
why the operation was unsuccessful.
On operating systems that do not make an audit trail accessible to Oracle, these
audit trail records are placed in an Oracle audit trail file in the same directory as
background process trace files.
See Also: For examples of trigger usage for this specialized type of auditing, see
"Auditing Through Database Triggers" on page 25-20.
Shortcuts for Statement Audit Options Shortcuts are provided so that you can specify
several related statement options with one word.
Shortcuts are not statement options themselves; rather, they are ways of specifying
sets of related statement options with one word in AUDIT and NOAUDIT
statements. Shortcuts for system privileges and statement options are detailed in the
Oracle8i SQL Reference.
Shortcut for Object Audit Options The ALL shortcut can be used to specify all available
object audit options for a schema object. This shortcut is not an option itself; rather,
it is a way of specifying all object audit options with one word in AUDIT and
NOAUDIT statements.
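For example, the following statement turns on every available object audit option for the DEPT table owned by JWARD:

AUDIT ALL ON jward.dept;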
subsequent database sessions to use these options; existing sessions will continue
using the audit options in place at session creation.
See Also: For a complete description of the AUDIT command, see the Oracle8i SQL
Reference.
For more information about enabling and disabling auditing, see "Enabling and
Disabling Database Auditing" on page 25-13.
You can set this option selectively for individual users also, as in the next example:
AUDIT SESSION
BY scott, lori;
To audit all successful and unsuccessful uses of the DELETE ANY TABLE system
privilege, enter the following statement:
AUDIT DELETE ANY TABLE;
To audit all unsuccessful SELECT, INSERT, and DELETE statements on all tables
and unsuccessful uses of the EXECUTE PROCEDURE system privilege, by all
database users, and by individual audited statement, issue the following statement:
AUDIT SELECT TABLE, INSERT TABLE, DELETE TABLE,
EXECUTE PROCEDURE
BY ACCESS
WHENEVER NOT SUCCESSFUL;
The AUDIT SYSTEM system privilege is required to set any statement or privilege
audit option. Normally, the security administrator is the only user granted this
system privilege.
Enabling Object Auditing To audit all successful and unsuccessful DELETE statements
on the SCOTT.EMP table, BY SESSION (the default value), enter the following
statement:
AUDIT DELETE ON scott.emp;
To audit all successful SELECT, INSERT, and DELETE statements on the DEPT table
owned by user JWARD, BY ACCESS, enter the following statement:
AUDIT SELECT, INSERT, DELETE
ON jward.dept
BY ACCESS
WHENEVER SUCCESSFUL;
To set the default object auditing options to audit all unsuccessful SELECT
statements, BY SESSION (the default), enter the following statement:
AUDIT SELECT
ON DEFAULT
WHENEVER NOT SUCCESSFUL;
A user can set any object audit option for the objects contained in the user’s schema.
The AUDIT ANY system privilege is required to set an object audit option for an
object contained in another user’s schema or to set the default object auditing
options; normally, the security administrator is the only user granted this system
privilege.
See Also: For a complete syntax listing of the NOAUDIT command, see the Oracle8i
SQL Reference.
Also see "Enabling and Disabling Database Auditing" on page 25-13.
The following statements turn off all statement (system) and privilege audit
options:
NOAUDIT ALL;
NOAUDIT ALL PRIVILEGES;
To disable statement or privilege auditing options, you must have the AUDIT
SYSTEM system privilege.
Disabling Object Auditing The following statements turn off the corresponding
auditing options:
NOAUDIT DELETE
ON emp;
NOAUDIT SELECT, INSERT, DELETE
ON jward.dept;
Furthermore, to turn off all object audit options on the EMP table, enter the
following statement:
NOAUDIT ALL
ON emp;
Disabling Default Object Audit Options To turn off all default object audit options, enter
the following statement:
NOAUDIT ALL
ON DEFAULT;
Note that all schema objects created before this NOAUDIT statement is issued
continue to use the default object audit options in effect at the time of their creation,
unless overridden by an explicit NOAUDIT statement after their creation.
To disable object audit options for a specific object, you must be the owner of the
schema object. To disable the object audit options of an object in another user’s
schema or to disable default object audit options, you must have the AUDIT ANY
system privilege. A user with privileges to disable object audit options of an object
can override the options set by any user.
After you have edited the parameter file, restart the database instance to enable or
disable database auditing as intended.
See Also: For more information about editing parameter files, see the Oracle8i
Reference.
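For example, to record audit trail records in the database audit trail, the parameter file entry would be:

AUDIT_TRAIL = DB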
Alternatively, to delete all audit records from the audit trail generated as a result of
auditing the table EMP, enter the following statement:
DELETE FROM sys.aud$
WHERE obj$name='EMP';
If audit trail information must be archived for historical purposes, the security
administrator can copy the relevant records to a normal database table (for
example, using "INSERT INTO table SELECT ... FROM sys.aud$ ...") or export the
audit trail table to an operating system file.
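For example, the following statements archive and then purge the audit records for the EMP table; the name of the archive table is illustrative:

CREATE TABLE audit_history AS
   SELECT * FROM sys.aud$ WHERE obj$name = 'EMP';
DELETE FROM sys.aud$ WHERE obj$name = 'EMP';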
Only the user SYS, a user who has the DELETE ANY TABLE privilege, or a user to
whom SYS has granted DELETE privilege on SYS.AUD$ can delete records from the
database audit trail.
Note: If the audit trail is completely full and connections are being
audited (that is, if the SESSION option is set), typical users cannot
connect to the database because the associated audit record for the
connection cannot be inserted into the audit trail. In this case, the
security administrator must connect as SYS (operations by SYS are
not audited) and make space available in the audit trail.
See Also: For information about exporting tables, see Oracle8i Utilities.
Audit records generated as a result of object audit options set for the SYS.AUD$
table can only be deleted from the audit trail by someone connected with
administrator privileges, which itself has protection against unauthorized use. As a
final measure of protecting the audit trail, any operation performed while
connected with administrator privileges is audited in the operating system audit
trail, if available.
See Also: For more information about the availability of an operating system audit
trail and possible uses, see your operating system-specific Oracle documentation.
EXECUTE scott.fire_employee(7902);
The following sections show the information that can be listed using the audit trail
views in the data dictionary.
Notice that the view reveals the statement audit options set, whether they are set for
success or failure (or both), and whether they are set for BY SESSION or BY
ACCESS.
OWNER OBJECT_NAME OBJECT_TY ALT AUD COM DEL GRA IND INS LOC ...
----- ----------- --------- --- --- --- --- --- --- --- --- ...
SCOTT EMP TABLE S/S -/- -/- A/- -/- S/S -/- -/- ...
SCOTT EMPLOYEE VIEW -/- -/- -/- A/- -/- S/S -/- -/- ...
Notice that the view returns information about all the audit options for the specified
object. The information in the view is interpreted as follows:
■ The character "-" indicates that the audit option is not set.
■ The character "S" indicates that the audit option is set, BY SESSION.
■ The character "A" indicates that the audit option is set, BY ACCESS.
■ Each audit option has two possible settings, WHENEVER SUCCESSFUL and
WHENEVER NOT SUCCESSFUL, separated by "/". For example, the DELETE
audit option for SCOTT.EMP is set BY ACCESS for successful delete statements
and not set at all for unsuccessful delete statements.
ALT AUD COM DEL GRA IND INS LOC REN SEL UPD REF EXE
--- --- --- --- --- --- --- --- --- --- --- --- ---
S/S -/- -/- -/- -/- S/S -/- -/- S/S -/- -/- -/- -/-
When deciding whether to create a trigger to audit database activity, consider the
advantages that the standard Oracle database auditing features provide compared
to auditing by triggers:
■ Standard auditing options cover DML and DDL statements regarding all types
of schema objects and structures.
■ All database audit information is recorded centrally and automatically using
the auditing features of Oracle.
■ Auditing features enabled using the standard Oracle features are easier to
declare and maintain and less prone to errors than are auditing functions
defined through triggers.
■ Any changes to existing auditing options can also be audited to guard against
malicious database activity.
■ Using the database auditing features, you can generate records once every time
an audited statement is issued (BY ACCESS) or once for every session that
issues an audited statement (BY SESSION). Triggers cannot audit by session; an
audit record is generated each time a trigger-audited table is referenced.
■ Database auditing can audit unsuccessful data access. In comparison, any audit
information generated by a trigger is rolled back if the triggering statement is
rolled back.
■ Connections and disconnections, as well as session activity (such as physical
I/Os, logical I/Os, and deadlocks), can be recorded by standard database
auditing.
When using triggers to provide sophisticated auditing, normally use AFTER
triggers. By using AFTER triggers, you record auditing information after the
triggering statement is subjected to any applicable integrity constraints, preventing
cases where audit processing is carried out unnecessarily for statements that
generate exceptions to integrity constraints.
Whether to use AFTER row or AFTER statement triggers depends on the
information being audited. For example, row triggers provide
value-based auditing on a per-row basis for tables. Triggers can also allow the user
to supply a "reason code" for issuing the audited SQL statement, which can be
useful in both row and statement-level auditing situations.
The following trigger audits modifications to the EMP table on a per-row basis. It
requires that a "reason code" be stored in a global package variable before the
update. The trigger demonstrates the following:
■ how triggers can provide value-based auditing
■ how to use public package variables
Comments within the code explain the functionality of the trigger.
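The trigger references a package named AUDITPACKAGE. A minimal sketch of such a package, with names assumed from the comments in the trigger, might look like this:

CREATE OR REPLACE PACKAGE auditpackage AS
   reason VARCHAR2(200);   /* public package variable; per-session state */
   PROCEDURE set_reason (reason_string IN VARCHAR2);
END auditpackage;
/
CREATE OR REPLACE PACKAGE BODY auditpackage AS
   PROCEDURE set_reason (reason_string IN VARCHAR2) IS
   BEGIN
      reason := reason_string;
   END set_reason;
END auditpackage;
/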
CREATE TRIGGER audit_employee
AFTER INSERT OR DELETE OR UPDATE ON emp
FOR EACH ROW
BEGIN
/* AUDITPACKAGE is a package with a public package
variable REASON. REASON could be set by the
application by a command such as EXECUTE
AUDITPACKAGE.SET_REASON(reason_string). Note that a
package variable has state for the duration of a
session and that each session has a separate copy of
all package variables. */
IF auditpackage.reason IS NULL THEN
raise_application_error(-20201, 'Must specify reason with ' ||
'AUDITPACKAGE.SET_REASON(reason_string)');
END IF;
/* If no error is raised, record the change along with the reason,
the user, and a timestamp. The audit_employee table and its
columns here are illustrative; use whatever columns your
application requires. */
INSERT INTO audit_employee
VALUES (:old.empno, :new.empno, auditpackage.reason, user, sysdate);
END;
Optionally, you can also set the reason code back to NULL if you want to force the
reason code to be set for every update. The following AFTER statement trigger sets
the reason code back to NULL after the triggering statement is executed:
CREATE TRIGGER audit_employee_reset
AFTER INSERT OR DELETE OR UPDATE ON emp
BEGIN
auditpackage.set_reason(NULL);
END;
The previous two triggers are both fired by the same type of SQL statement.
However, the AFTER row trigger is fired once for each row of the table affected by
the triggering statement, while the AFTER statement trigger is fired only once after
the triggering statement execution is completed.
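Put together, a session using these two triggers might look like the following; the reason string is arbitrary:

EXECUTE auditpackage.set_reason('Annual salary adjustment');
UPDATE emp SET sal = sal * 1.05 WHERE deptno = 20;

Because the AFTER statement trigger resets the reason to NULL, the next UPDATE must supply a new reason or it fails with error -20201.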
Index-1
database partially available to users, 3-7 SET MTS_DISPATCHERS option, 4-7
DATAFILE...OFFLINE DROP option, 10-8 SET MTS_SERVERS option, 4-6
DROP LOGFILE MEMBER option, 6-15 SET RESOURCE_LIMIT option, 23-21
DROP LOGFILE option, 6-14 SWITCH LOGFILE option, 6-16
MOUNT option, 3-7 ALTER SYSTEM RESUME, 3-13
NOARCHIVELOG option, 7-7 ALTER SYSTEM SUSPEND, 3-8
OPEN option, 3-7 ALTER TABLE command
RENAME FILE option ADD PARTITION clause, 13-11
datafiles for multiple tablespaces, 10-10 ALLOCATE EXTENT option, 14-11
UNRECOVERABLE DATAFILE option, 6-17 DISABLE ALL TRIGGERS option, 20-13
ALTER FUNCTION command DISABLE integrity constraint option, 20-20
COMPILE option, 20-25 DROP integrity constraint option, 20-21
ALTER INDEX COALESCE, 16-7 DROP PARTITION clause, 13-12
ALTER INDEX command, 13-18 ENABLE ALL TRIGGERS option, 20-12
about, 16-13 ENABLE integrity constraint option, 20-20
MAXTRANS option, 12-9 example, 14-11
MOVE PARTITION clause, 13-11 MAXTRANS option, 12-9
REBUILD PARTITION clause, 13-11, 13-20 MODIFY PARTITION clause, 13-10
ALTER PACKAGE command SPLIT PARTITION clause, 13-11, 13-17
COMPILE option, 20-25 TRUNCATE PARTITION clause, 13-15
ALTER PROCEDURE command ALTER TABLESPACE command
COMPILE option, 20-25 ADD DATAFILE parameter, 10-5
ALTER PROFILE command ONLINE option
altering resource limits, 23-19 example, 9-10
COMPOSITE_LIMIT option, 23-19 READ ONLY option, 9-12
ALTER RESOURCE COST command, 23-20 READ WRITE option, 9-14
ALTER ROLE command RENAME DATA FILE option, 10-10
changing authorization method, 24-8 ALTER TRIGGER command
ALTER ROLLBACK SEGMENT command DISABLE option, 20-13
changing storage parameters, 21-9 ENABLE option, 20-12
OFFLINE option, 21-12 ALTER USER privilege, 23-15
ONLINE option, 21-11, 21-12 ALTER VIEW command
PUBLIC option, 21-9 COMPILE option, 20-25
STORAGE clause, 21-9 altering
ALTER SEQUENCE command, 15-11 cluster indexes, 17-9
ALTER SESSION command clustered tables, 17-9
SET SQL_TRACE parameter, 4-10 clusters, 17-8
ALTER SYSTEM command database status, 3-7
ARCHIVE LOG ALL option, 7-10 hash clusters, 18-8
ARCHIVE LOG option, 7-10 indexes, 16-13
ENABLE RESTRICTED SESSION option, 3-9 public rollback segments, 21-9
SET LICENSE_MAX_SESSIONS option, 23-4 rollback segment storage parameters, 21-9
SET LICENSE_MAX_USERS option, 23-6 sequences, 15-10
SET LICENSE_SESSIONS_WARNING storage parameters, 14-10
option, 23-4 tables, 14-10, 14-11
Index-2
tablespace storage, 9-8 failed destinations and, 7-16
users, 23-15 multiplexing, 7-11
ANALYZE command normal transmission of, 7-14
CASCADE option, 20-8 specifying destinations for, 7-11
COMPUTE STATISTICS option, 20-7 standby transmission of, 7-14
ESTIMATE STATISTICS SAMPLE option, 20-7 status information, 7-24
LIST CHAINED ROWS option, 20-9 transmitting, 7-14
shared SQL and, 20-8 tuning, 7-20
STATISTICS option, 20-4 ARCHIVELOG mode, 7-4, 7-6
VALIDATE STRUCTURE option, 20-8 advantages, 7-5
ANALYZE TABLE VALIDATE STRUCTURE, 19-3 archiving, 7-4
analyzing archived redo logs, 7-25 automatic archiving in, 7-5
analyzing objects definition of, 7-4
about, 20-3 distributed databases, 7-6
privileges, 20-3 enabling, 7-7
application administrator, 1-3 manual archiving in, 7-5
database administrator versus, 22-11 running in, 7-4
application developers switching to, 7-7
privileges for, 22-9 taking datafiles offline and online in, 10-8
roles for, 22-10 archiving
application development advantages, 7-4
security for, 22-10 automatic
applications disabling, 7-9
quiescing during maintenance operations, 13-21 disabling at instance startup, 7-9
ARCH process enabling, 7-8
specifying multiple processes, 7-20 enabling after instance startup, 7-9
archive buffer parameters, 7-22 enabling at instance startup, 7-9
ARCHIVE LOG command changing archiving mode, 7-7
LIST option, 6-14 destination states, 7-13
ARCHIVE LOG option active/inactive, 7-14
ALTER SYSTEM command, 7-10 enabled/disabled, 7-13
archived redo logs, 7-2 valid/invalid, 7-13
analyzing, 7-25 destinations
archiving modes, 7-7 failure, 7-16
automatic archiving, 7-8 disabling, 7-7
destination states, 7-13 disadvantages, 7-4
active/inactive, 7-14 enabling, 7-7, 7-9
bad param, 7-14 increasing speed of, 7-23
deferred, 7-14 manual, 7-10
enabled/disabled, 7-13 minimizing impact on system performance, 7-
valid/invalid, 7-13 23
destinations multiple ARCH processes, 7-20
re-archiving to failed, 7-19 privileges
sample scenarios, 7-18 disabling, 7-9
enabling automatic archiving, 7-8 enabling, 7-8
Index-3
for manual archiving, 7-10 privileges required for system, 25-10
setting archive buffer parameters, 7-22 schema objects, 25-11
setting initial mode, 7-7 session level, 25-8
to failed destinations, 7-19 shortcuts for object, 25-9
tuning, 7-20 shortcuts for system, 25-8
viewing information on, 7-24 statement, 25-10
AUDIT command, 25-9 statement level, 25-8
schema objects, 25-11 suspicious activity, 25-3
statement auditing, 25-10 system privileges, 25-10
system privileges, 25-10 triggers and, 25-20
audit trail, 25-14 using the database, 25-2
archiving, 25-15 viewing
auditing changes to, 25-16 active object options, 25-19
controlling size of, 25-14 active privilege options, 25-18
creating and deleting, 25-4 active statement options, 25-18
deleting views, 25-5 defauly object options, 25-19
interpreting, 25-17 views, 25-4
maximum size of, 25-14 authentication
protecting integrity of, 25-16 database managed, 23-8
purging records from, 25-15 operating system, 1-7
recording changes to, 25-16 password file, 1-9
records in, 25-7 password policy, 22-4
reducing size of, 25-16 specifying when creating a user, 23-12
table that holds, 25-2 users, 22-2, 23-7, 23-9
views on, 25-4 authorization
AUDIT_TRAIL parameter changing for roles, 24-8
setting, 25-13 omitting for roles, 24-8
auditing, 25-2 operating-system role management and, 24-7
AUDIT command, 25-9 roles
audit option levels, 25-8 about, 24-6
audit trail records, 25-5 multi-threaded server and, 24-7
default options, 25-11 automatic archiving
disabling default options, 25-13 archive log destination, 7-8
disabling options, 25-11, 25-12, 25-13
disabling options versus auditing, 25-12
enabling options, 25-9, 25-13
B
enabling options versus auditing, 25-10 background processes
guidelines, 25-2 Oracle8i processes, 4-9
historical information, 25-4 BACKGROUND_DUMP_DEST parameter, 4-11
keeping information manageable, 25-2 backups
managing the audit trail, 25-4 after creating new databases
operating-system audit trails, 25-7 full backups, 2-7
policies for, 22-18 guidelines, 1-20
privilege audit options, 25-9 before database creation, 2-4
privileges required for object, 25-11 effects of archiving on, 7-4
Index-4
bad param destination state, 7-14 CLEAR LOGFILE option
bitmapped tablespaces, 9-5 ALTER DATABASE command, 6-17
bringing online clearing redo log files, 6-7, 6-17
tablespaces, 9-10 restrictions, 6-17
broken jobs cluster keys
about, 8-12 columns for, 17-4
marking, 8-13 SIZE parameter, 17-5
running, 8-13 clustered tables, 17-10
buffers clusters
buffer cache in SGA, 2-11 allocating extents, 17-9
bug fixes, 1-21 altering, 17-8
analyzing statistics, 20-3
choosing data, 17-4
C columns for cluster key, 17-4
CASCADE option creating, 17-6
integrity constraints, 17-11 dropped tables and, 14-13
when dropping unique or primary keys, 20-20 dropping, 17-10
cascading revokes, 24-14 estimating space, 17-5, 17-6
CATAUDIT.SQL guidelines for managing, 17-4
running, 25-4 hash
CATBLOCK.SQL script, 4-8 contrasted with index, 18-2
CATNOAUD.SQL hash clusters, 18-1
running, 25-5 index
change vectors, 6-2 contrasted with hash, 18-2
CHAR datatype index creation, 17-8
increasing column length, 14-10 indexes and, 16-2
space use of, 12-17 keys, 17-2
character sets location, 17-5
multi-byte characters managing, 17-1
in role names, 24-5 overview of, 17-2
in role passwords, 24-7 privileges
user passwords and, 23-12 for creating, 17-6
parameter file and, 3-14 for dropping, 17-10
specifying when creating a database, 2-2 specifying PCTFREE for, 12-4
supported by Oracle, 12-17 storage parameters, 12-10
CHECK constraint, 20-19 truncating, 20-9
check_object procedure, 19-3, 19-7 validating structure, 20-8
checkpoint process (CKPT) columns
starting, 4-12 displaying information about, 20-31
CHECKPOINT_PROCESS parameter granting privileges for selected, 24-10
setting, 4-12 granting privileges on, 24-11
checksums increasing length, 14-10
for data blocks, 10-12 INSERT privilege and, 24-11
redo log blocks, 6-16 listing users granted to, 24-21
CKPT, 4-12 privileges, 24-11
revoking privileges on, 24-13 requirement of one, 5-3
commands, SQL size of, 5-3
CREATE DATABASE, 6-10 specifying names before database creation, 2-10
commands, SQL*Plus unavailable during startup, 3-3
ARCHIVE LOG, 6-14 CONTROL_FILES parameter
HOST, 6-13 overwriting existing control files, 2-10
committing transactions setting
writing redo log buffer and, 6-2 before database creation, 2-10, 5-4
composite limits, 23-19 names for, 5-2
costs and, 23-20 costs
service units, 23-19 resource limits and, 23-20
COMPUTE STATISTICS option, 20-7 CREATE CLUSTER command
configuring an instance example, 17-7
with dedicated server processes, 4-2 for hash clusters, 18-4
CONNECT role, 24-5 HASH IS option, 18-6
connecting HASHKEYS option, 18-7
administrator privileges, 3-10 SIZE option, 18-6
connections CREATE CONTROLFILE command
auditing, 25-8 about, 5-5
dedicated servers, 4-2 checking for inconsistencies, 5-8
during shutdown, 3-9 NORESETLOGS option, 5-7
control files RESETLOGS option, 5-7
adding, 5-5 CREATE DATABASE command
changing size, 5-4 CONTROLFILE REUSE option, 5-4
conflicts with data dictionary, 5-8 example, 2-7
creating MAXLOGFILES option, 6-10
about, 5-3 MAXLOGMEMBERS option, 6-10
additional control files, 5-5 CREATE INDEX command
initially, 5-4 explicitly, 16-8
new files, 5-5 ON CLUSTER option, 17-8
default name, 2-10, 5-4 UNRECOVERABLE, 16-5
dropping, 5-9 with a constraint, 16-8
errors during creation, 5-9 CREATE PROFILE command
guidelines for, 5-2 about, 23-18
importance of mirrored, 5-2 COMPOSITE_LIMIT option, 23-19
location of, 5-3 CREATE ROLE command
log sequence numbers, 6-5 IDENTIFIED BY option, 24-7
managing, 5-1 IDENTIFIED EXTERNALLY option, 24-7
mirroring, 2-10 CREATE ROLLBACK SEGMENT command
moving, 5-5 about, 21-8
names, 5-2 tuning guidelines, 2-15
number of, 5-3 CREATE SCHEMA command
overwriting existing, 2-10 multiple tables and views, 20-2
relocating, 5-5 privileges required, 20-2
renaming, 5-5 CREATE SEQUENCE command, 15-10
CREATE SYNONYM command, 15-12 specifying storage parameters, 21-8
CREATE TABLE command sequences, 15-10
about, 14-9 synonyms, 15-12
CLUSTER option, 17-7 tables, 14-9
PARTITION clause, 13-9 tablespaces, 9-3
UNRECOVERABLE, 14-4 rollback segments required, 9-5
CREATE TABLESPACE command views, 15-2
datafile names in, 9-4
example, 9-4
CREATE USER command
D
IDENTIFIED BY option, 23-12 data
IDENTIFIED EXTERNALLY option, 23-12 security of, 22-3
CREATE VIEW command data blocks
about, 15-2 altering size of, 2-11
OR REPLACE option, 15-9 managing space usage of, 12-2
WITH CHECK OPTION, 15-3 managing space use of, 12-2
creating operating system blocks versus, 2-11
audit trail, 25-4 PCTFREE storage parameter, 12-3
cluster index, 17-6 PCTUSED storage parameter, 12-5
clustered tables, 17-6 shared in clusters, 17-2
clusters, 17-6 size of, 2-11
control files, 5-3 verifying, 10-12
database, 1-19, 2-1 data dictionary
backing up the new database, 2-7 changing storage parameters, 20-29
during installation, 2-3 conflicts with control files, 5-8
executing CREATE DATABASE, 2-6 dropped tables and, 14-12
migration from different versions, 2-3 schema object views, 20-29
preparing to, 2-2 segments in the, 20-27
prerequisites for, 2-3 setting storage parameters of, 20-26
problems encountered while, 2-8 V$DBFILE view, 2-8
databases, 7-7 V$DISPATCHER view, 4-7
datafiles, 9-3, 10-5 V$LOGFILE view, 2-8
hash clustered tables, 18-4 V$QUEUE view, 4-7
hash clusters, 18-4 data integrity, 20-19
indexes integrity constraints, 20-19
explicitly, 16-8 database administrator, 1-2
multiple objects, 20-2 application administrator versus, 22-11
online redo log groups, 6-11 initial priorities, 1-17
parameter file, 2-4 operating-system account, 1-4
partitioned objects, 13-9 password files for, 1-7
partitioned tables, 13-9 responsibilities of, 1-2
profiles, 23-18 roles
redo log members, 6-11 about, 1-6
rollback segments for security, 22-8
about, 21-8 security and privileges of, 1-4
security for, 22-7 specifying control files, 2-10
security officer versus, 1-3, 22-2 starting up
usernames, 1-5 before database creation, 2-6
utilities for, 1-17 general procedures for, 3-2
database links restricting access, 3-4
job queues and, 8-9 structure of
Database Resource Manager, 11-1 distributed database, 1-19
databases test, 22-9
administering, 1-1 tuning
auditing, 25-1 archiving large databases, 7-20
availability, 3-7 responsibilities for, 1-20
backing up user responsibilities, 1-3
after creation of, 1-20 viewing datafiles and redo log files, 2-8
full backups, 2-7 datafiles
control files of, 5-2 adding to a tablespace, 10-5
CREATE DATABASE command, 2-7 bringing online and offline, 10-7
creating checking associated tablespaces, 9-31
opening and, 1-19 creating, 9-3
trouble-shooting problems, 2-8 database administrators access, 1-4
design of default directory, 10-5
implementing, 1-20 dropping, 9-14
dropping, 2-8 NOARCHIVELOG mode, 10-8
exclusive mode, 3-6 fully specifying filenames, 10-5
global database name identifying filenames, 10-11
about, 2-9 location, 10-4
global database names managing, 10-1
in a distributed system, 2-9 maximum number of, 10-2
hardware evaluation, 1-18 minimum number of, 10-2
logical structure of, 1-19 MISSING, 5-8
managing monitoring, 10-13
size of, 10-1 online, 10-8
migration of, 2-3 privileges to rename, 10-9
mounting a database, 3-4 privileges to take offline, 10-8
mounting to an instance, 3-7 relocating, 10-9, 10-10
names relocating, example, 10-11
about, 2-9 renaming, 10-9, 10-10
conflicts in, 2-9 renaming for single tables, 10-9
opening reusing, 10-5
a closed database, 3-7 size of, 10-4
parallel mode, 3-6 storing separately from redo log files, 10-4
physical structure of, 1-19 unavailable when database is opened, 3-3
planning, 1-18 verifying data blocks, 10-12
production, 22-9, 22-11 viewing
renaming, 5-5 general status of, 10-13
restricting access to, 3-4, 3-8 V$DBFILE and V$LOGFILE views, 2-8
datatypes DBMS_LOGMNR_D.BUILD package, 7-28
character, 12-17 DBMS_LOGMNR.ADD_LOGFILE package
DATE, 12-18 LogMiner, 7-29
individual type names, 12-17 DBMS_LOGMNR.START_LOGMNR package
LONG, 12-18 LogMiner, 7-30
NUMBER, 12-17 DBMS_REPAIR package, 19-1
space use of, 12-17 DBMS_RESOURCE_MANAGER package, 11-3
summarized, 12-19 DBMS_RESOURCE_MANAGER_PRIVS
DATE datatype, 12-18 package, 11-10
DB_BLOCK_BUFFERS parameter DBMS_SESSION package, 11-11
setting before database creation, 2-11 DBMS_UTILITY.ANALYZE_SCHEMA()
DB_BLOCK_CHECKING parameter, 19-3 running, 20-8
DB_BLOCK_CHECKSUM, 10-12 dedicated server processes
DB_BLOCK_SIZE parameter configuring, 4-2
database buffer cache size and, 2-11 connecting with, 4-2
setting before creation, 2-11 trace files for, 4-10
DB_DOMAIN parameter dedicated servers
setting before database creation, 2-9 multi-threaded servers contrasted with, 4-3
DB_NAME parameter default
setting before database creation, 2-9 audit options, 25-11
DB_VERIFY utility, 19-3 disabling, 25-13
DBA, 1-2 profile, 23-18
DBA role, 1-6, 24-5 role, 23-16
DBA_DATA_FILES, 9-31, 10-13 tablespace quota, 23-13
DBA_EXTENTS, 10-13 temporary tablespace, 23-13
DBA_FREE_SPACE, 9-31, 10-13 user tablespaces, 23-12
DBA_FREE_SPACE_COALESCED view, 9-9 DEFAULT_CONSUMER_GROUP, 11-9
DBA_INDEXES view deferred destination state, 7-14
filling with data, 20-5 deleting
DBA_ROLLBACK_SEGS view, 21-14 table statistics, 20-4
DBA_SEGMENTS, 9-31, 10-13 dependencies
DBA_TAB_COLUMNS view displaying, 20-32
filling with data, 20-5 destination states for archived redo logs, 7-13
DBA_TABLES view destinations
filling with data, 20-5 archived redo logs
DBA_TABLESPACES, 9-31, 10-13 sample scenarios, 7-18
DBA_TABLESPACES view, 9-15 developers, application, 22-9
DBA_TS_QUOTAS, 9-31, 10-13 dictionary files
DBA_USERS, 9-31, 10-13 LogMiner and the, 7-27
DBMS_JOB package disabled destination state
altering a job, 8-11 for archived redo logs, 7-13
forcing jobs to execute, 8-14 disabling
job queues and, 8-3 archiving, 7-7, 7-9
REMOVE procedure and, 8-11 audit options, 25-11, 25-12
submitting jobs, 8-4 auditing, 25-13
integrity constraints, 20-18 clusters, 17-10
effects on indexes, 16-7 control files, 5-9
resource limits, 23-21 databases, 2-8
triggers, 20-12 datafiles, 9-14
disconnections hash clusters, 18-9
auditing, 25-8 index partition, 13-14
dispatcher processes indexes, 16-15
number to start, 4-5 integrity constraints
privileges to change number of, 4-7 about, 20-21
removing, 4-7 effects on indexes, 16-7
setting the number of, 4-7 online redo log groups, 6-14
spawning new, 4-7 online redo log members, 6-14
distributed databases profiles, 23-21
running in ARCHIVELOG mode, 7-6 roles, 24-8
running in NOARCHIVELOG mode, 7-6 rollback segments, 21-11, 21-13
starting a remote instance, 3-6 sequences, 15-11
distributed processing synonyms, 15-12
parameter file location in, 3-15 table partitions, 13-12
distributing I/O, 2-15 tables, 14-12
DROP CLUSTER command tablespaces
CASCADE CONSTRAINTS option, 17-11 about, 9-14
dropping required privileges, 9-15
cluster with no tables, 17-11 users, 23-16
hash cluster, 18-9 views, 15-9
INCLUDING TABLES option, 17-11 dump_orphan_keys procedure, 19-6, 19-9
DROP LOGFILE MEMBER option dynamic performance tables
ALTER DATABASE command, 6-15 using, 4-9
DROP LOGFILE option
ALTER DATABASE command, 6-14
DROP PARTITION clause
E
ALTER TABLE command, 13-12 enabled destination state
DROP PROFILE command, 23-21 for archived redo logs, 7-13
DROP ROLE command, 24-8, 24-9 enabling
DROP ROLLBACK SEGMENT command, 21-14 archiving, 7-7
DROP SYNONYM command, 15-12 auditing options
DROP TABLE command about, 25-9
about, 14-12 privileges for, 25-13
CASCADE CONSTRAINTS option, 14-12 integrity constraints
for clustered tables, 17-10 at creation, 20-18
DROP TABLESPACE command, 9-15 example, 20-19
DROP USER command, 23-17 reporting exceptions, 20-21
DROP USER privilege, 23-17 when violations exist, 20-15
dropping resource limits, 23-21
audit trail, 25-4 triggers, 20-12
cluster indexes, 17-10 encryption
Oracle passwords, 23-8 extents
enroll allocating
database users, 1-20 clusters, 17-9
Enterprise Manager index creation, 16-6
operating system account, 1-4 tables, 14-11
environment of a job, 8-6 data dictionary views for, 20-30
errors displaying free extents, 20-33
ALERT file and, 4-10 displaying information on, 20-32
ORA-00028, 4-16 dropped tables and, 14-12
ORA-01090, 3-9
ORA-01173, 5-9
ORA-01176, 5-9
F
ORA-01177, 5-9 failures
ORA-1215, 5-9 media
ORA-1216, 5-9 multiplexed online redo logs, 6-5
ORA-1547, 20-29 files
ORA-1628 through 1630, 20-29 OS limit on number open, 9-2
snapshot too old, 21-5 fix_corrupt_blocks procedure, 19-5, 19-7
trace files and, 4-10 forcing a log switch, 6-16
when creating a database, 2-8 with the ALTER SYSTEM command, 6-16
when creating control file, 5-9 FOREIGN KEY constraint
while starting an instance, 3-5 enabling, 20-19
ESTIMATE STATISTICS option, 20-7 free space
estimating size coalescing, 9-8
hash clusters, 18-4 listing free extents, 20-33
tables, 14-5 tablespaces and, 9-32
evaluating function-based indexes, 16-9
hardware for the Oracle8i, 1-18 functions
example recompiling, 20-25
creating constraints, 20-19
examples G
altering an index, 16-13
exceptions global database name, 2-9
integrity constraints, 20-21 global index
exclusive mode dropping partition with, 13-12, 13-15
of the database, 3-6 splitting partition in, 13-18
rollback segments and, 21-3 global user, 23-10
terminating remaining user sessions, 4-16 GRANT command
EXP_FULL_DATABASE role, 24-5 ADMIN option, 24-10
Export utility GRANT option, 24-11
about, 1-17 object privileges, 24-10
restricted mode and, 3-4 SYSOPER/SYSDBA privileges, 1-13
exporting jobs, 8-7 system privileges and roles, 24-9
exports when takes effect, 24-15
modes, 7-14, 7-18, 7-19 GRANT OPTION
about, 24-11 restricted mode and, 3-4
revoking, 24-13 importing
granting privileges and roles jobs, 8-7
listing grants, 24-19 inactive destination state
shortcuts for object privileges, 24-3 for archived redo logs, 7-14
SYSOPER/SYSDBA privileges, 1-13 index partition
groups dropping, 13-14
redo log files moving, 13-11
LOG_FILES initialization parameter, 6-10 rebuilding, 13-20
Guidelines, 10-2 splitting, 13-18
guidelines indexes
for managing rollback segments, 21-2 adding partition, 13-12
altering, 16-13
analyzing statistics, 20-3
H cluster
hardware altering, 17-9
evaluating, 1-18 creating, 17-6
hash clusters dropping, 17-10
altering, 18-8 managing, 17-1
choosing key, 18-6 correct tables and columns, 16-8
clusters, 18-1 creating
controlling space use of, 18-6 after inserting table data, 16-3
creating, 18-4 explicitly, 16-8
dropping, 18-9 unrecoverably, 16-5
estimating storage, 18-4 disabling and dropping constraints and, 16-7
example, 18-7 dropped tables and, 14-12
managing, 18-1 dropping, 16-15
usage, 18-2 estimating size, 16-5
high water mark extent allocation for, 16-6
for a session, 23-3 guidelines for managing, 16-2
historical table INITRANS for, 16-4
moving time window in, 13-20 limiting per table, 16-3
HOST command managing, 16-1, 16-15
SQL*Plus, 6-13 MAXTRANS for, 16-4
monitoring space use of, 16-14
I overview of, 16-2
parallelizing index creation, 16-5
I/O PCTFREE for, 16-4
distributing, 2-15 PCTUSED for, 16-4
identification privileges
users, 23-7 for altering, 16-13
IMP_FULL_DATABASE role, 24-5 for dropping, 16-15
implementing database design, 1-20 separating from a table, 14-6
Import utility setting storage parameters for, 16-5
about, 1-17 SQL*Loader and, 16-3
storage parameters, 12-10 dropping, 20-21
tablespace for, 16-4 dropping and disabling, 16-7
temporary segments and, 16-3 dropping tablespaces and, 9-15
validating structure, 20-8 enabling, 20-14
index-organized table, 14-14 enabling on creation, 20-18
in-doubt transactions enabling when violations exist, 20-15
rollback segments and, 21-11 exceptions to, 20-21
initial managing, 20-15
passwords for SYS and SYSTEM, 1-5 violations, 20-15
INITIAL storage parameter, 12-7 when to disable, 20-15
altering, 14-11 INTERNAL
initialization parameters alternatives to, 1-8
affecting sequences, 15-11 connecting for shutdown, 3-10
LOG_ARCHIVE_BUFFER_SIZE, 7-22, 7-23 OSOPER and OSDBA, 1-8
LOG_ARCHIVE_BUFFERS, 7-22, 7-23 security for, 22-8
LOG_ARCHIVE_DEST_n, 7-11 INTERNAL date function
LOG_ARCHIVE_DEST_STATE_n, 7-13 executing jobs and, 8-8
LOG_ARCHIVE_MAX_PROCESSES, 7-20 invalid destination state
LOG_ARCHIVE_MIN_SUCCEED_DEST, 7-17 for archived redo logs, 7-13
LOG_ARCHIVE_START, 7-9, 7-14
LOG_BLOCK_CHECKSUM, 6-16
LOG_FILES, 6-10
J
multi-threaded server and, 4-4 Job, 8-3
INITRANS storage parameter job queues, 8-2, 8-3
altering, 14-11 executing jobs in, 8-9
default, 12-9 locks, 8-9
guidelines for setting, 12-9 privileges for using, 8-4
transaction entries and, 12-9 removing jobs from, 8-11
INSERT privilege scheduling jobs, 8-3
granting, 24-11 viewing, 8-15
revoking, 24-13 jobs
installation altering, 8-11
and creating a database, 2-3 broken, 8-12
Oracle8i, 1-18 database links and, 8-9
tuning recommendations for, 2-14 executing, 8-9
instance menu exporting, 8-7
Prevent Connections option, 3-9 forcing to execute, 8-14
instances importing, 8-7
aborting, 3-12 INTERNAL date function and, 8-8
shutting down immediately, 3-11 job definition, 8-7
starting, 3-2 job number, 8-7
starting before database creation, 2-6 killing, 8-14
integrity constraints managing, 8-3
disabling, 20-14, 20-19 marking broken jobs, 8-13
disabling on creation, 20-18 ownership of, 8-7
removing from job queue, 8-11 privileges for changing named user limits, 23-6
running broken jobs, 8-13 privileges for changing session limits, 23-5
scheduling, 8-3 session-based, 23-2
submitting to job queue, 8-4 viewing limits, 23-6
trace files, 8-10 limits
troubleshooting, 8-10 composite limits, 23-19
join view, 15-4 concurrent usage, 23-2
DELETE statements, 15-7 resource limits, 23-19
key-preserved tables in, 15-5 session, high water mark, 23-3
mergeable, 15-5 LIST CHAINED ROWS option, 20-9
modifying location
rule for, 15-6 rollback segments, 21-7
when modifiable, 15-4 locks
JQ locks, 8-9 job queue, 8-9
monitoring, 4-8
log sequence number
K control files, 6-5
key-preserved tables log switches
in join views, 15-5 description, 6-5
keys forcing, 6-16
cluster, 17-2 log sequence numbers, 6-5
killing multiplexed redo log files and, 6-7
jobs, 8-14 privileges, 6-16
waiting for archiving to complete, 6-7
L log writer process (LGWR)
multiplexed redo log files and, 6-6
LGWR, 4-11 online redo logs available for use, 6-3
LICENSE_MAX_SESSIONS parameter trace file monitoring, 4-11
changing while instance runs, 23-4 trace files and, 6-6
setting, 23-4 writing to online redo log files, 6-2, 6-3
setting before database creation, 2-12 LOG_ARCHIVE_BUFFER_SIZE initialization
LICENSE_MAX_USERS parameter parameter, 7-23
changing while database runs, 23-6 LOG_ARCHIVE_BUFFERS initialization
setting, 23-6 parameter, 7-23
setting before database creation, 2-12 LOG_ARCHIVE_BUFFERS parameter
LICENSE_SESSION_WARNING parameter setting, 7-23
setting before database creation, 2-12 LOG_ARCHIVE_DEST initialization parameter
LICENSE_SESSIONS_WARNING parameter specifying destinations using, 7-11
changing while instance runs, 23-4 LOG_ARCHIVE_DEST_n initialization
setting, 23-4 parameter, 7-11
licensing REOPEN option, 7-19
complying with license agreement, 2-12, 23-2 LOG_ARCHIVE_DUPLEX_DEST initialization
concurrent usage, 23-2 parameter
named user, 23-2, 23-5 specifying destinations using, 7-11
number of concurrent sessions, 2-13 LOG_ARCHIVE_MAX_PROCESSES initialization
parameter, 7-20 MAXDATAFILES parameter
LOG_ARCHIVE_MIN_SUCCEED_DEST changing, 5-5
initialization parameter, 7-17 MAXEXTENTS storage parameter
LOG_ARCHIVE_START initialization about, 12-8
parameter, 7-9 setting for the data dictionary, 20-27
bad param destination state, 7-14 MAXINSTANCES parameter
setting, 7-9 changing, 5-5
LOG_BLOCK_CHECKSUM initialization parameter MAXLOGFILES option
enabling redo block checking with, 6-16 CREATE DATABASE command, 6-10
LOG_FILES initialization parameter MAXLOGFILES parameter
number of log files, 6-10 changing, 5-5
logical structure of a database, 1-19 MAXLOGHISTORY
LogMiner, 7-25 changing, 5-5
LogMiner utility, 7-25, 7-31 MAXLOGMEMBERS option
dictionary file, 7-27 CREATE DATABASE command, 6-10
using the, 7-29, 7-30 MAXLOGMEMBERS parameter
using to analyze archived redo logs, 7-25 changing, 5-5
LONG datatype, 12-18 MAXTRANS storage parameter
altering, 14-11
default, 12-9
M guidelines for setting, 12-9
maintenance release number, 1-21 transaction entries and, 12-9
managing media recovery
auditing, 25-1 effects of archiving on, 7-4
cluster indexes, 17-1 memory
clustered tables, 17-1 viewing per user, 23-25
clusters, 17-1 migration
indexes, 16-1, 16-15 database migration, 2-3
jobs, 8-3 MINEXTENTS storage parameter
object dependencies, 20-23 about, 12-8
profiles, 23-17 altering, 14-11
roles, 24-4 mirrored control files
rollback segments, 21-1 importance of, 5-2
sequences, 15-9 mirrored files
synonyms, 15-11 online redo log, 6-6
tables, 14-1 location, 6-9
users, 23-11 size, 6-9
views, 15-1, 15-9 mirroring
manual archiving control files, 2-10
in ARCHIVELOG mode, 7-10 modes
marked user session, 4-17 exclusive, 3-6
MAX_DUMP_FILE_SIZE parameter, 4-11 parallel, 3-6
MAX_ENABLED_ROLES parameter restricted, 3-4, 3-8
default roles and, 24-8 modifiable join view
enabling roles and, 24-8 definition of, 15-4
MODIFY PARTITION clause NEXT storage parameter, 12-8
ALTER TABLE command, 13-10 setting for the data dictionary, 20-27
modifying NOARCHIVELOG mode
a join view, 15-4 archiving, 7-4
monitoring definition, 7-4
datafiles, 10-13 media failure, 7-4
locks, 4-8 no hot backups, 7-4
performance tables, 4-9 running in, 7-4
processes of an instance, 4-8 switching to, 7-7
rollback segments, 21-6 taking datafiles offline in, 10-8
tablespaces, 10-13 NOAUDIT command
mounting a database, 3-4 disabling audit options, 25-11
exclusive mode, 3-6 privileges, 25-12
parallel mode, 3-6 schema objects, 25-12
MOVE PARTITION clause statements, 25-12
ALTER TABLE command, 13-11 normal transmission mode
moving definition, 7-15
control files, 5-5 NOT NULL constraint, 20-19
index partitions, 13-11 NUMBER datatype, 12-17
relocating, 10-9
table partition, 13-10
MTS_DISPATCHERS parameter
O
setting initially, 4-5 objects, schema
multiplexing cascading effects on revoking, 24-14
archived redo logs, 7-11 default tablespace for, 23-13
redo log files, 6-5 granting privileges, 24-10
groups, 6-6 in a revoked tablespace, 23-14
multi-threaded server owned by dropped users, 23-16
configuring dispatchers, 4-5 privileges with, 24-3
database startup and, 3-2 revoking privileges, 24-12
dedicated server contrasted with, 4-3 offline rollback segments
enabling and disabling, 4-6 about, 21-10
OS role management restrictions, 24-19 bringing online, 21-11
restrictions on OS role authorization, 24-7 when to use, 21-10
starting, 4-4 offline tablespaces
altering, 9-10
priorities, 9-10
N rollback segments and, 21-10
named user limits, 23-5 online index, 16-7
setting initially, 2-13 online redo log, 6-2
Net8 creating
service names in, 7-15 groups and members, 6-11
transmitting archived logs via, 7-15 creating members, 6-11
network protocol do not back up, 7-3
dispatcher for each, 4-5 dropping groups, 6-14
dropping members, 6-14 Oracle8i
forcing a log switch, 6-16 installing, 1-18
guidelines for configuring, 6-5 Oracle8i Server
INVALID members, 6-15 complying with license agreement, 23-2
location of, 6-9 identifying releases, 1-21
managing, 6-1 processes
moving files, 6-13 checkpoint (CKPT), 4-12
number of files in the, 6-9 monitoring, 4-8
optimum configuration for the, 6-9 operating-system names, 4-9
privileges trace files for, 4-10
adding groups, 6-11 Oracle8i Server processes
dropping groups, 6-14 processes
dropping members, 6-15 dedicated server processes, 4-2
forcing a log switch, 6-16 identifying and managing, 4-7
renaming files, 6-13 ORAPWD utility, 1-9
renaming members, 6-12 OS authentication, 1-7
STALE members, 6-15 OS_ROLES parameter
storing separately from datafiles, 10-4 operating-system authorization and, 24-7
unavailable when database is opened, 3-3 REMOTE_OS_ROLES and, 24-19
viewing information about, 6-18 using, 24-17
online rollback segments owner of a queued job, 8-7
about, 21-10
bringing rollback segments online, 21-11
taking offline, 21-12
P
when new, 21-8 packages
online tablespaces DBMS_LOGMNR_D.BUILD, 7-28
altering, 9-10 DBMS_LOGMNR.ADD_LOGFILE, 7-29
opening a database DBMS_LOGMNR.START_LOGMNR, 7-30
after creation, 1-19 privileges for recompiling, 20-25
mounted database, 3-7 recompiling, 20-25
operating system parallel mode
accounts, 24-17 of the database, 3-6
auditing with, 25-2 parallel query option
authentication, 24-16 number of server processes, 4-13
database administrators requirements for, 1-4 parallelizing index creation, 16-5
deleting datafiles, 9-15 parallelizing table creation, 14-4
enabling and disabling roles, 24-19 query servers, 4-13
limit of number of open files, 10-2 Parallel Server
Oracle8i process names, 4-9 ALTER CLUSTER...ALLOCATE EXTENT, 17-10
renaming and relocating files, 10-9 datafile upper bound for instances, 10-3
role identification, 24-17 licensed session limit and, 2-13
roles and, 24-16 limits on named users and, 23-5
security in, 22-3 named users and, 2-13
OPTIMAL storage parameter, 21-5 own rollback segments, 21-3
Oracle blocks, 2-11 sequence numbers and, 15-10
session and warning limits, 23-4 creating, 1-9
specifying thread for archiving, 7-11 OS authentication, 1-7
threads of online redo log, 6-2 relocating, 1-16
V$THREAD view, 6-18 removing, 1-16
PARALLEL_MAX_SERVERS parameter, 4-13 state of, 1-16
PARALLEL_MIN_SERVERS parameter, 4-13 privileges for changing for roles, 24-6
PARALLEL_SERVER_IDLE_TIME parameter, 4-13 privileges to alter, 23-15
parameter files roles, 24-7
character set of, 3-14 security policy for users, 22-4
creating for database creation, 2-4 setting REMOTE_LOGIN_PASSWORD
editing before database creation, 2-5 parameter, 1-11
individual parameter names, 2-9 user authentication, 23-8
location of, 3-15 patch release number, 1-22
minimum set of, 2-9 PCTFREE storage parameter
number of, 3-14 altering, 14-10
sample of, 3-14 block overhead and, 12-6
partition clustered tables, 12-4
adding to index, 13-12 default, 12-3
dropping from index, 13-14 guidelines for setting, 12-3
PARTITION clause how it works, 12-2
CREATE TABLE command, 13-9 indexes, 12-4
partitioned index non-clustered tables, 12-4
rebuilding partitions, 13-20 PCTUSED and, 12-6
partitioned objects, 13-1 to 13-21 PCTINCREASE storage parameter
adding, 13-11 about, 12-8
creating, 13-9 altering, 12-11
definition, 13-2 setting for the data dictionary, 20-27
maintaining, 13-9 to 13-21 PCTUSED storage parameter
merging, 13-18 altering, 14-10
moving, 13-10 block overhead and, 12-6
quiescing applications during maintenance default, 12-5
of, 13-21 guidelines for setting, 12-5
splitting partition, 13-17 how it works, 12-4
truncating, 13-15 PCTFREE and, 12-6
partitioned table pending area, 11-5
adding partitions, 13-11 performance
converting to non-partitioned, 13-18 location of datafiles and, 10-4
splitting partition, 13-17 tuning archiving, 7-20
partitioned view performance tables
converting to partitioned table, 13-18 dynamic performance tables, 4-9
passwords physical structure of a database, 1-19
authentication file for, 1-9 PL/SQL program units
changing for roles, 24-8 dropped tables and, 14-12
initial for SYS and SYSTEM, 1-5 replaced views and, 15-9
password file, 1-12 planning
database creation, 2-2 creating
relational design, 1-19 roles, 24-4
the database, 1-18 rollback segments, 21-7
precedence of storage parameters, 12-11 sequences, 15-10
predefined roles, 1-6 synonyms, 15-12
prerequisites tables, 14-9
for creating a database, 2-3 tablespaces, 9-4
PRIMARY KEY constraint users, 23-11
disabling, 20-19 views, 15-2
dropping associated indexes, 16-15 database administrator, 1-4
enabling, 20-19 disabling automatic archiving, 7-9
enabling on creation, 16-8 dropping
foreign key references when dropped, 20-20 clusters, 17-10
indexes associated with, 16-8 indexes, 16-15
storage of associated indexes, 16-8 online redo log members, 6-15
private redo log groups, 6-14
rollback segments, 21-8 roles, 24-9
taking offline, 21-12 rollback segments, 21-14
synonyms, 15-11 sequences, 15-11
privileges, 24-2, 24-3 synonyms, 15-12
adding datafiles to a tablespace, 10-5 tables, 14-12
adding redo log groups, 6-11 views, 15-9
altering dropping profiles, 23-21
default storage parameters, 9-8 enabling and disabling resource limits, 23-21
dispatcher privileges, 4-7 enabling and disabling triggers, 20-12
indexes, 16-13 enabling automatic archiving, 7-8
named user limit, 23-6 for changing session limits, 23-5
passwords, 23-16 forcing a log switch, 6-16
role authentication, 24-6 granting
rollback segments, 21-9 about, 24-9
sequences, 15-10 object privileges, 24-10
tables, 14-10 required privileges, 24-10
users, 23-15 system privileges, 24-9
analyzing objects, 20-3 grouping with roles, 24-4
application developers and, 22-9 individual privilege names, 24-2
audit object, 25-11 job queues and, 8-4
auditing system, 25-10 listing grants, 24-20
auditing use of, 25-9 manually archiving, 7-10
bringing datafiles offline and online, 10-8 object, 24-3
bringing tablespaces online, 9-10 on selected columns, 24-13
cascading revokes, 24-14 operating system
cluster creation, 17-6 required for database administrator, 1-4
coalescing tablespaces, 9-9 policies for managing, 22-5
column, 24-11 recompiling packages, 20-25
CREATE SCHEMA command, 20-2 recompiling procedures, 20-25
recompiling views, 20-25 public
renaming synonyms, 15-11
datafiles of a tablespace, 10-9 public rollback segments
datafiles of several tablespaces, 10-10 making available for use, 21-10
objects, 20-2 taking offline, 21-12
redo log members, 6-12 PUBLIC user group
replacing views, 15-8 granting and revoking privileges to, 24-15
RESTRICTED SESSION system privilege, 3-4, 3-8 PUBLIC_DEFAULT profile
dropping profiles and, 23-21
revoking, 24-12 dropping profiles and, 23-21
ADMIN OPTION, 24-12 using, 23-18
GRANT OPTION, 24-13
object privileges, 24-14
system privileges, 24-12
Q
revoking object, 24-12 query server process
revoking object privileges, 24-12 about, 4-13
setting resource costs, 23-20 quotas
system, 24-2 listing, 23-22
taking tablespaces offline, 9-10 revoking from users, 23-14
truncating, 20-10 setting to zero, 23-14
procedures tablespace, 23-13
recompiling, 20-25 tablespace quotas, 9-3
processes, 4-1 temporary segments and, 23-14
SNP background processes, 8-2 unlimited, 23-14
PROCESSES parameter viewing, 23-24
setting before database creation, 2-12
profiles, 23-17 R
altering, 23-19
assigning to users, 23-18 read-only database open, 3-8
composite limit, 23-19 read-only tablespaces
creating, 23-18 altering to writable, 9-14
default, 23-18 creating, 9-12
disabling resource limits, 23-21 datafiles, 10-8
dropping, 23-21 on a WORM device, 9-14
enabling resource limits, 23-21 REBUILD PARTITION clause
listing, 23-22 ALTER INDEX command, 13-11, 13-20
managing, 23-17 rebuild_freelists procedure, 19-6, 19-10
privileges for dropping, 23-21 recompiling
privileges to alter, 23-19 automatically, 20-24
privileges to set resource costs, 23-20 functions, 20-25
PUBLIC_DEFAULT, 23-18 packages, 20-25
setting a limit to null, 23-19 procedures, 20-25
viewing, 23-24 views, 20-25
program global area (PGA) recovery
effect of MAX_ENABLED_ROLES on, 24-8 creating new control files, 5-5
startup with automatic, 3-5 multiplexing, 6-5
redo entries groups, 6-6
content of, 6-2 if some members inaccessible, 6-7
See redo records online, 6-2
redo log buffers recovery use of, 6-2
writing of, 6-2 requirement of two, 6-3
redo log files threads of, 6-2
active (current), 6-4 online redo log, 6-1
archived planning the, 6-5 to 6-10
advantages of, 7-2 privileges
contents of, 7-2 adding groups and members, 6-11
log switches and, 6-5 redo entries, 6-2
archived redo log files, 7-7 requirements, 6-7
archived redo logs, 7-4 verifying blocks, 6-16
available for use, 6-3 viewing, 2-8
circular use of, 6-3 redo records, 6-2
clearing, 6-7, 6-17 REFERENCES privilege
restrictions, 6-17 CASCADE CONSTRAINTS option, 24-13
contents of, 6-2 revoking, 24-13
creating referential integrity constraints
groups and members, 6-11 dropping table partition with, 13-13
creating members, 6-11 truncating table partition with, 13-16
distributed transaction information in, 6-3 relational design
groups, 6-6 planning, 1-19
creating, 6-11 releases
decreasing number, 6-10 checking the release number, 1-22
dropping, 6-14 identifying for Oracle8i, 1-21
LOG_FILES initialization parameter, 6-10 maintenance release number, 1-21
members, 6-6 patch release number, 1-22
threads, 6-2 port-specific release number, 1-22
how many in redo log, 6-9 versions of other Oracle software, 1-22
inactive, 6-4 relocating
legal and illegal configurations, 6-7 control files, 5-5
LGWR and the, 6-3 datafiles, 10-9, 10-10
log sequence numbers of, 6-5 remote connections, 1-16
log switches, 6-5 connecting as SYSOPER/SYSDBA, 1-14
members, 6-6 password files, 1-9
creating, 6-11 REMOTE_LOGIN_PASSWORDFILE parameter, 1-11
dropping, 6-14
maximum number of, 6-10 REMOTE_OS_AUTHENT parameter
mirrored setting, 23-10
log switches and, 6-7 REMOTE_OS_ROLES parameter
multiplexed setting, 24-8, 24-19
diagrammed, 6-6 RENAME command, 20-2
if all inaccessible, 6-7 renaming
control files, 5-5
datafiles, 10-9, 10-10
datafiles with a single table, 10-9
online redo log members, 6-12
schema objects, 20-2
REOPEN option
   LOG_ARCHIVE_DEST_n initialization parameter, 7-19
replacing
   views, 15-8
resource allocation methods, 11-2
resource consumer groups, 11-2
resource limits
   altering in profiles, 23-19
   assigning with profiles, 23-18
   composite limits and, 23-19
   costs and, 23-20
   creating profiles and, 23-18
   disabling, 23-21
   enabling, 23-21
   privileges to enable and disable, 23-21
   privileges to set costs, 23-20
   profiles, 23-17
   PUBLIC_DEFAULT profile and, 23-18
   service units, 23-19
   setting to null, 23-19
resource plan directives, 11-2
resource plans, 11-2
RESOURCE role, 24-5
RESOURCE_LIMIT parameter
   enabling and disabling limits, 23-21
resources
   profiles, 23-17
responsibilities
   of a database administrator, 1-2
   of database users, 1-3
RESTRICTED SESSION privilege
   instances in restricted mode, 3-8
   restricted mode and, 3-4
   session limits and, 23-3
restricting access to database
   starting an instance, 3-4
REVOKE command, 24-12
   when takes effect, 24-15
revoking
   privileges and roles
      SYSOPER/DBA privileges, 1-13
revoking privileges and roles
   on selected columns, 24-13
   REVOKE command, 24-12
   shortcuts for object privileges, 24-3
   when using operating-system roles, 24-18
roles
   ADMIN OPTION and, 24-10
   application developers and, 22-10
   authorization, 24-6
   backward compatibility, 24-5
   changing authorization for, 24-8
   changing passwords, 24-8
   CONNECT role, 24-5
   database authorization, 24-7
   DBA role, 1-6, 24-5
   default, 23-16
   dropping, 24-8
   EXP_FULL_DATABASE, 24-5
   GRANT command, 24-19
   GRANT OPTION and, 24-11
   granting
      about, 24-9
   grouping with roles, 24-4
   IMP_FULL_DATABASE, 24-5
   listing, 24-22
   listing grants, 24-21
   listing privileges and roles in, 24-23
   management using the operating system, 24-16
   managing, 24-4
   multi-byte characters
      in names, 24-5
   multi-byte characters in passwords, 24-7
   multi-threaded server and, 24-7
   operating system granting of, 24-17, 24-19
   operating-system authorization, 24-7
   OS management and the multi-threaded server, 24-19
   passwords for enabling, 24-7
   predefined, 1-6, 24-5
   privileges
      changing authorization method, 24-6
      changing passwords, 24-6
      for creating, 24-4
for dropping, 24-9
granting system privileges or roles, 24-9
RESOURCE role, 24-5
REVOKE command, 24-19
revoking, 24-12
revoking ADMIN OPTION, 24-12
security and, 22-6
SET ROLE command, 24-19
unique names for, 24-4
without authorization, 24-8
rollback segments
   acquiring automatically, 21-3, 21-11
   acquiring on startup, 2-12
   allocating, 2-14
   altering public, 21-9
   altering storage parameters, 21-9
   AVAILABLE, 21-11
   bringing
      online, 21-11
      online automatically, 21-11
      online when new, 21-8
      PARTLY AVAILABLE segment online, 21-11
   checking if offline, 21-12
   choosing how many, 2-14
   choosing size for, 2-14
   creating, 21-8
   creating after database creation, 21-3
   creating public and private, 21-3
   decreasing size of, 21-10
   deferred, 21-16
   displaying
      all deferred rollback segments, 21-16
      deferred rollback segments, 21-16
      information on, 21-14
      PENDING OFFLINE segments, 21-15
   displaying names of all, 21-15
   dropping, 21-13
   equally sized extents, 21-5
   explicitly assigning transactions to, 21-13
   guidelines for managing, 21-2
   initial, 21-2
   invalid status, 21-14
   listing extents in, 20-32
   location of, 21-7
   making available for use, 21-10
   managing, 21-1
   monitoring, 21-6
   OFFLINE, 21-11
   offline rollback segments, 21-10
   offline status, 21-12
   online rollback segments, 21-10
   online status, 21-12
   PARTLY AVAILABLE, 21-11
   PENDING OFFLINE, 21-12
   privileges
      for dropping, 21-14
      required to alter, 21-9
      required to create, 21-7
   setting size of, 21-4
   status for dropping, 21-13
   status or state, 21-11
   storage parameters, 21-8
   storage parameters and, 21-8
   taking offline, 21-12
   taking tablespaces offline and, 9-12
   transactions and, 21-13
   using multiple, 21-2
ROLLBACK_SEGMENTS parameter
   adding rollback segments to, 21-8
   setting before database creation, 2-12
rows
   chaining across blocks, 12-4, 20-8
   violating integrity constraints, 20-15

S

schema objects
   creating multiple objects, 20-2
   default audit options, 25-11
   dependencies between, 20-23
   disabling audit options, 25-12
   enabling audit options on, 25-11
   listing by type, 20-31
   listing information, 20-29
   privileges to access, 24-3
   privileges to rename, 20-2
   renaming, 20-2, 20-3
SCN, 10-14
security
   accessing a database, 22-2
administrator of, 22-2
application developers and, 22-9
auditing policies, 22-18
authentication of users, 22-2
data, 22-3
database security, 22-2
database users and, 22-2
establishing policies, 22-1
general users, 22-4
multi-byte characters
   in role names, 24-5
   in role passwords, 24-7
   in user passwords, 23-12
operating-system security and the database, 22-3
policies for database administrators, 22-7
privilege management policies, 22-5
privileges, 22-2
protecting the audit trail, 25-16
REMOTE_OS_ROLES parameter, 24-19
roles to force security, 22-6
security officer, 1-3
sensitivity, 22-3
segments
   data and index
      default storage parameters, 12-10
   data dictionary, 20-27
   displaying information on, 20-32
   monitoring, 21-15
   rollback, 21-1
   temporary storage parameters, 12-12
sensitivity
   security, 22-3
SEQUENCE_CACHE_ENTRIES parameter, 15-11
sequences
   altering, 15-10
   creating, 15-10
   dropping, 15-11
   initialization parameters, 15-11
   managing, 15-9
   Parallel Server and, 15-10
   privileges for altering, 15-10
   privileges for creating, 15-10
   privileges for dropping, 15-11
server units
   composite limits and, 23-19
servers
   dedicated
      multi-threaded contrasted with, 4-3
   multi-threaded
      dedicated contrasted with, 4-3
session limits, license
   setting initially, 2-13
session monitor, 4-8
session, user
   active, 4-16
   inactive, 4-17
   marked to be terminated, 4-17
   terminating, 4-15
   viewing terminated sessions, 4-17
sessions
   auditing connections and disconnections, 25-8
   limits per instance, 23-2
   listing privilege domain of, 24-22
   number of concurrent sessions, 2-13
   Parallel Server session limits, 2-13
   setting maximum for instance, 23-4
   setting warning limit for instance, 23-4
   viewing current number and high water mark, 23-6
   viewing memory use, 23-25
SET ROLE command
   how password is set, 24-7
   when using operating-system roles, 24-19
SET TRANSACTION command
   USE ROLLBACK SEGMENT option, 21-13
setting archive buffer parameters, 7-22
SGA
   determining buffers in cache, 2-11
shared mode
   rollback segments and, 21-3
shared pool
   ANALYZE command and, 20-8
shared server processes
   changing the minimum number of, 4-6
   privileges to change number of, 4-6
   trace files for, 4-10
shared SQL areas
   ANALYZE command and, 20-8
shortcuts
CONNECT, for auditing, 25-8
object auditing, 25-9
object privileges, 24-3
statement level auditing options, 25-8
Shut Down menu
   Abort Instance option, 3-12
   Immediate option, 3-11
SHUTDOWN command
   ABORT option, 3-12
   IMMEDIATE option, 3-11
   NORMAL option, 3-11
shutting down a database, 3-1
shutting down an instance
   aborting the instance, 3-12
   connecting and, 3-9
   connecting as INTERNAL, 3-10
   example of, 3-11
   immediately, 3-11
   normally, 3-10
size
   datafile, 10-4
   hash clusters, 18-4
   rollback segments, 21-4
skip_corrupt_blocks procedure, 19-5, 19-11
snapshot logs
   storage parameters, 12-10
snapshots
   storage parameters, 12-10
   too old
      OPTIMAL storage parameter and, 21-5
SNP background processes
   about, 8-2
software versions, 1-21
SORT_AREA_SIZE parameter
   index creation and, 16-3
space
   adding to the database, 9-4
   used by indexes, 16-14
space management
   PCTFREE, 12-2
   PCTUSED, 12-4
specifying destinations
   for archived redo logs, 7-11
specifying multiple ARCH processes, 7-20
SPLIT PARTITION clause
   ALTER INDEX command, 13-18
   ALTER TABLE command, 13-11, 13-17
SQL statements
   disabling audit options, 25-12
   enabling audit options on, 25-10
SQL trace facility
   when to enable, 4-12
SQL*Loader
   about, 1-17
   indexes and, 16-3
SQL*Plus commands
   See commands, SQL*Plus
SQL_TRACE parameter
   trace files and, 4-10
STALE status
   of redo log members, 6-15
standby transmission mode
   definition of, 7-15
   Net8 and, 7-15
   RFS processes and, 7-15
Start Up Instance dialog box, 3-2
starting a database
   about, 3-1
   general procedures, 3-2
starting an instance
   at database creation, 3-3
   automatically at system startup, 3-6
   database closed and mounted, 3-4
   database name conflicts and, 2-9
   dispatcher processes and, 4-5
   enabling automatic archiving, 7-9
   examples of, 3-6
   exclusive mode, 3-6
   forcing, 3-5
   general procedures, 3-2
   mounting and opening the database, 3-4
   multi-threaded server and, 3-2
   normally, 3-4
   parallel mode, 3-6
   problems encountered while, 3-5
   recovery and, 3-5
   remote instance startup, 3-6
   restricted mode, 3-4
   with multi-threaded servers, 4-4
   without mounting a database, 3-3
STARTUP command, 3-2
   FORCE option, 3-5
   MOUNT option, 3-4
   NOMOUNT option, 2-6, 3-3
   RECOVER option, 3-5
   specifying database name, 3-3
statistics
   updating, 20-4
Step, 1-18, 1-20
storage
   altering tablespaces, 9-8
   quotas and, 23-14
   revoking tablespaces and, 23-14
   unlimited quotas, 23-14
storage parameters
   applicable objects, 12-7
   changing settings, 12-11
   data dictionary, 20-26
   default, 12-7
   for the data dictionary, 20-27
   INITIAL, 12-7, 14-11
   INITRANS, 12-9, 14-11
   MAXEXTENTS, 12-8
   MAXTRANS, 12-9, 14-11
   MINEXTENTS, 12-8, 14-11
   NEXT, 12-8
   OPTIMAL (in rollback segments), 21-5
   PCTFREE, 14-10
   PCTINCREASE, 12-8
   PCTUSED, 14-10
   precedence of, 12-11
   rollback segments, 21-8
   SYSTEM rollback segment, 21-9
   temporary segments, 12-12
stored procedures
   privileges for recompiling, 20-25
   using privileges granted to PUBLIC, 24-15
stream
   tape drive, 7-23
SWITCH LOGFILE option
   ALTER SYSTEM command, 6-16
synonyms
   creating, 15-12
   displaying dependencies of, 20-32
   dropped tables and, 14-12
   dropping, 15-12
   managing, 15-11
   private, 15-11
   privileges for creating, 15-12
   privileges for dropping, 15-12
   public, 15-11
SYS
   initial password, 1-5
   objects owned, 1-5
   policies for protecting, 22-7
   privileges, 1-5
   user, 1-5
SYS.AUD$
   audit trail, 25-2
   creating and deleting, 25-4
SYSOPER/SYSDBA privileges
   adding users to the password file, 1-12
   connecting with, 1-14
   determining who has privileges, 1-13
   granting and revoking, 1-13
SYSTEM
   initial password, 1-5
   objects owned, 1-5
   policies for protecting, 22-7
   user, 1-5
System Change Number (SCN)
   checking for a datafile, 10-14
system change number (SCN)
   when determined, 6-2
System Global Area, 2-11
System Global Area (SGA), 2-11
system privileges, 24-2
SYSTEM rollback segment
   altering storage parameters of, 21-9
SYSTEM tablespace
   cannot drop, 9-15
   initial rollback segment, 21-2
   non-data dictionary tables and, 14-3
   restrictions on taking offline, 10-7
   when created, 9-4

T

table partition
   containing global index, 13-12
creating, 13-9
dropping, 13-12
exchanging, 13-18
splitting, 13-17
truncating, 13-15
tables
   adding partitions, 13-11
   allocating extents, 14-11
   altering, 14-10, 14-11
   analyzing statistics, 20-3
   clustered, 17-2
   clustered tables
      altering, 17-9
      creating, 17-6
      dropping, 17-10
      managing, 17-1
      privileges to drop, 17-10
   creating, 14-9
   designing before creating, 14-2
   dropping, 14-12
   estimating size, 14-5
   guidelines for managing, 14-1, 14-6
   hash clustered
      creating, 18-4
      managing, 18-1
   increasing column length, 14-10
   indexes and, 16-2
   key-preserved, 15-5
   limiting indexes on, 16-3
   location, 14-10
   location of, 14-3
   managing, 14-1
   parallelizing creation of, 14-4
   privileges for creation, 14-9
   privileges for dropping, 14-12
   privileges to alter, 14-10
   schema of clustered, 17-7
   separating from indexes, 14-6
   specifying PCTFREE for, 12-4
   specifying tablespace, 14-3, 14-10
   storage parameters, 12-10
   SYSTEM tablespace and, 14-3
   temporary space and, 14-6
   transaction parameters, 14-3
   truncating, 20-9
   UNRECOVERABLE, 14-4
   validating structure, 20-8
tablespace set, 9-20
tablespaces
   adding datafiles, 10-5
   altering availability, 9-10
   altering storage settings, 9-8
   assigning defaults for users, 23-12
   assigning user quotas, 9-3
   bringing online, 9-10
   checking default storage parameters, 9-31
   coalescing, 9-8
   creating, 9-3
   creating additional, 9-4
   default quota, 23-13
   default storage parameters for, 12-10
   default temporary, 23-13
   dropping
      about, 9-14
      required privileges, 9-15
   guidelines for managing, 9-2
   listing files of, 9-31
   listing free space in, 9-32
   location, 10-4
   managing, 10-1
   monitoring, 10-13
   privileges for creating, 9-4
   privileges to take offline, 9-10
   quotas
      assigning, 9-3
   quotas for users, 23-13
   read-only, 9-12
   revoking from users, 23-14
   rollback segments required, 9-5
   setting default storage parameters for, 9-3
   SYSTEM tablespace, 9-4
   taking offline normal, 9-10
   taking offline temporarily, 9-11
   temporary, 23-13
   unlimited quotas, 23-14
   using multiple, 9-2
   viewing quotas, 23-24
   writable, 9-14
taking offline
   tablespaces, 9-10
tape drives
   streaming for archiving, 7-23
temporary segments
   index creation and, 16-3
temporary space
   allocating, 14-6
terminating
   a user session, 4-15
terminating sessions
   active sessions, 4-16
   identifying sessions, 4-16
   inactive session, example, 4-17
   inactive sessions, 4-17
test
   security for databases, 22-9
threads
   online redo log, 6-2
time window
   moving, in historical table, 13-20
tip
   object privilege shortcut, 24-3
   shortcuts for auditing objects, 25-9
   statement auditing shortcut, 25-8
TNSNAMES.ORA file, 7-12
To, 10-5, 21-12
trace files
   job failures and, 8-10
   location of, 4-11
   log writer, 4-11
   log writer process and, 6-6
   size of, 4-11
   using, 4-10, 4-11
   when written, 4-12
transaction entries
   guidelines for storage, 12-9
transactions
   assigning to specific rollback segment, 21-13
   committing
      writing redo log buffers and, 6-2
   rollback segments and, 21-13
TRANSACTIONS parameter
   using, 21-2
TRANSACTIONS_PER_ROLLBACK_SEGMENT parameter
   using, 21-2
transmitting archived redo logs, 7-14
   in normal transmission mode, 7-14
   in standby transmission mode, 7-14
transportable tablespaces, 9-18
triggers
   auditing, 25-20
   disabling, 20-12
   dropped tables and, 14-12
   enabling, 20-12
   examples, 25-20
   privileges for enabling and disabling, 20-12
TRUNCATE command, 20-9
   DROP STORAGE option, 20-11
   REUSE STORAGE option, 20-11
TRUNCATE PARTITION clause
   ALTER TABLE command, 13-15
truncating
   clusters, 20-9
   partitioned objects, 13-15
   privileges for, 20-10
   tables, 20-9
tuning
   archiving, 7-20
   databases, 1-20
   initially, 2-14

U

UNIQUE key constraints
   disabling, 20-19
   dropping associated indexes, 16-15
   enabling, 20-19
   enabling on creation, 16-8
   foreign key references when dropped, 20-20
   indexes associated with, 16-8
   storage of associated indexes, 16-8
UNLIMITED TABLESPACE privilege, 23-14
unrecoverable
   tables, 14-4
UNRECOVERABLE DATAFILE option
   ALTER DATABASE command, 6-17
unrecoverable indexes
   indexes, 16-5
UPDATE privilege
   revoking, 24-13
Use, 10-10, 23-10
USER_DUMP_DEST parameter, 4-11
USER_EXTENTS, 10-13
USER_FREE, 9-31, 10-13
USER_INDEXES view
   filling with data, 20-5
USER_SEGMENTS, 9-31, 10-13
USER_TAB_COLUMNS view
   filling with data, 20-5
USER_TABLES view
   filling with data, 20-5
USER_TABLESPACES, 9-31, 10-13
usernames
   SYS and SYSTEM, 1-5
users
   altering, 23-15
   assigning profiles to, 23-18
   assigning tablespace quotas, 9-3
   assigning unlimited quotas for, 23-14
   authentication
      about, 22-2, 23-7
      database authentication, 23-8
   changing default roles, 23-16
   composite limits and, 23-19
   default tablespaces, 23-12
   dropping, 23-16
   dropping profiles and, 23-21
   dropping roles and, 24-8
   end-user security policies, 22-5
   enrolling, 1-20
   identification, 23-7
   in a newly created database, 2-14
   limiting number of, 2-13
   listing, 23-22
   listing privileges granted to, 24-20
   listing roles granted to, 24-21
   managing, 23-11
   multi-byte characters
      in passwords, 23-12
   objects after dropping, 23-16
   password security, 22-4
   policies for managing privileges, 22-5
   privileges for changing passwords, 23-15
   privileges for creating, 23-11
   privileges for dropping, 23-17
   PUBLIC group, 24-15
   security and, 22-2
   security for general users, 22-4
   session, terminating, 4-17
   specifying user names, 23-12
   tablespace quotas, 23-13
   unique user names, 2-13, 23-5
   viewing information on, 23-23
   viewing memory use, 23-25
   viewing tablespace quotas, 23-24
utilities
   Export, 1-17
   for the database administrator, 1-17
   Import, 1-17
   SQL*Loader, 1-17
UTLCHAIN.SQL, 20-8
UTLLOCKT.SQL script, 4-8

V

V$ARCHIVE view, 7-23
V$ARCHIVE_DEST view
   obtaining destination status, 7-14
V$DATABASE view, 7-24
V$DATAFILE, 9-31, 10-13
V$DBFILE view, 2-8
V$DISPATCHER view
   controlling dispatcher process load, 4-7
V$LICENSE view, 23-6
V$LOG view, 7-23
   displaying archiving status, 7-23
   online redo log, 6-18
   viewing redo data with, 6-18
V$LOGFILE view, 2-8
   logfile status, 6-15
   viewing redo data with, 6-18
V$LOGMNR_CONTENTS view, 7-31
   using to analyze archived redo logs, 7-25
V$PWFILE_USERS view, 1-13
V$QUEUE view
   controlling dispatcher process load, 4-7
V$ROLLNAME
   finding PENDING OFFLINE segments, 21-15
V$ROLLSTAT
finding PENDING OFFLINE segments, 21-15
V$SESSION, 8-14
V$SESSION view, 4-17
V$THREAD view, 6-18
   viewing redo data with, 6-18
valid destination state
   for archived redo logs, 7-13
VALIDATE STRUCTURE option, 20-8
VARCHAR2 datatype, 12-17
   space use of, 12-17
verifying blocks
   redo log files, 6-16
versions, 1-21
   of other Oracle software, 1-22
view
   partitioned
      converting to partitioned table, 13-18
views
   creating, 15-2
   creating with errors, 15-4
   displaying dependencies of, 20-32
   dropped tables and, 14-12
   dropping, 15-9
   FOR UPDATE clause and, 15-3
   managing, 15-1, 15-9
   ORDER BY clause and, 15-3
   privileges, 15-2
   privileges for dropping, 15-9
   privileges for recompiling, 20-25
   privileges to replace, 15-8
   recompiling, 20-25
   replacing, 15-8
   V$ARCHIVE, 7-23
   V$ARCHIVE_DEST, 7-14
   V$DATABASE, 7-24
   V$LOG, 6-18, 7-23
   V$LOGFILE, 6-15, 6-18
   V$LOGMNR_CONTENTS, 7-25, 7-31
   V$THREAD, 6-18
   wildcards in, 15-4
   WITH CHECK OPTION, 15-3
violating integrity constraints, 20-15

W

warning
   changing data dictionary storage parameters, 20-27
   creating a rollback segment, 2-12
   disabling audit options, 25-12
   enabling auditing, 25-10
   setting the CONTROL_FILES parameter, 2-10
   use mirrored control files, 5-2
wildcards
   in views, 15-4
WORM devices
   and read-only tablespaces, 9-14
writable tablespaces, 9-14