Oracle DBA
FOR
ORACLE DBAs
By
Anup Kumar Srivastav
All rights reserved. No part of this publication may be reproduced, stored in retrieval system or
transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or
otherwise, without the prior permission of the author.
Powered by
Pothi.com
http://pothi.com
ABOUT ME
I am Anup Kumar Srivastav. I have over 7 years of experience in the IT industry: 3 years as an
ERP techno-functional consultant, 1.2 years as an Oracle Core DBA, and 3 years 8 months as an
Oracle Applications DBA, with major clients like Container Corporation of India, FLEX
Industries Ltd, Shiv-Vani Oil & Exploration Services Ltd, Jhunjulwala Vanaspati Ltd and Lord
Distillery.
I have broad experience in the implementation, management and upgrading of Oracle Database
and Oracle Applications (EBS) on Windows, Linux and Sun Solaris platforms.
At present I am working as a Senior System Engineer with HCL COMNET LIMITED, Noida.
Thanking You.
Yours Truly,
Anup Kumar Srivastava
Contents

1. Oracle Architecture
   How Oracle works
   Oracle Architecture (Diagram)
   Basics of Oracle Architecture
   Starting up a database
   Shutting down the database
4. Managing Tablespace
   How to Manage Tablespace
   Queries Related to Tablespace
5. Managing Database
   Managing Data Files
   How to Drop a Data File
   Managing Control Files
   Managing Redo Log Files
   Managing Temporary Tablespaces
   Managing Undo Tablespaces
10. Standby Database
    Creating a standby database manually
    Creating a standby database using RMAN
    Standby Database Maintenance
    Database Switchover/Failover
    Standby Diagnosis Queries for the Primary Node
    Standby Diagnosis Queries for the Standby Node
12. Upgrade Projects
    Project A: Upgrade 8.1.x (x > 7) to 8.1.7
    Project B: Upgrade 8.1.7 to 9.2.0
    Project C: Upgrade 8.1.7 to 10.2.0
    Project D: Upgrade 9.2.0 to 10.2.0
Oracle Architecture
How Oracle works
An instance is currently running on the computer that is executing Oracle; this computer is called
the database server. A computer running an application (the local machine) runs the application
in a user process. The client application attempts to establish a connection to the server using the
proper Net8 driver.
When the Oracle server detects the connection request from the client, it checks the client's
authentication. If authentication passes, the Oracle server creates a (dedicated) server process on
behalf of the user process. Suppose the user executes a SQL statement and commits the
transaction; for example, the user changes a name in a row of a table. The server process receives
the statement and checks the shared pool for a shared SQL area that contains an identical SQL
statement. If a shared SQL area is found, the server process checks the user's access privileges to
the requested data and the previously existing shared SQL area is used to process the statement;
if not, a new shared SQL area is allocated for the statement so that it can be parsed and
processed. The server process retrieves any necessary data values from the actual datafiles or
from those already stored in the system global area. The server process then modifies data blocks
in the system global area. The DBWn process writes modified blocks permanently to disk when
doing so is efficient. Because the transaction is committed, the LGWR process immediately
records the transaction in the online redo log file. If the transaction is successful, the server
process sends a message across the network to the application; if it is not successful, an
appropriate error message is transmitted. Throughout this entire procedure, the other background
processes run, watching for conditions that require intervention.
Oracle Architecture Diagram
Buffer Cache
The buffer cache stores copies of data blocks retrieved from datafiles. That is, when a user
retrieves data from the database, the data is cached in the buffer cache. Its size can be set via
the DB_CACHE_SIZE parameter in the init.ora initialization parameter file.
Shared Pool
The shared pool is broken into two smaller memory areas: the library cache and the dictionary
cache. The library cache stores information about the most commonly used SQL and PL/SQL
statements and is managed by a least recently used (LRU) algorithm; it also enables the sharing
of those statements among users. The dictionary cache, on the other hand, stores information
about object definitions in the database, such as columns, tables, indexes, users, privileges, etc.
The shared pool size can be set via the SHARED_POOL_SIZE parameter in the init.ora
initialization parameter file.
Redo Log Buffer
Each DML statement (INSERT, UPDATE, DELETE) executed by a user generates a redo entry.
What is a redo entry? It is information about all data changes made by users. A redo entry is
stored in the redo log buffer before it is written to the redo log files. To set the size of the redo
log buffer, use the LOG_BUFFER parameter in the init.ora initialization parameter file.
Large Pool
The large pool is an optional area of memory in the SGA. It relieves the burden placed on the
shared pool and is also used for I/O processes. The large pool size can be set by the
LARGE_POOL_SIZE parameter in the init.ora initialization parameter file.
Java Pool
As its name suggests, the Java pool services the parsing of Java commands. Its size can be set by
the JAVA_POOL_SIZE parameter in the init.ora initialization parameter file.
Oracle Background Processes
Oracle background processes are the processes behind the scenes that work together with these
memory structures.
DBWn
The database writer (DBWn) process writes data from the buffer cache into the datafiles.
Historically, the database writer was named DBWR, but since some Oracle versions allow more
than one database writer, the name was changed to DBWn, where n is a number from 0 to 9.
LGWR
The log writer (LGWR) process is similar to DBWn: it writes the redo entries from the redo log
buffer into the redo log files.
CKPT
The checkpoint (CKPT) process signals DBWn to write data from the buffer cache into the
datafiles. It also updates the datafile and control file headers when a log file switch occurs.
SMON
The system monitor (SMON) process recovers from a system crash or instance failure by
applying the entries in the redo log files to the datafiles.
PMON
The process monitor (PMON) process cleans up after failed processes by rolling back their
transactions and releasing other resources.
Database
A database can be broken up into two main structures: logical structures and physical structures.
Logical Structures
The logical units are the tablespace, segment, extent, and data block. The figure below
illustrates the relationships between these units.
Tablespace
A tablespace is a logical grouping of database objects. A database must have one or more
tablespaces. In Figure 3 we have three tablespaces: the SYSTEM tablespace, Tablespace 1,
and Tablespace 2. A tablespace is composed of one or more datafiles.
Segment
A tablespace is further broken into segments. A segment stores a single type of object. That is,
every table in the database is stored in its own segment (a data segment) and every index is
stored in its own segment (an index segment). The other segment types are temporary segments
and rollback segments.
Extent
A segment is further broken into extents. An extent consists of one or more data blocks. When a
database object grows, a new extent is allocated. Unlike a tablespace or a segment, an extent
cannot be named.
Data Block
A data block is the smallest unit of storage in an Oracle database. The data block size is a
specific number of bytes; every block within a tablespace has the same size.
Physical Structures
The physical structures are structures of an Oracle database (in this case the disk files) that are
not directly manipulated by users. The physical structure consists of datafiles, redo log files, and
control files.
Datafiles
A datafile is a file that corresponds to a tablespace. One datafile can belong to only one
tablespace, but one tablespace can have more than one datafile.
Redo Log Files
Redo log files store the redo entries generated by DML statements. They can be used for
recovery.
Control Files
Control files store information about the physical structure of the database, such as datafile
sizes and locations, redo log file locations, etc.
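Since the control files describe the physical structure, they are also the easiest way to inspect
it. A small sketch using the standard dynamic performance views, run as SYSDBA:

```sql
-- Location of the control files themselves
SELECT name FROM v$controlfile;

-- Datafiles and redo log members the control files record
SELECT name   FROM v$datafile;
SELECT member FROM v$logfile;
```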
Starting up a database
This article explains the procedures involved in starting an Oracle instance and database.
First Stage: Starting an Oracle Instance
When Oracle starts an instance, it reads the initialization parameter file to determine the values of
initialization parameters. Then, it allocates an SGA, which is a shared area of memory used for
database information, and creates background processes. At this point, no database is associated
with these memory structures and processes.
Second Stage: Mount the Database
To mount the database, the instance finds the database control files and opens them. Control files
are specified in the CONTROL_FILES initialization parameter in the parameter file used to start
the instance. Oracle then reads the control files to get the names of the database's datafiles and
redo log files.
At this point, the database is still closed and is accessible only to the database administrator. The
database administrator can keep the database closed while completing specific maintenance
operations. However, the database is not yet available for normal operations.
Final Stage: Database opens for normal operation
Opening a mounted database makes it available for normal database operations. Usually, a
database administrator opens the database to make it available for general use.
When you open the database, Oracle opens the online datafiles and online redo log files. If a
tablespace was offline when the database was previously shut down, the tablespace and its
corresponding datafiles will still be offline when you reopen the database.
If any of the datafiles or redo log files are not present when you attempt to open the database,
then Oracle returns an error. You must perform recovery on a backup of any damaged or missing
files before you can open the database.
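The three stages above map directly to SQL*Plus commands; a minimal sketch:

```sql
-- Stage 1: start the instance only (parameter file read, SGA allocated)
STARTUP NOMOUNT;

-- Stage 2: mount the database (control files opened and read)
ALTER DATABASE MOUNT;

-- Stage 3: open the database for normal operations
ALTER DATABASE OPEN;
```

A plain STARTUP with no options performs all three stages in a single step.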
Open a Database in Read-Only Mode
You can open any database in read-only mode to prevent its data from being modified by user
transactions. Read-only mode restricts database access to read-only transactions, which cannot
write to the datafiles or to the redo log files.
Disk writes to other files, such as control files, operating system audit trails, trace files, and alert
files, can continue in read-only mode. Temporary tablespaces for sort operations are not affected
by the database being open in read-only mode. However, you cannot take permanent tablespaces
offline while a database is open in read-only mode. Also, job queues are not available in
read-only mode.
Read-only mode does not restrict database recovery or operations that change the database's state
without generating redo data. For example, in read-only mode:
Datafiles can be taken offline and online
Offline datafiles and tablespaces can be recovered
The control file remains available for updates about the state of the database
One useful application of read-only mode is that standby databases can function as temporary
reporting databases.
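Opening in read-only mode is done from the MOUNT state; a sketch:

```sql
STARTUP MOUNT;
ALTER DATABASE OPEN READ ONLY;
```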
Database Shutdown
The three steps to shutting down a database and its associated instance are:
Close the database.
Unmount the database.
Shut down the instance.
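In SQL*Plus, all three steps are driven by a single SHUTDOWN command; the mode controls how
sessions and active transactions are handled. A sketch of the common modes:

```sql
SHUTDOWN NORMAL;         -- waits for all users to disconnect
SHUTDOWN IMMEDIATE;      -- rolls back active transactions, disconnects users
SHUTDOWN TRANSACTIONAL;  -- waits for active transactions to complete
SHUTDOWN ABORT;          -- terminates the instance; recovery runs at next startup
```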
Close a Database
When you close a database, Oracle writes all database data and recovery data in the SGA to the
datafiles and redo log files, respectively. Next, Oracle closes all online datafiles and online redo
log files. At this point, the database is closed and inaccessible for normal operations. The control
files remain open after a database is closed but still mounted.
Close the Database by Terminating the Instance
In rare emergency situations, you can terminate the instance of an open database to close and
completely shut down the database instantaneously. This process is fast, because the operation of
writing all data in the buffers of the SGA to the datafiles and redo log files is skipped. The
subsequent reopening of the database requires recovery, which Oracle performs automatically.
Unmount a Database
After the database is closed, Oracle unmounts the database to disassociate it from the
instance. At this point, the instance remains in the memory of your computer.
After a database is unmounted, Oracle closes its control files.
Shut Down an Instance
The final step in database shutdown is shutting down the instance. When you shut down an
instance, the SGA is removed from memory and the background processes are terminated.
Abnormal Instance Shutdown
In unusual circumstances, shutdown of an instance might not occur cleanly; all memory
structures might not be removed from memory or one of the background processes might not be
terminated. When remnants of a previous instance exist, a subsequent instance startup most likely
will fail. In such situations, the database administrator can force the new instance to start up by
first removing the remnants of the previous instance and then starting a new instance, or by
issuing a SHUTDOWN ABORT statement in Enterprise Manager.
Managing an Oracle Instance
When the Oracle engine starts an instance, it reads the initialization parameter file to determine
the values of initialization parameters. Then it allocates an SGA and creates background
processes. At this point, no database is associated with these memory structures and processes.
Types of initialization file:
Static (PFILE)
Text file
Modified with an OS editor
Modifications made manually
Persistent (SPFILE)
Binary file
Cannot be modified directly with an editor
Maintained by the server
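The two file types can be converted into each other from SQL*Plus; the path below is only an
example for illustration:

```sql
-- Build a binary SPFILE from an existing text PFILE (example path)
CREATE SPFILE FROM PFILE='/oracle/product/10.2.0/db_1/dbs/initORCL.ora';

-- SPFILE values are changed through ALTER SYSTEM, never with an OS editor
ALTER SYSTEM SET shared_pool_size = 128M SCOPE = SPFILE;
```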
Before doing any work, a user must connect to an instance. When a user logs on to the Oracle
server, the Oracle engine creates a process called a server process. The server process
communicates with the Oracle instance on behalf of the user process.
Each background process is useful for a specific purpose and its role is well defined.
Background processes are invoked automatically when the instance is started.
Database Writer (DBWn)
Process Name: DBW0 through DBW9 and DBWa through DBWj
Max Processes: 20
This process writes the dirty buffers of the database buffer cache to the data files. One database
writer process is sufficient for most systems; more can be configured if essential. The
initialisation parameter DB_WRITER_PROCESSES specifies the number of database writer
processes to start.
The DBWn process writes dirty buffers to disk under conditions such as: when a server process
cannot find a free buffer, when a checkpoint occurs, and at regular timeouts.
Log Writer
Process Name: LGWR
Max Processes: 1
The log writer writes synchronously to the redo log groups in a circular fashion. If any damage is
identified with a redo log file, the log writer will log an error in the LGWR trace file and the
system alert log. Sometimes, when additional redo log buffer space is required, LGWR will
even write uncommitted redo log entries to release the held buffers. LGWR can also use group
commits (multiple committed transactions' redo entries written together) to write to the redo logs
when a database is undergoing heavy write operations.
The log writer must always be running for an instance.
System Monitor
Process Name: SMON
Max Processes: 1
This process is responsible for instance recovery, if necessary, at instance startup. SMON also
cleans up temporary segments that are no longer in use. SMON wakes up about every 5 minutes
to perform housekeeping activities. SMON must always be running for an instance.
Process Monitor
Process Name: PMON
Max Processes: 1
This process is responsible for performing recovery if a user process fails. It will rollback
uncommitted transactions. PMON is also responsible for cleaning up the database buffer cache
and freeing resources that were allocated to a process. PMON also registers information about
the instance and dispatcher processes with network listener.
PMON wakes up every 3 seconds to perform housekeeping activities. PMON must always be
running for an instance.
Checkpoint Process
Process Name: CKPT
Max processes: 1
Checkpoint process signals the synchronization of all database files with the checkpoint
information. It ensures data consistency and faster database recovery in case of a crash.
CKPT ensures that all database changes present in the buffer cache at that point are written to the
data files; the actual writing is done by the database writer process. The datafile headers and the
control files are updated with the latest SCN (from when the checkpoint occurred); this is done
by the log writer process.
The CKPT process is invoked under conditions such as log file switches. Incremental checkpoints
initiate the writing of recovery information to datafile headers and control files, but the
database writer is not signaled to perform buffer cache flushing here.
Note: The steps below are for 10g Release 1 & 2 (32-bit/64-bit) on Red Hat Enterprise Linux 4.
Note: When you install Linux for Oracle, you should create separate file systems for the Oracle
software and the Oracle database:
2.5 GB of disk space is required for the Oracle software
1.3 GB of disk space is required for a General Purpose database
Here we have created two mount points, /oracle and /database, for the Oracle software and the
database.
Prerequisite Steps:
Check Memory
At least 512 MB of RAM is required.
How to check the size of physical memory?
$ grep MemTotal /proc/meminfo
Check swap space
Swap space should be 1 GB or twice the size of RAM.
How to check the size of swap space?
$ grep SwapTotal /proc/meminfo
If you don't have 1 GB or twice the size of RAM, you must add temporary swap space to your
system by creating a temporary swap file. Below I describe how to add swap space.
How to Add Swap Space?
Log in as root
$ dd if=/dev/zero of=tmpswap bs=1k count=900000
$ chmod 600 tmpswap
$ mkswap tmpswap
$ swapon tmpswap
Check TMP Space
Oracle Universal Installer (OUI) requires up to 400 MB of free space in the /tmp directory.
How to check the space in /tmp?
$ df /tmp
If you don't have enough space in the /tmp filesystem, you must temporarily create a tmp
directory in another filesystem. Below I describe the steps for adding temp space.
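One common way to do this is sketched below; the /u01/tmp location is an assumption, so
substitute any filesystem with at least 400 MB free:

```shell
# Pick a directory on a filesystem with enough free space
# (/u01/tmp is an assumed location -- substitute your own)
ORA_TMP=${ORA_TMP:-/u01/tmp}
mkdir -p "$ORA_TMP"
chmod 1777 "$ORA_TMP"          # same permissions as /tmp
# Point the installer at it for the current session
export TEMP="$ORA_TMP"
export TMPDIR="$ORA_TMP"
```

After the installation, unset TEMP and TMPDIR (or start a new session) so programs go back to
using /tmp.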
Note: Add at least the minimum required values to the /etc/sysctl.conf file, which is read during
the boot process:
kernel.shmmax=2147483648
kernel.sem=250 32000 100 128
fs.file-max=65536
net.ipv4.ip_local_port_range=1024 65000
net.core.rmem_default=1048576
net.core.rmem_max=1048576
net.core.wmem_default=262144
net.core.wmem_max=262144
To make these parameters effective immediately, execute the following:
Step 1 Log in as root
Step 2 # sysctl -p
Installation Steps:
Creating Oracle User Accounts and Groups
Step 1 Log in as root
Step 2 # groupadd dba       # group of users to be granted the SYSDBA system privilege
Step 3 # groupadd oinstall  # group owner of Oracle files
Step 4 # useradd -c "Oracle software owner" -d /home/oracle -g oinstall -G dba oracle
Step 5 # chown oracle:dba /home/oracle /oracle
Step 6 # chown oracle:dba /home/oracle /database
Step 7 # passwd oracle
Start Installation
Starting Oracle Universal Installer
Insert the Oracle CD that contains the image of the software. If you install Oracle 10g from a
CD, mount the CD by running the following commands in another terminal:
$ su - root
# mount /mnt/cdrom
Then run the installer: ./runInstaller
Note: If you get the message "Cannot connect to X11 window server":
Step 1 Log in as root
Step 2 # xhost +
Step 3 # su - oracle
Step 4 $ export DISPLAY=localhost:0.0
Step 5 $ ./runInstaller
Post Installation Steps:
Put the following entries in the oracle user's .bash_profile (note that ORACLE_BASE must be set
beforehand):
export ORACLE_HOME=$ORACLE_BASE/oracle/product/10.2.0/db_1
export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
Note: The steps below are for 10g Release 1 & 2 (32-bit/64-bit) on Solaris 10.
Note: When you install Solaris for Oracle, you should create separate file systems for the Oracle
software and the Oracle database:
2.5 GB of disk space is required for the Oracle software
1.3 GB of disk space is required for a General Purpose database
Here we have created two mount points, /oracle and /database, for the Oracle software and the
database.
Note: No specific operating system patches are required with Solaris 10 OS.
Prerequisite Steps:
Make sure that the following software packages have been installed:
SUNWarc
SUNWbtool
SUNWhea
SUNWlibm
SUNWlibms
SUNWsprot
SUNWtoo
SUNWi1of
SUNWxwfnt
SUNWi1cs
SUNWsprox
SUNWi15cs
We can verify whether the packages are installed by using the following command:
$ pkginfo -i <package name>
Check that the following executable files are present in /usr/ccs/bin:
make
ar
ld
nm
Check swap space
Swap space should be 512 MB or twice the size of RAM. Use the following commands to find
the physical memory and swap space:
$ /usr/sbin/prtconf | grep size
$ /usr/sbin/swap -l
export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
Set up the X window environment.
Log in as root with a CDE (Common Desktop Environment) session:
$ DISPLAY=:0.0
$ export DISPLAY
$ xhost +
$ su - oracle
$ DISPLAY=:0.0
$ export DISPLAY
$ /usr/openwin/bin/xclock
Execute runInstaller
Log in as the oracle user and execute the installer:
$ ./runInstaller
CATALOG.SQL -- creates the views on the data dictionary tables and the dynamic performance
views.
CATPROC.SQL -- establishes the use of PL/SQL functionality and creates many of the
Oracle-supplied PL/SQL packages.
(For an OMF-Enabled Database on Linux/Solaris)
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
(Implement an ASM-Enabled Database on RHEL 4)
Now we are ready to implement an ASM-enabled database on RHEL. We divide the task into the
following sections:
Section A: Prepare/Install the RHEL Machine
Section B: Prepare the ASM Disks
Section C: Configure Oracle Automatic Storage Management *
Section D: Create the ASM Instance Manually
Section E: Create the ASM-Enabled Database
*Important: We can use two methods to configure Oracle Automatic Storage Management
on Linux:
ASM with ASMLib I/O -- the database is created on raw block devices with this method.
ASM with standard Linux I/O -- the database is created on raw character devices with this method.
NOTE: Here we use the ASM with ASMLib I/O method, and we also discuss ASM with standard
Linux I/O in Section C.
SECTION - A First we prepare RHEL4 machine.
Step 1 Install the RHEL 4
Here, we create partition (/dev/sda2) for Oracle Software at the time of OS Installation.
Step 2 Create the oracle user (log in as root and execute the following)
# groupadd oinstall
# groupadd dba
# useradd -d /oracle -g oinstall -G dba -s /bin/ksh oracle
# chown oracle:dba /oracle
# passwd oracle
New Password:
Re-enter new Password:
# raw devices
ram*:root:disk:0660
#raw/*:root:disk:0660
raw/*:oracle:dba:0660
If you do not want to use the above parameters, just set one parameter, INSTANCE_TYPE=ASM;
the ASM instance will start with default values for the other parameters.
Below are five key parameters that you should configure for an ASM instance:
INSTANCE_TYPE
DB_UNIQUE_NAME
ASM_POWER_LIMIT (indicates the maximum speed to be used by this ASM instance during a
disk rebalancing operation; the default is 1 and the range is 1 to 11)
ASM_DISKSTRING (sets the disk locations for Oracle to consider during the disk-discovery
process)
ASM_DISKGROUPS (specifies the names of the disk groups that the ASM instance should
automatically mount at instance startup)
The ASM instance uses the large pool memory buffer (LARGE_POOL_SIZE); you should
allocate at least 8 MB.
$mkdir /oracle/ASM/pfile
Step 3 Create the ASM instance parameter file in the /oracle/ASM/pfile directory:
INSTANCE_TYPE=ASM
DB_UNIQUE_NAME=+ASM
LARGE_POOL_SIZE=16M
BACKGROUND_DUMP_DEST = '/oracle/ASM/bdump'
USER_DUMP_DEST=/oracle/ASM/udump
CORE_DUMP_DEST = '/oracle/ASM/cdump'
ASM_DISKGROUPS='DB_DATA'
ASM_DISKSTRING =/dev/rdsk/*
Let's start by determining whether Oracle can find these four new disks. The view V$ASM_DISK
can be queried from the ASM instance to determine which disks are being used or may
potentially be used as ASM disks.
$ export ORACLE_SID=+ASM
$ sqlplus "/ as sysdba"
SQL> SELECT group_number, disk_number, mount_status, header_status, state, path
     FROM v$asm_disk;
Note:
A value of zero in the GROUP_NUMBER column for all four disks indicates that a disk is
available but hasn't yet been assigned to a disk group.
Using SQL*Plus, the following creates a disk group with normal redundancy and two failure
groups:
SQL> CREATE DISKGROUP DB_DATA NORMAL REDUNDANCY
FAILGROUP controller1 DISK 'c0d1s0'
FAILGROUP controller2 DISK 'c1d1s0';
Diskgroup created.
Step 6 ALTER DISKGROUP ALL MOUNT;
Now your ASM Instance has been created. Restart the ASM Instance.
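To confirm the disk group came up after the restart, query V$ASM_DISKGROUP from the ASM
instance; a sketch:

```sql
-- State and capacity of each mounted disk group
SELECT name, state, type, total_mb, free_mb
FROM v$asm_diskgroup;
```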
SECTION - E Create the (OMF) Database
Step 1 Set your ORACLE_SID
export ORACLE_SID=INDIAN
Step 2 Create a minimal init.ora (in the default location, $ORACLE_HOME/dbs/init<sid>.ora)
control_files = +DB_DATA
undo_management = AUTO
db_name = indian
db_block_size = 8192
sga_max_size = 1073741824
sga_target = 1073741824
db_create_file_dest = +DB_DATA
db_create_online_log_dest_1 = +DB_DATA
Step 3 sqlplus / as sysdba
Step 4 startup nomount
Step 5 Create the database
create database indian
character set WE8ISO8859P1
national character set utf8
undo tablespace undotbs1
default temporary tablespace temp;
Note:
A value of zero in the GROUP_NUMBER column for all four disks indicates that a disk is
available but hasn't yet been assigned to a disk group. Using SQL*Plus, the following creates a
disk group with normal redundancy and two failure groups:
set oracle_sid=+ASM
sqlplus "/ as sysdba"
SQL> CREATE DISKGROUP DB_DATA NORMAL REDUNDANCY FAILGROUP
controller1
DISK 'G:\DISK1', 'H:\DISK2'
FAILGROUP controller2 DISK 'G:\DISK3', 'H:\DISK4';
Diskgroup created.
Step 9 ALTER DISKGROUP ALL MOUNT;
Now your ASM Instance has been created. Restart the ASM Instance.
Step 10 Create an initSID.ora file (example: initTEST.ora) in the %ORACLE_HOME%\database\
directory.
Put the following entries in the initTEST.ora file:
background_dump_dest=<put BDUMP log destination>
core_dump_dest=<put CDUMP log destination>
user_dump_dest=<put UDUMP log destination>
control_files = +DB_DATA
undo_management = AUTO
undo_tablespace = UNDOTBS1
db_name = test
db_block_size = 8192
sga_max_size = 1073741824
sga_target = 1073741824
db_create_file_dest = +DB_DATA #OMF
db_create_online_log_dest_1 = +DB_DATA #OMF
db_create_online_log_dest_2 = +DB_DATA #OMF
#db_recovery_file_dest = +DB_DATA
Managing Tablespace
How to Manage Tablespace
A tablespace is a logical storage unit. We say logical because a tablespace is not visible in the
file system; Oracle stores data physically in datafiles. A tablespace consists of one or more
datafiles.
Types of Tablespace
System tablespace:
Created with the database
Required in all databases
Contains the data dictionary
Non-system tablespaces:
Separate undo, temporary, application data and application index segments
Control the amount of space allocated to users' objects
Enable more flexibility in database administration
How to Create Tablespace?
CREATE TABLESPACE "tablespace name"
DATAFILE <datafile location> SIZE <size of datafile> REUSE
MINIMUM EXTENT (ensures that every used extent size in the tablespace is a multiple of the
integer)
BLOCKSIZE
LOGGING | NOLOGGING (LOGGING: by default the tablespace has all changes written to redo;
NOLOGGING: the tablespace does not have all changes written to redo)
ONLINE | OFFLINE (OFFLINE: the tablespace is unavailable immediately after creation)
PERMANENT | TEMPORARY (PERMANENT: the tablespace can hold permanent objects;
TEMPORARY: the tablespace can hold only temporary objects)
EXTENT MANAGEMENT clause
Example:
CREATE TABLESPACE "USER1"
DATAFILE 'C:\LOCAL\ORADATA\USER_DATA.DBF' SIZE 10M REUSE
BLOCKSIZE 8192
LOGGING
ONLINE
PERMANENT
EXTENT MANAGEMENT LOCAL
How to manage space in Tablespace?
Note: Tablespace allocate space in extent.
Locally managed tablespace:
The extents are managed within the tablespace via bitmaps. In a locally managed tablespace, all
extent information is stored in the datafile headers; the data dictionary tables are not used to
store this information. The advantages of locally managed tablespaces are that no DML against
the data dictionary is generated (reducing contention on the data dictionary tables) and no undo
is generated when space allocation or deallocation occurs.
Extent Management [Local | Dictionary]
The storage parameters NEXT, PCTINCREASE, MINEXTENTS, MAXEXTENTS, and
DEFAULT STORAGE are not valid for segments stored in locally managed tablespaces.
To create a locally managed tablespace, you specify LOCAL in the extent management clause of
the CREATE TABLESPACE statement. You then have two options. You can have Oracle
manage extents for you automatically with the AUTOALLOCATE option, or you can specify
that the tablespace is managed with uniform extents of a specific size (UNIFORM SIZE).
If the tablespace is expected to contain objects of varying sizes requiring different extent sizes
and having many extents, then AUTOALLOCATE is the best choice.
If you do not specify either AUTOALLOCATE or UNIFORM with the LOCAL parameter, then
AUTOALLOCATE is the default.
Dictionary Managed tablespace
When we declare a tablespace as dictionary managed, the data dictionary manages its extents.
The Oracle server updates the appropriate tables (sys.fet$ and sys.uet$) in the data dictionary
whenever an extent is allocated or deallocated.
How to Create a Locally Managed Tablespace?
The following statement creates a locally managed tablespace named USERS, where
AUTOALLOCATE causes Oracle to automatically manage extent size.
CREATE TABLESPACE users
DATAFILE 'C:\LOCAL\ORADATA\USER_DATA.DBF' SIZE 50M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
Alternatively, this tablespace could be created specifying the UNIFORM clause. In this example,
a 512K extent size is specified. Each 512K extent (which is equivalent to 64 Oracle blocks of
8K) is represented by a bit in the bitmap for this file.
CREATE TABLESPACE users
DATAFILE 'C:\LOCAL\ORADATA\USER_DATA.DBF' SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K;
How to Create a Dictionary Managed Tablespace?
The following is an example of creating a DICTIONARY managed tablespace in Oracle9i (the
datafile path and storage values follow the earlier examples; adjust them for your system):
CREATE TABLESPACE users
DATAFILE 'C:\LOCAL\ORADATA\USER_DATA.DBF' SIZE 50M
EXTENT MANAGEMENT DICTIONARY
DEFAULT STORAGE (INITIAL 64K NEXT 64K PCTINCREASE 0);
How to retrieve default storage information for each tablespace?
SELECT TABLESPACE_NAME, INITIAL_EXTENT, NEXT_EXTENT, MAX_EXTENTS,
PCT_INCREASE
FROM DBA_TABLESPACES;
How to retrieve information about tablespaces and their associated datafiles?
SELECT FILE_NAME, BLOCKS, TABLESPACE_NAME
FROM DBA_DATA_FILES;
How to retrieve Statistics for Free Space (Extents) of Each Tablespace?
SELECT TABLESPACE_NAME "TABLESPACE", FILE_ID,
COUNT(*) "PIECES",
MAX(blocks) "MAXIMUM",
MIN(blocks) "MINIMUM",
AVG(blocks) "AVERAGE",
SUM(blocks) "TOTAL"
FROM DBA_FREE_SPACE
GROUP BY TABLESPACE_NAME, FILE_ID;
PIECES shows the number of free-space extents in the tablespace file, MAXIMUM and
MINIMUM show the largest and smallest contiguous areas of space in database blocks,
AVERAGE shows the average size in blocks of a free-space extent, and TOTAL shows the
amount of free space in each tablespace file in blocks. This query is useful when you are going
to create a new object, or you know that a segment is about to extend, and you want to make
sure that there is enough space in the containing tablespace.
Managing Database
What is datafile?
Datafiles are physical operating system files that store the data of all logical structures in the
database. At least one datafile must be created for each tablespace.
How to determine the number of datafiles?
At least one datafile is required for the SYSTEM tablespace. We can create separate datafiles for
other tablespaces. When we create a database, MAXDATAFILES may or may not be specified in
the CREATE DATABASE statement. Oracle assigns DB_FILES a default value of 200. We can
also specify the number of datafiles in the init file.
When we start the Oracle instance, the DB_FILES initialization parameter reserves space in the
SGA for datafile information and sets the maximum number of datafiles. We can change the value
of DB_FILES (by changing the initialization parameter setting), but the new value does not take
effect until you shut down and restart the instance.
Important:
If the value of DB_FILES is too low, you cannot add datafiles beyond the DB_FILES limit.
Example: if the init parameter DB_FILES is set to 2, then you cannot add more than 2 datafiles
to your database.
If the value of DB_FILES is too high, memory is unnecessarily consumed.
When you issue CREATE DATABASE or CREATE CONTROLFILE statements, the
MAXDATAFILES parameter specifies an initial size. However, if you attempt to add a new
file whose number is greater than MAXDATAFILES, but less than or equal to DB_FILES, the
control file will expand automatically so that the datafiles section can accommodate more
files.
Note:
If you add new datafiles to a tablespace and do not fully specify the filenames, the database
creates the datafiles in the default database directory . Oracle recommends you always specify a
fully qualified name for a datafile. Unless you want to reuse existing files, make sure the new
filenames do not conflict with other files. Old files that have been previously dropped will be
overwritten.
How to add a datafile to an existing tablespace?
ALTER TABLESPACE <Tablespace_Name> ADD DATAFILE '<Datafile_Path>.dbf' SIZE 10M AUTOEXTEND ON;
How to resize the datafile?
alter database datafile '/............../......./file01.dbf' resize 100M;
How to bring datafile online and offline?
alter database datafile '/............../......./file01.dbf' online;
alter database datafile '/............../......./file01.dbf' offline;
How to rename datafiles in a tablespace?
ALTER TABLESPACE <Tablespace_Name> RENAME DATAFILE
'/u02/oracle/rbdb1/sort01.dbf',
'/u02/oracle/rbdb1/user3.dbf'
TO '/u02/oracle/rbdb1/temp01.dbf',
'/u02/oracle/rbdb1/users03.dbf';
Step:4 Back up the database. After making any structural changes to a database, always perform
an immediate and complete backup.
How to drop a datafile from a Tablespace
Important: Oracle does not provide an interface for dropping datafiles in the same way you
would drop a schema object such as a table or a user.
Reasons why you want to remove a datafile from a tablespace:
Important: Once the DBA creates a datafile for a tablespace, the datafile cannot be removed. If
you want to do any critical operation like dropping datafiles, ensure you have a full backup of the
database.
Step: 1 Determine how many datafiles make up a tablespace
To determine how many and which datafiles make up a tablespace, you can use the following
query:
SELECT
file_name, tablespace_name
FROM
dba_data_files
WHERE
tablespace_name ='<name of tablespace>';
Case 1
If you have only one datafile in the tablespace and you want to remove it, you can simply drop
the entire tablespace using the following:
DROP TABLESPACE <tablespace name> INCLUDING CONTENTS;
The above command will remove the tablespace, the datafile, and the tablespace's contents from
the data dictionary.
Important: Oracle will not drop the physical datafile after the DROP TABLESPACE command.
This action needs to be performed at the operating system.
Case 2
If you have more than one datafile in the tablespace, you want to remove all datafiles, and you do
not need the information contained in that tablespace, then use the same command as above:
DROP TABLESPACE <tablespace name> INCLUDING CONTENTS;
Case 3
If you have more than one datafile in the tablespace and you want to remove only one or two (not
all) datafile in the tablesapce or you want to keep the objects that reside in the other datafile(s)
which are part of this tablespace, then you must export all the objects inside the tablespace.
Step: 1 Gather information on the current datafiles within the tablespace by running the
following query in SQL*Plus:
SELECT
file_name, tablespace_name
FROM
dba_data_files
WHERE
tablespace_name ='<name of tablespace>';
Step: 2 You now need to identify which objects are inside the tablespace for the purpose of
running an export. To do this, run the following query:
SELECT
owner, segment_name, segment_type
FROM
dba_segments
WHERE
tablespace_name='<name of tablespace>';
Step : 3 Now, export all the objects that you wish to keep.
Step : 4 Once the export is done, issue the
DROP TABLESPACE <tablespace name> INCLUDING CONTENTS.
Step : 5 Delete the datafiles belonging to this tablespace using the operating system.
Step : 6 Recreate the tablespace with the datafile(s) desired, then import the objects into that
tablespace.
Case : 4
If you do not want to follow any of these procedures, there are other things that can be done
besides dropping the tablespace.
If the reason you wanted to drop the file is because you mistakenly created the file of the
wrong size, then consider using the RESIZE command.
If you really added the datafile by mistake, and Oracle has not yet allocated any space
within this datafile, then you can use the ALTER DATABASE DATAFILE '<filename>'
RESIZE <size>; command to make the file smaller than 5 Oracle blocks. If the datafile is
resized to smaller than 5 Oracle blocks, then it will never be considered for extent allocation.
At some later date, the tablespace can be rebuilt to exclude the incorrect datafile.
Important : If you are running in archivelog mode, you can also use: ALTER DATABASE
DATAFILE <datafile name> OFFLINE; instead of OFFLINE DROP. Once the datafile is
offline, Oracle no longer attempts to access it, but it is still considered part of that tablespace.
This datafile is marked only as offline in the controlfile and there is no SCN comparison done
between the controlfile and the datafile during startup (This also allows you to startup a database
with a non-critical datafile missing). The entry for that datafile is not deleted from the controlfile
to give us the opportunity to recover that datafile.
Managing Control Files
A control file is a small binary file that records the physical structure of the database: the
database name, the names and locations of the associated datafiles and online redo log files, the
timestamp of database creation, the current log sequence number, and checkpoint information.
How to multiplex the control file?
Step: 1 shut down the database.
Step: 2 copy the existing control file to a new location using operating system commands.
Step: 3 edit the CONTROL_FILES parameter in the database's initialization parameter file to add
the new control file's name, or to change the existing control filename.
Step: 4 restart the database.
When do you Create New Control Files?
All control files for the database have been permanently damaged and you do not have a
control file backup.
You want to change one of the permanent database parameter settings originally specified
in the CREATE DATABASE statement. These settings include the database's name and
the following parameters: MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY,
MAXDATAFILES, and MAXINSTANCES.
If you specified NORESETLOGS when creating the control file, use the following command:
ALTER DATABASE OPEN;
If you specified RESETLOGS when creating the control file, use the ALTER
DATABASE statement, indicating RESETLOGS.
ALTER DATABASE OPEN RESETLOGS;
TIPS:
When creating a new control file, select the RESETLOGS option if you have lost any online redo
log groups in addition to control files. In this case, you will need to recover from the loss of the
redo logs. You must also specify the RESETLOGS option if you have renamed the database.
Otherwise, select the NORESETLOGS option.
Backing Up Control Files
Method 1:
Back up the control file to a binary file (duplicate of existing control file) using the following
statement:
ALTER DATABASE BACKUP CONTROLFILE TO '<DISK>:\Directory\control.bkp';
Method 2:
Produce SQL statements that can later be used to re-create your control file:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
How to retrieve information related to Control File:
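One common way, per the standard Oracle data dictionary, is to query V$CONTROLFILE or to show the CONTROL_FILES initialization parameter:

SELECT NAME, STATUS FROM V$CONTROLFILE;
SHOW PARAMETER CONTROL_FILES

Each row of V$CONTROLFILE names one copy of the control file in use by the instance.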
Managing Redo Log Files
Redo logs consist of two or more preallocated files that store all changes made to the database.
Every instance of an Oracle database has an associated online redo log to protect the database in
case of an instance failure.
Main points to consider before creating redo log files:
Members of the same group should be stored on separate disks so that no single disk failure
can cause LGWR and the database instance to fail.
Set the archive destination to a separate disk from the redo log members to avoid
contention between LGWR and ARCn.
With mirrored groups of online redo logs , all members of the same group must be the
same size.
The MAXLOGFILES and MAXLOGMEMBERS clauses of the CREATE DATABASE
statement limit the number of redo log groups and the number of members per group;
changing these limits requires re-creating the control file.
Managing Temporary Tablespaces
A temporary tablespace cannot contain permanent objects and therefore doesn't need
to be backed up.
When we create a TEMPFILE, Oracle only writes to the header and last block of the file.
This is why it is much quicker to create a TEMPFILE than to create a normal database
file.
TEMPFILEs are not recorded in the database's control file.
We cannot remove datafiles from a permanent tablespace without dropping the entire
tablespace, but we can remove a TEMPFILE from a database:
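A typical form of the command (the path shown is only an illustrative example, not one from this book) is:

ALTER DATABASE TEMPFILE '/u02/oracle/data/temp01.dbf' DROP INCLUDING DATAFILES;

The INCLUDING DATAFILES clause also removes the operating system file.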
Except for adding a tempfile, you cannot use the ALTER TABLESPACE statement for a
locally managed temporary tablespace (operations like rename, set to read only, recover,
etc. will fail).
Restriction:
The following restrictions apply to default temporary tablespaces:
You cannot drop the default temporary tablespace until another tablespace has been made
the default.
You cannot take the default temporary tablespace offline.
You cannot change the default temporary tablespace to a permanent tablespace.
A query against DBA_USERS shows a user's default and temporary tablespaces, for example:
DEFAULT_TABLESPACE
------------------------------
USERS
TEMPORARY_TABLESPACE
------------------------------
TEMP
Managing Undo Tablespaces
Before a commit, Oracle Database keeps records of the actions of a transaction because Oracle
needs this information to roll back or undo the changes.
What are the main Init.ora Parameters for Automatic Undo Management?
UNDO_MANAGEMENT:
The default value for this parameter is MANUAL. If you want to set the database in an
automated mode, set this value to AUTO. (UNDO_MANAGEMENT = AUTO)
UNDO_TABLESPACE:
UNDO_TABLESPACE defines the tablespace that is to be used as the undo tablespace. If no
value is specified, Oracle will use the system rollback segment at startup. This value is dynamic
and can be changed online. (UNDO_TABLESPACE = <Tablespace_Name>)
UNDO_RETENTION:
The default value for this parameter is 900 seconds. This value specifies the amount of time undo
is kept in the tablespace. It applies to both committed and uncommitted transactions, since the
Flashback Query feature needs this information to create a read-consistent copy of the data in
the past.
UNDO_SUPPRESS_ERRORS:
The default value is FALSE. Set this to TRUE to suppress the errors generated when manual
undo management SQL operations are issued in automatic undo management mode.
How to Create an UNDO Tablespace?
An UNDO tablespace can be created at database creation time or added to an existing database
using the CREATE UNDO TABLESPACE command.
Scripts at the time of Database creation:
CREATE DATABASE <DB_NAME>
MAXINSTANCES 1
MAXLOGHISTORY 1
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXDATAFILES 100
DATAFILE '<DISK>:\Directory\<FILE_NAME>.DBF' SIZE 204800K REUSE
AUTOEXTEND ON NEXT 20480K MAXSIZE 32767M
UNDO TABLESPACE "<UNDO_TABLESPACE_NAME>"
DATAFILE '<DISK>:\DIRECTORY\<FILE_NAME>.DBF' SIZE 1178624K REUSE
AUTOEXTEND ON NEXT 1024K MAXSIZE 32767M
CHARACTER SET WE8MSWIN1252
NATIONAL CHARACTER SET AL16UTF16
LOGFILE GROUP 1 ('<DISK>:\DIRECTORY\<FILE_NAME>.LOG') SIZE 5024K,
GROUP 2 ('<DISK>:\DIRECTORY\<FILE_NAME>.LOG') SIZE 5024K,
GROUP 3 ('<DISK>:\DIRECTORY\<FILE_NAME>.LOG') SIZE 5024K;
Scripts after creating Database:
CREATE UNDO TABLESPACE "<UNDO_TABLESPACE_NAME>"
DATAFILE '<DISK>:\DIRECTORY\<FILE_NAME>.DBF' SIZE 1178624K REUSE
AUTOEXTEND ON;
How to Drop an Undo Tablespace?
You cannot drop an active undo tablespace. That is, an undo tablespace can only be dropped if it
is not currently used by any instance. Use the DROP TABLESPACE statement to drop an undo
tablespace; all contents of the undo tablespace are removed.
Example:
DROP TABLESPACE <UNDO_TABLESPACE_NAME> including contents;
How to Switch Undo Tablespaces?
We can switch from one undo tablespace to another. Because the UNDO_TABLESPACE
initialization parameter is a dynamic parameter, the ALTER SYSTEM SET statement can be
used to assign a new undo tablespace.
Step 1: Create another UNDO TABLESPACE
CREATE UNDO TABLESPACE "<ANOTHER_UNDO_TABLESPACE>"
DATAFILE '<DISK>:\Directory\<FILE_NAME>.DBF' SIZE 1178624K REUSE
AUTOEXTEND ON;
Step 2: Switches to a new undo Tablespace:
alter system set UNDO_TABLESPACE=<UNDO_TABLESPACE>;
Step 3: Drop old UNDO TABLESPACE
drop tablespace <UNDO_TABLESPACE> including contents;
IMPORTANT:
The database is online while the switch operation is performed, and user transactions can be
executed while this command is being executed. When the switch operation completes
successfully, all transactions started after the switch operation began are assigned to transaction
tables in the new undo Tablespace.
The switch operation does not wait for transactions in the old undo tablespace to commit. If
there are any pending transactions in the old undo tablespace, the old undo tablespace enters a
PENDING OFFLINE mode (status). In this mode, existing transactions can continue to
execute, but undo records for new user transactions cannot be stored in this undo tablespace.
An undo tablespace can exist in this PENDING OFFLINE mode even after the switch operation
completes successfully. A PENDING OFFLINE undo tablespace cannot be used by another
instance, nor can it be dropped. Eventually, after all active transactions have committed, the undo
Tablespace automatically goes from the PENDING OFFLINE mode to the OFFLINE mode.
From then on, the undo Tablespace is available for other instances (in an Oracle Real Application
Cluster environment).
If the parameter value for UNDO TABLESPACE is set to '' (two single quotes), the current undo
Tablespace will be switched out without switching in any other undo Tablespace. This can be
used, for example, to unassign an undo Tablespace in the event that you want to revert to manual
undo management mode.
The following example unassigns the current undo tablespace:
ALTER SYSTEM SET UNDO_TABLESPACE = '';
How to Monitor Undo Space?
The V$UNDOSTAT view is useful for monitoring the effects of transaction execution on undo
space in the current instance. Statistics are available for undo space consumption, transaction
concurrency, and length of queries in the instance.
The following example shows the results of a query on the V$UNDOSTAT view.
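A minimal sketch of such a query, using a subset of V$UNDOSTAT's documented columns:

SELECT TO_CHAR(BEGIN_TIME, 'HH24:MI:SS') BEGIN_TIME,
       TO_CHAR(END_TIME, 'HH24:MI:SS') END_TIME,
       UNDOBLKS,
       TXNCOUNT,
       MAXQUERYLEN
FROM   V$UNDOSTAT;

UNDOBLKS is the number of undo blocks consumed in each interval, TXNCOUNT the number of transactions executed, and MAXQUERYLEN the length in seconds of the longest query.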
Automatic Storage Management (ASM)
The operating system cannot see ASM files, but RMAN and other Oracle utilities can.
Aliases
Aliases allow you to reference ASM files using user-friendly names, rather than the fully
qualified ASM filenames.
How to create an alias using the fully qualified filename:
ALTER DISKGROUP DB_DATA ADD ALIAS '+DB_DATA/my_dir/my_file.dbf'
FOR '+DB_DATA/mydb/datafile/my_ts.342.3';
How to create an alias using the numeric form filename:
ALTER DISKGROUP Db_DATA ADD ALIAS '+DB_DATA/my_dir/my_file.dbf'
FOR '+DB_DATA.342.3';
How to rename an alias:
ALTER DISKGROUP DB_DATA RENAME ALIAS '+DB_DATA/my_dir/my_file.dbf'
TO '+DB_DATA/my_dir/my_file2.dbf';
How to delete an alias:
ALTER DISKGROUP DB_DATA DELETE ALIAS '+DB_DATA/my_dir/my_file.dbf';
Files
Files are not deleted automatically if they are created using aliases, as they are not Oracle
Managed Files (OMF), or if a recovery is done to a point in time before the file was created. In
these circumstances it is necessary to delete the files manually, as shown below.
How to Drop file using an alias?
ALTER DISKGROUP DB_DATA DROP FILE '+DB_DATA/my_dir/my_file.dbf';
How to Drop file using a numeric form filename?
ALTER DISKGROUP Db_DATA DROP FILE '+DB_DATA.342.3';
How to Drop file using a fully qualified filename?
ALTER DISKGROUP DB_DATA DROP FILE'+DB_DATA/mydb/datafile/my_ts.342.3';
Metadata
The internal consistency of disk group metadata can be checked in a number of ways using the
CHECK clause of the ALTER DISKGROUP statement.
How to check metadata for a specific file?
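A sketch of the command, reusing the alias created earlier in this section:

ALTER DISKGROUP DB_DATA CHECK FILE '+DB_DATA/my_dir/my_file.dbf';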
We can create a disk group by using the Database Control Disk Group Administration page or
manually by using the CREATE DISKGROUP command.
How to Create DISK GROUP
Example:
Suppose we have two disk controllers and a total of six disks. DiskA1 through DiskA3 are on
one SCSI controller and DiskB1 through DiskB3 are on another disk controller. Here we create
two failure groups, each with three disks. The first three disks (DiskA1, DiskA2 and DiskA3)
are on disk controller 1 and the second three disks (DiskB1, DiskB2 and DiskB3) are on disk
controller 2.
Now we start the ASM instance in NOMOUNT mode. The ASM instance is then ready to create
a disk group. We create the disk group with the two corresponding failure groups.
SQL> CREATE DISKGROUP DB_DATA NORMAL REDUNDANCY
FAILGROUP groupA DISK '/devices/DiskA1', '/devices/DiskA2', '/devices/DiskA3'
FAILGROUP groupB DISK '/devices/DiskB1', '/devices/DiskB2', '/devices/DiskB3';
When Oracle writes data to the disks in the first failure group (groupA), it also writes those
extents to a disk in the other failure group (groupB).
Important:
When you don't specify a FAILGROUP clause, the disk is placed in its own failure group.
Note:
The DROP DISKGROUP statement requires the instance to be in MOUNT state.
When a disk is dropped, the disk group is rebalanced by moving all of the file extents
from the dropped disk to other disks in the disk group. The header on the dropped disk is
then cleared.
If you specify the FORCE clause for the drop operation, the disk is dropped even if
Automatic Storage Management cannot read or write to the disk.
You can also drop all of the disks in specified failure groups using the DROP DISKS IN
FAILGROUP clause.
Rebalance Disk Group
ASM rebalances a disk group automatically whenever we add or remove disks from the disk group.
Disk groups can be rebalanced manually using the REBALANCE clause of the ALTER DISKGROUP
statement. If the POWER clause is omitted the ASM_POWER_LIMIT parameter value is used.
ALTER DISKGROUP DB_DATA REBALANCE POWER 5;
Resize the Disk
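Resizing is done with the RESIZE DISK clause of ALTER DISKGROUP; the disk name and size below are illustrative, not values from this book:

ALTER DISKGROUP disk_group_1 RESIZE DISK diska1 SIZE 100G;

Omitting the SIZE clause resizes the disk to the size returned by the operating system, and RESIZE ALL resizes every disk in the disk group.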
Undrop Disk
The UNDROP DISKS clause of the ALTER DISKGROUP statement allows pending disk drops to be
undone. It will not revert drops that have completed, or disk drops associated with the dropping
of a disk group.
ALTER DISKGROUP disk_group_1 UNDROP DISKS;
Mount and Dismount the ASM DISKGROUP
Disk groups are mounted at ASM instance startup and unmounted at ASM instance shutdown.
Manual mounting and dismounting can be accomplished using the ALTER DISKGROUP statement
as seen below.
ALTER DISKGROUP ALL DISMOUNT;
ALTER DISKGROUP ALL MOUNT;
ALTER DISKGROUP disk_group_1 DISMOUNT;
STATUS
----------------
CURRENT
INACTIVE
INACTIVE
1. Create a text-based initialization parameter file from the current binary SPFILE located on the
local file system:
SQL> CREATE PFILE FROM SPFILE;
File created.
2. Create new SPFILE in an ASM disk group:
SQL> CREATE SPFILE='+DB_DATA1/TEST/spfileTEST.ora' FROM PFILE='Location of
pfile on local system';
File created.
3. Shutdown the Oracle database:
SQL> SHUTDOWN IMMEDIATE
Database closed.
Database dismounted.
ORACLE instance shut down.
4. Remove (actually rename) the old SPFILE on the local file system so that the new text-based
init<SID>.ora will be used:
5. Open the Oracle database using the new SPFILE:
SQL> STARTUP
Step 13 Verify that all database files have been created in ASM and delete the old files.
$ sqlplus "/ as sysdba"
(Sample listing: a disk group with NORMAL redundancy containing the disks VOL1, VOL2,
VOL4 and VOL5.)
Execute the following query on the ASM instance (this will show you the ASM disk status).
Add the disk physically, create a partition, then scan and list the ASM disks. You will see the
disks to be deleted (VOL4 and VOL5).
Now delete the ASM disks, then scan the ASM disks and list the disks (VOL4 and VOL5).
Create the ASM disks, then scan and list the ASM disks with different disk names.
Add the disk physically, create a partition, then scan and list the ASM disks. You will see the
disk to be deleted (VOL5).
Now delete the ASM disk, then scan the ASM disks and list the disks (VOL5).
Create the ASM disk, then scan and list the ASM disks with a different disk name.
DISK_NUMBER MOUNT_S HEADER_STATU STATE  PATH
----------- ------- ------------ ------ -----------------------
            CLOSED  CANDIDATE    NORMAL C:\ASMDISKS\_FILE_DISK1
            CLOSED  CANDIDATE    NORMAL C:\ASMDISKS\_FILE_DISK2
            CLOSED  CANDIDATE    NORMAL C:\ASMDISKS\_FILE_DISK3
            CLOSED  CANDIDATE    NORMAL C:\ASMDISKS\_FILE_DISK4
Note:
The value of zero in the GROUP_NUMBER column for all four disks indicates that a disk is
available but hasn't yet been assigned to a disk group.
Dynamic Performance Views
V$ASM_DISKGROUP
This view provides information about a disk group. In a database instance, this view
contains one row for every ASM disk group mounted by the ASM instance.
V$ASM_CLIENT
This view identifies all the client databases using various disk groups. In a Database
instance, the view contains one row for the ASM instance if the database has any open
ASM files.
V$ASM_DISK
This view contains one row for every disk discovered by the ASM instance. In a database
instance, the view will only contain rows for disks in use by that database instance.
V$ASM_FILE
This view contains one row for every ASM file in every disk group mounted by the ASM
instance.
V$ASM_TEMPLATE
This view contains one row for every template present in every disk group mounted by
the ASM instance.
WHERE a.group#=b.group#),
(SELECT SUM(bytes)/(1024*1024*1024) t
FROM v$tempfile
WHERE status='ONLINE')
Recovery Scenario
Complete recovery: recover to the point of failure.
Incomplete recovery: recover to the point of the last backup.
Back up all data files, control files and log files by using an operating system command. We
can also include the password file and parameter file.
Ensure that the online redo logs are archived, for example by enabling the Oracle automatic
archiving (ARCn) process.
Use an operating system backup utility to copy all datafiles in the tablespace to backup
storage.
After the datafiles of the tablespace have been backed up, take them out of backup mode by
issuing the following command:
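The commands bracketing such a copy are the standard hot-backup pair; the tablespace name users is only illustrative:

ALTER TABLESPACE users BEGIN BACKUP;
(copy the tablespace's datafiles with an operating system utility)
ALTER TABLESPACE users END BACKUP;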
Incomplete Recovery
Incomplete recovery occurs when complete recovery is impossible or when you want to discard
some information that was entered by mistake.
In other words, you do not apply all of the redo records generated after the most recent backup.
You usually perform incomplete recovery of the whole database in the following situations:
A user error causes data loss, for example, a user inadvertently drops a table.
You cannot perform complete recovery because an archived redo log is missing.
You lose your current control file and must use a backup control file to open the database.
To perform incomplete media recovery, you must restore all datafiles from backups created prior
to the time to which you want to recover and then open the database with the RESETLOGS
option when recovery completes.
Difference between ResetLogs and NoResetLogs option?
After incomplete recovery (where the entire redo stream wasn't applied) we use the RESETLOGS
option. RESETLOGS will initialize the logs, reset your log sequence number, and start a new
"incarnation" of the database.
After complete recovery (when the entire redo stream was applied) we use the NORESETLOGS
option. Oracle will continue using the existing (valid) log files.
What is a cancel-based recovery?
A cancel-based recovery is a user-managed incomplete recovery that is performed by specifying
the UNTIL CANCEL clause with the RECOVER command. The UNTIL CANCEL clause
performs recovery until the user manually cancels the recovery process. Cancel-based recovery is
usually performed when there is a requirement to recover up to a particular archived redo log file.
If the user does not specify CANCEL, then the recovery process will automatically stop when all
redo has been applied to the database.
When Cancel Based Recovery required (Scenario)?
For example consider a situation where someone dropped a table and one of the online
redo logs is missing and is not archived and the table needs to be recovered.
Another case is where your backup control file does not know anything about the
archive logs that were created after your last backup.
Another scenario can be where you have lost all logs past a specific sequence, say X (for
example, you may know that you have lost all logs past sequence 1234, so you want to
cancel recovery after log 1233 is applied), and you want to control which archived log
terminates recovery. Or a scenario where one of the archived redo log files required for
the complete recovery is corrupt or missing and the only recovery option is to recover up
to the missing archived redo log file.
NOTE: Remember the online logs must be reset after you perform an incomplete recovery or
you perform recovery with a backup control file. So finally you will need to open database in
RESETLOGS mode. To synchronize datafiles with control files and redo logs, open database
using "resetlogs" options.
What is a point in time recovery?
A point in time recovery is a method to recover your database to any point in time since the last
database backup.
We use RECOVER DATABASE UNTIL TIME statement to begin time-based recovery. The
time is always specified using the following format, delimited by single quotation marks:
'YYYY-MM-DD:HH24:MI:SS'.
Example: RECOVER DATABASE UNTIL TIME '2000-12-31:12:47:30'
If a backup of the control file is being used with this incomplete recovery, then indicate this in
the statement used to start recovery.
Example: RECOVER DATABASE UNTIL TIME '2000-12-31:12:47:30' USING BACKUP
CONTROLFILE
In this type of recovery, apply redo logs until the last required redo log has been applied to the
restored datafiles. Oracle automatically terminates the recovery when it reaches the correct time,
and returns a message indicating whether recovery is successful.
What is change-based recovery?
Recovers until the specified SCN.
Change-based recovery is a recovery technique using which a database is recovered up to a
specified system change number (SCN). Using the UNTIL CHANGE clause with the RECOVER
command performs a manual change-based recovery. However, RMAN uses the UNTIL SCN
clause to perform a change-based recovery.
Begin change-based recovery, specifying the SCN for recovery termination. The SCN is
specified as a decimal number without quotation marks. For example, to recover through SCN
10034 issue:
RECOVER DATABASE UNTIL CHANGE 10034;
Continue applying redo log files until the last required redo log file has been applied to the
restored datafiles. Oracle automatically terminates the recovery when it reaches the correct SCN,
and returns a message indicating whether recovery is successful.
When you start the database by using the STARTUP command, the system shows the following error:
SQL> startup
ORACLE instance started.
Total System Global Area 122755896 bytes
Fixed Size 453432 bytes
Variable Size 67108864 bytes
Database Buffers 54525952 bytes
Redo Buffers 667648 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: 'C:\O\ORADATA\SYSTEM.DBF'
Read the DBWR trace file or alert log file and find the details of the missing data files. Restore
the missing files from the backup storage area by using an OS copy command and try to open
the database by using the ALTER DATABASE OPEN command.
Open database:
SQL> ALTER DATABASE OPEN;
Scenario
Your database is running in archive log mode. Every Sunday you take a full/cold backup of the
database (all data files, control files and redo log files), and every day from Monday to Saturday
you take only archive log file backups. If your database server is destroyed on Saturday, how
will you recover the data up to Saturday?
Steps
Build the server
You need a server to host the database, so the first step is to acquire or build the new machine.
This is not strictly a DBA task, so we won't delve into details here. The main point to keep in
mind is that the replacement server should, as far as possible, be identical to the old one. In
particular, pay attention to the following areas:
Disk layout and capacity: Ideally the server should have the same number of disks as the original.
This avoids messy renaming of files during recovery. Obviously, the new disks should also have
enough space to hold all software and data that was on the original server.
Operating system, service pack and patches: The operating system environment should be the
same as the original, right up to service pack and patch level.
Memory: The new server must have enough memory to cater to Oracle and operating system /
other software requirements. Oracle memory structures (Shared pool, db buffer caches etc) will
be sized identically to the original database instance. Use of the backup server parameter file will
ensure this.
Install Oracle Software
Now we get to the meat of the database recovery process. The next step is to install Oracle
software on the machine. The following points should be kept in mind when installing the
software:
Install the same version of Oracle as was on the destroyed server. The version number should
match right down to the patch level, so this may be a multi-step process involving installation
followed by the application of one or more patchsets and patches.
Do not create a new database at this stage.
Create a listener using the Network Configuration Assistant. Ensure that it has the same name
and listening ports as the original listener. Relevant listener configuration information can be
found in the backed up listener.ora file.
Create directory structure for database files
After software installation is completed, create all directories required for datafiles, (online and
archived) logs, control files and backups. All directory paths should match those on the original
server. This, though not mandatory, saves additional steps associated with renaming files during
recovery.
Don't worry if you do not know where the database files should be located. You can obtain the
required information from the backup spfile and control file at a later stage. Continue reading;
we'll come back to this later.
It is possible that a final non-archived log sequence is requested to complete the recovery. This
log may hold only a single System Change Number (SCN) and no transactions relating to the
database up to and including the time of the full online Oracle backup. If this is the case, the
following message will be returned by Oracle:
ORA-00308: cannot open archived log
'E:\ORACLE\ORADATA\KIMSTAD\ARCHIVE\KIMSTADT00036949.ARC'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
To finish the recovery, stay in server manager with the database mounted, and type:
RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE
Then press <Enter>
When Oracle requests this final sequence again, type:
CANCEL
Then press <Enter>
Oracle will return the following message:
Media recovery canceled
The media recovery of the database is complete.
To open the database and to synchronize the log sequence, type:
ALTER DATABASE OPEN RESETLOGS
Then press <Enter>
The Oracle database server is now restored to full working order up to the time of the latest full
online Oracle backup.
If RMAN is started without a connection to a recovery catalog repository, and no CONNECT
CATALOG command has been issued yet, then RMAN automatically connects in the default
NOCATALOG mode. After that point, the CONNECT CATALOG command is not valid in
the session.
Types of Database Connections
You can connect to the following types of databases.
Target database
RMAN connects you to the target database with the SYSDBA privilege. If you do not
have this privilege, then the connection fails.
Recovery catalog database
This database is optional: you can also use RMAN with the default NOCATALOG option.
Auxiliary database
You can connect to a standby database, duplicate database, or auxiliary instance (standby
instance or tablespace point-in-time recovery instance).
Note that a SYSDBA privilege is not required when connecting to the recovery catalog. The
only requirement is that the RECOVERY_CATALOG_OWNER role be granted to the schema
owner.
Using Basic RMAN Commands
After you have learned how to connect to a target database, you can immediately begin
performing backup and recovery operations. Use the examples in this section to go through a
basic backup and restore scenario using a test database. These examples assume the following:
The test database is in ARCHIVELOG mode.
You are running in the default NOCATALOG mode.
The RMAN executable is running on the same host as the test database.
Connecting to the Target Database
rman TARGET /
If the database is already mounted or open, then RMAN displays output similar to the following:
Recovery Manager: Release 9.2.0.0.0
connected to target database: RMAN (DBID=1237603294)
Reporting the Current Schema of the Target Database
In this example, you generate a report describing the target datafiles. Run the report schema
command as follows:
RMAN> REPORT SCHEMA; (RMAN displays the datafiles currently in the target database.)
Backing Up the Database
In this task, you back up the database to the default disk location. Because you do not specify the
format parameter in this example, RMAN assigns the backup a unique filename.
You can make two basic types of backups: full and incremental.
Making a Full Backup
Run the backup command at the RMAN prompt as follows to make a full backup of the datafiles,
control file, and current server parameter file (if the instance is started with a server parameter
file) to the default device type:
RMAN> BACKUP DATABASE;
Making an Incremental Backup
Incremental backups are a convenient way to conserve storage space because they back up only
database blocks that have changed. RMAN compares the current datafiles to a base backup, also
called a level 0 backup, to determine which blocks to back up.
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
Backing Up Archived Logs
Typically, database administrators back up archived logs on disk to a third-party storage medium
such as tape. You can also back up archived logs to disk. In either case, you can delete the input
logs automatically after the backup completes. To back up all archived logs and delete the input
logs (from the primary archiving destination only), run the backup command at the RMAN
prompt as follows:
RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;
Listing Backups and Copies
To list the backup sets and image copies that you have created, run the list command as follows:
RMAN> LIST BACKUP;
To list image copies, run the following command:
RMAN> LIST COPY;
Validating the Restore of a Backup
Check that you are able to restore the backups that you created without actually restoring them.
Run the RESTORE ... VALIDATE command as follows:
RMAN> RESTORE DATABASE VALIDATE;
Types of RMAN Backups
Full Backups
A full backup reads the entire file and copies all blocks into the backup set, only skipping datafile
blocks that have never been used.
About Incremental Backups
RMAN creates incremental backups containing only the blocks that have changed since a
previous backup. You can use RMAN to create incremental backups of datafiles, tablespaces, or
the whole database.
How Incremental Backups Work
Each data block in a datafile contains a system change number (SCN), which is the SCN at which
the most recent change was made to the block. During an incremental backup, RMAN reads the
SCN of each data block in the input file and compares it to the checkpoint SCN of the parent
incremental backup. RMAN reads the entire file every time whether or not the blocks have been
used.
The parent backup is the backup that RMAN uses for comparing the SCNs. If the current
incremental is a differential backup at level n, then the parent is the most recent incremental of
level n or less. If the current incremental is a cumulative backup at level n, then the parent is the
most recent incremental of level n-1 or less. If the SCN in the input data block is greater than or
equal to the checkpoint SCN of the parent, then RMAN copies the block.
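The block-selection rule above can be sketched in Python (the block and SCN structures here are hypothetical; RMAN's internal repository format is not exposed):

```python
# Sketch of RMAN's incremental block-selection rule: a block is copied when
# its SCN is greater than or equal to the parent backup's checkpoint SCN.

def blocks_to_back_up(blocks, parent_checkpoint_scn):
    """blocks: list of (block_id, block_scn) pairs read from the datafile.
    Returns the ids of blocks an incremental backup would copy."""
    return [block_id for block_id, block_scn in blocks
            if block_scn >= parent_checkpoint_scn]

# Four blocks; only those changed at or after SCN 250 are backed up.
changed = blocks_to_back_up([(1, 100), (2, 250), (3, 300), (4, 90)], 250)
print(changed)  # [2, 3]
```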
Multilevel Incremental Backups
RMAN can create multilevel incremental backups. Each incremental level is denoted by an
integer, for example, 0, 1, 2, and so forth. A level 0 incremental backup, which is the base for
subsequent incremental backups, copies all blocks containing data. The only difference between
a level 0 backup and a full backup is that a full backup is never included in an incremental
strategy.
If no level 0 backup exists when you run a level 1 or higher backup, RMAN makes a level 0
backup automatically to serve as the base.
The benefit of performing multilevel incremental backups is that RMAN does not back up all
blocks all of the time.
Differential Incremental Backups
In a differential level n incremental backup, RMAN backs up all blocks that have changed since
the most recent backup at level n or lower.
For example, in differential level 2 backups, RMAN determines which level 2 or level 1 backup
occurred most recently and backs up all blocks modified after that backup. If no level 1 is
available, RMAN copies all blocks changed since the base level 0 backup. If no level 0 backup is
available, RMAN makes a new base level 0 backup for this file.
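The parent-selection rule for differential and cumulative incrementals can be sketched as follows (a simplified, hypothetical data model; real RMAN reads backup records from its repository):

```python
# Sketch of how RMAN picks the parent backup for a level-n incremental.
# Differential level n: parent is the most recent backup at level n or less.
# Cumulative level n:   parent is the most recent backup at level n-1 or less.

def parent_backup(backups, level, cumulative=False):
    """backups: list of (checkpoint_scn, backup_level) records."""
    max_level = level - 1 if cumulative else level
    candidates = [b for b in backups if b[1] <= max_level]
    # "most recent" = highest checkpoint SCN
    return max(candidates, default=None)

history = [(1000, 0), (2000, 1), (3000, 2)]
print(parent_backup(history, 2))                   # (3000, 2)
print(parent_backup(history, 2, cumulative=True))  # (2000, 1)
```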
Case 1: Implementing an incremental backup strategy as a DBA in your organization:
CHECKPOINT_CHANGE#     BLOCKS
------------------     ------
            271365      59595
            271369          2
            271371          1
            271374          2
            271378          2
            271380          1
            271383          2
If you restore datafile 'C:\ORADATA\DATA01.DBF' to its default location, then RMAN restores
the file 'C:\ORADATA\DATA01.DBF' and overwrites any file that it finds with the same filename.
If you run a SET NEWNAME command before you restore a file, then RMAN creates a datafile
copy with the name that you specify. For example, assume that you run the following commands:
RUN
{
SET NEWNAME FOR DATAFILE 'C:\ORADATA\DATA01.DBF' TO 'C:\RESTORE\DATA01.DBF';
RESTORE DATAFILE 'C:\ORADATA\DATA01.DBF';
SWITCH DATAFILE 'C:\ORADATA\DATA01.DBF' TO DATAFILECOPY 'C:\RESTORE\DATA01.DBF';
}
In this case, RMAN creates a datafile copy of 'C:\ORADATA\DATA01.DBF' named
'C:\RESTORE\DATA01.DBF' and records it in the repository. To change the name for datafile
'C:\ORADATA\DATA01.DBF' to 'C:\RESTORE\DATA01.DBF' in the control file, run a SWITCH
command so that RMAN considers the restored file as the current database file.
RMAN Recovery: Basic Steps
If possible, make the recovery catalog available to perform the media recovery. If it is not
available, then RMAN uses metadata from the target database control file. The following steps
assume that you have backups of the datafiles and at least one autobackup of the control file.
The generic steps for media recovery using RMAN are as follows:
1. Place the database in the appropriate state: mounted or open. For example, mount the database
when performing whole database recovery, or open the database when performing online
tablespace recovery.
2. Restore the necessary files using the RESTORE command.
3. Recover the datafiles using the RECOVER command.
4. Place the database in its normal state.
Mechanism of Restore and Recovery operation:
The DBA runs the following commands:
RESTORE DATABASE;
RECOVER DATABASE;
The RMAN recovery catalog obtains its metadata from the target database control file. RMAN
decides which backup sets to restore, and which incremental backups and archived logs to use for
recovery. A server session on the target database instance performs the actual work of restore and
recovery.
Mechanics of Recovery: Incremental Backups and Redo Logs
RMAN does not need to apply incremental backups to a restored level 0 incremental backup: it
can also apply archived logs. RMAN simply restores the datafiles that it needs from available
backups and copies, and then applies incremental backups to the datafiles where it can, and
archived logs where it cannot.
How RMAN Searches for Archived Redo Logs During Recovery
If RMAN cannot find an incremental backup, then it looks in the repository for the names of
archived redo logs to use for recovery. Oracle records an archived log in the control file
whenever one of the following occurs:
The archiver process archives a redo log
RMAN restores an archived log
The RMAN COPY command copies a log
The RMAN CATALOG command catalogs a user-managed backup of an archived log
RMAN propagates archived log data into the recovery catalog during resynchronization,
classifying archived logs as image copies. You can view the log information through:
The LIST command
The V$ARCHIVED_LOG control file view
The RC_ARCHIVED_LOG recovery catalog view
During recovery, RMAN looks for the needed logs using the filenames specified in the
V$ARCHIVED_LOG view. If the logs were created in multiple destinations or were generated
by the COPY, CATALOG, or RESTORE commands, then multiple, identical copies of each log
sequence number exist on disk.
If the RMAN repository indicates that a log has been deleted or uncataloged, then RMAN ceases
to consider it as available for recovery. For example, assume that the database archives log 100 to
directories /dest1 and /dest2. The RMAN repository indicates that /dest1/log100.arc and
/dest2/log100.arc exist. If you delete /dest1/log100.arc with the DELETE command, then the
repository indicates that only /dest2/log100.arc is available for recovery.
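The repository bookkeeping in the example above can be sketched like this (the record layout is hypothetical, for illustration only):

```python
# Sketch of archived-log availability in the RMAN repository: deleting one
# copy of a log sequence leaves any remaining copies usable for recovery.

def available_copies(repository, sequence):
    """repository: list of (path, log_sequence, deleted_flag) records."""
    return [path for path, seq, deleted in repository
            if seq == sequence and not deleted]

repo = [("/dest1/log100.arc", 100, False),
        ("/dest2/log100.arc", 100, False)]
repo[0] = ("/dest1/log100.arc", 100, True)  # DELETE /dest1/log100.arc
print(available_copies(repo, 100))  # ['/dest2/log100.arc']
```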
If the RMAN repository indicates that no copies of a needed log sequence number exist on disk,
then RMAN looks in backups and restores archived redo logs as needed to perform the media
recovery. By default, RMAN restores the archived redo logs to the first local archiving
destination specified in the initialization parameter file. You can run the SET ARCHIVELOG
DESTINATION command to specify a different restore location. If you specify the DELETE
ARCHIVELOG option on RECOVER, then RMAN deletes the archived logs after restoring and
applying them. If you also specify MAXSIZE integer on the RECOVER command, then RMAN
staggers the restores so that they consume no more than integer amount of disk space at a time.
Incomplete Recovery
RMAN can perform either complete or incomplete recovery. You can specify a time, SCN, or log
sequence number as a limit for incomplete recovery with the SET UNTIL command or with an
UNTIL clause specified directly on the RESTORE and RECOVER commands. After
performing incomplete recovery, you must open the database with the RESETLOGS option.
Disaster Recovery with a Control File Autobackup
Assume that you lose both the target database and the recovery catalog. All that you have
remaining is a tape with RMAN backups of the target database and archived redo logs. Can you
still recover the database? Yes, assuming that you enabled the control file autobackup feature. In
a disaster recovery situation, RMAN can determine the name of a control file autobackup even
without a repository available. You can then restore this control file, mount the database, and
perform media recovery.
About Block Media Recovery
You can also use the RMAN BLOCKRECOVER command to perform block media recovery.
Block media recovery recovers an individual corrupt datablock or set of datablocks within a
datafile. In cases when a small number of blocks require media recovery, you can selectively
restore and recover damaged blocks rather than whole datafiles.
Note: Restrictions of block media recovery:
You can only perform block media recovery with Recovery Manager. No SQL*Plus
recovery interface is available.
You can only perform complete recovery of individual blocks. In other words, you cannot
stop recovery before all redo has been applied to the block.
You can only recover blocks marked media corrupt. The
V$DATABASE_BLOCK_CORRUPTION view indicates which blocks in a file were
marked corrupt since the most recent BACKUP, BACKUP ... VALIDATE, or COPY
command was run against the file.
You must have a full RMAN backup. Incremental backups are not allowed.
Blocks that are marked media corrupt are not accessible to users until recovery is
complete. Any attempt to use a block undergoing media recovery results in an error
message indicating that the block is media corrupt.
Like datafile media recovery, block media recovery cannot survive a missing or inaccessible
archived log. Whereas datafile recovery requires an unbroken series of redo changes from the
beginning of recovery to the end, block media recovery only requires an unbroken set of redo
changes for the blocks being recovered.
When RMAN first detects missing or corrupt redo records during block media recovery, it does
not immediately signal an error because the block undergoing recovery may become a newed
block later in the redo stream. When a block is newed all previous redo for that block becomes
irrelevant because the redo applies to an old incarnation of the block. For example, Oracle can
new a block when users delete all the rows recorded in the block or drop a table.
Deciding Whether to Use RMAN with a Recovery Catalog
By default, RMAN connects to the target database in NOCATALOG mode, meaning that it uses
the control file in the target database as the sole repository of RMAN metadata. Perhaps the most
important decision you make when using RMAN is whether to create a recovery catalog as the
RMAN repository for normal production operations. A recovery catalog is a schema created in a
separate database that contains metadata obtained from the target control file.
Benefits of Using the Recovery Catalog as the RMAN Repository
When you use a recovery catalog, RMAN can perform a wider variety of automated backup and
recovery functions than when you use the control file in the target database as the sole repository
of metadata.
The following features are available only with a catalog:
You can store metadata about multiple target databases in a single catalog.
You can store metadata about multiple incarnations of a single target database in the
catalog. Hence, you can restore backups from any incarnation.
By resynchronizing the recovery catalog at intervals less than the
CONTROL_FILE_RECORD_KEEP_TIME setting, you can keep historical metadata.
When restoring and recovering to a time when the database files that exist in the database
are different from the files recorded in the mounted control file, the recovery catalog
specifies which files are needed. Without a catalog, you must first restore a control
file backup that lists the correct set of database files.
If the control file is lost and must be restored from backup, and if persistent
configurations have been made to automate the tape channel allocation, these
configurations are still available when the database is not mounted.
Current datafiles
Backup sets
RMAN can be invoked from the command line on the database host machine like so:
C:\>rman target sys/sys_password
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
Connected to target database: ORCL (DBID=1036216947)
RMAN> show all;
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO
'e:\backup\ctl_sp_bak_%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT 'e:\backup\%U.bak'
MAXPIECESIZE 4G;
CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT 'e:\backup\%U.bak'
MAXPIECESIZE 4G;
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
new RMAN configuration parameters are successfully stored
RMAN>
Complete Steps for Using RMAN through Catalog
Recovery Manager is a platform-independent utility for coordinating your backup and restoration
procedures across multiple servers.
How to Create Recovery Catalog
First create a user to hold the recovery catalog:
CONNECT sys/password@w2k1 AS SYSDBA
Step 1 Create tablespace to hold repository
CREATE TABLESPACE "RMAN"
DATAFILE 'C:\ORACLE\ORADATA\W2K1\RMAN01.DBF' SIZE 6208K REUSE
AUTOEXTEND ON NEXT 64K MAXSIZE 32767M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;
Step 2 Create rman schema owner
CREATE USER rman IDENTIFIED BY rman
TEMPORARY TABLESPACE temp
DEFAULT TABLESPACE rman
QUOTA UNLIMITED ON rman;
GRANT connect, resource, recovery_catalog_owner TO rman;
Step 3 then create the recovery catalog:
C:>rman catalog=rman/rman@w2k1
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
Connected to recovery catalog database
Recovery catalog is not installed
RMAN> create catalog tablespace "RMAN";
Recovery catalog created
RMAN> exit
Recovery Manager complete.
C:>
Step 4 Register Database
Each database to be backed up by RMAN must be registered:
C:>rman catalog=rman/rman@w2k1 target=sys/password@w2k2
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: W2K2 (DBID=1371963417)
connected to recovery catalog database
RMAN> register database;
database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
RMAN>
Full Backup
First we configure several persistent parameters for this instance:
RMAN> configure retention policy to recovery window of 7 days;
RMAN> configure default device type to disk;
RMAN> configure controlfile autobackup on;
RMAN> configure channel device type disk format
'C:\Oracle\Admin\W2K2\Backup%d_DB_%u_%s_%p';
Next we perform a complete database backup using a single command:
RMAN> run
{backup database plus archivelog;
delete noprompt obsolete;
}
The recovery catalog should be resynchronized on a regular basis so that changes to the database
structure and the presence of new archive logs are recorded. Some commands perform partial and
full resyncs implicitly, but if you are in doubt you can perform a full resync using the following
command:
RMAN> resync catalog;
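The effect of the "recovery window of 7 days" retention policy configured above can be sketched as follows (a simplified day-granularity model; real RMAN also weighs archived logs and incremental backups):

```python
# Sketch of "RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS": keep every
# backup inside the window plus the newest backup taken at or before the
# window start (needed to recover to the earliest point in the window).

def obsolete_backups(backup_days, today, window=7):
    """backup_days: days on which full backups completed, ascending."""
    window_start = today - window
    anchors = [d for d in backup_days if d <= window_start]
    keep_from = max(anchors) if anchors else min(backup_days)
    return [d for d in backup_days if d < keep_from]

# Backups on days 1, 5, 10 and 14; on day 15 the window starts at day 8,
# so day 5's backup anchors the window and day 1's backup is obsolete.
print(obsolete_backups([1, 5, 10, 14], today=15))  # [1]
```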
SQL>
The error message tells us that file# 4 is missing. Note that although the startup command has
failed, the database is in the mount state. Thus, the database control file, which is also the RMAN
repository, can be accessed by the instance and by RMAN. We now recover the missing file using
RMAN. The transcript of the recovery session is reproduced below (bold lines are typed
commands, comments in italics, the rest is feedback from RMAN):
--logon to RMAN
C:\>rman target /
Recovery Manager: Release 9.2.0.4.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: ORCL (DBID=1507972899)
--restore missing datafile
RMAN> restore datafile 4;
Since we know the file and block number, we can perform block level recovery using RMAN. This is best illustrated by
example:
C:\>rman target /
Recovery Manager: Release 9.2.0.4.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: ORCL (DBID=1507972899)
--restore AND recover specific block
RMAN> blockrecover datafile 4 block 2015;
Starting blockrecover at 26/JAN/05
using target database controlfile instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=19 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=20 devtype=DISK
channel ORA_DISK_1: restoring block(s)
channel ORA_DISK_1: specifying block(s) to restore from backup set
restoring blocks of datafile 00004
channel ORA_DISK_1: restored block(s) from backup piece 1
piece handle=E:\BACKUP\0QGB0UEC_1_1.BAK tag=TAG20050124T152708 params=NULL
channel ORA_DISK_1: block restore complete
starting media recovery
media recovery complete
Finished blockrecover at 26/JAN/05
RMAN>
Now our user should be able to query the table from her SQL*Plus session. Here's her session
transcript after block recovery.
SQL> select count(*) from test_table;
COUNT(*)
----------
217001
SQL>
A couple of important points regarding block recovery:
1. Block recovery can only be done using RMAN.
2. The entire database can be open while performing block recovery.
3. Check all database files for corruption. This is important - there could be other corrupted
blocks. Verification of database files can be done using RMAN or the dbverify utility. To verify
using RMAN simply do a complete database backup with default settings. If RMAN detects
block corruption, it will exit with an error message pointing out the guilty file/block.
'D:\oracle_data\datafiles\ORCL\TEMP01.DBF';
Tablespace altered.
SQL>
Check that the file is available by querying V$TEMPFILE.
Recovery from missing or corrupted redo log group
Case 1: A multiplexed copy of the missing log is available.
If a redo log is missing, it should be restored from a multiplexed copy, if possible. Here's an
example, where I attempt to start up from SQL*Plus when a redo log is missing:
SQL> startup
ORACLE instance started.
Total System Global Area 131555128 bytes
Fixed Size 454456 bytes
Variable Size 88080384 bytes
Database Buffers 41943040 bytes
Redo Buffers 1077248 bytes
Database mounted.
ORA-00313: open failed for members of log group 3 of thread 1
ORA-00312: online log 3 thread 1: 'D:\ORACLE_DATA\LOGS\ORCL\REDO03A.LOG'
SQL>
To fix this we simply copy REDO03A.LOG from its multiplexed location on E: to the above
location on D:
SQL> alter database open;
Database altered.
SQL>
That's it - the database is open for use.
Case 2: All members of a log group lost.
In this case an incomplete recovery is the best we can do. We will lose all transactions from the
missing log and all subsequent logs. We illustrate using the same example as above. The error
message indicates that members of log group 3 are missing. We don't have a copy of this file, so
we know that an incomplete recovery is required. The first step is to determine how much can be
recovered. In order to do this, we query the V$LOG view (when in the mount state) to find the
system change number (SCN) that we can recover to (Reminder: the SCN is a monotonically
increasing number that is incremented whenever a commit is issued)
--The database should be in the mount state for v$log access
SQL> select first_change# from v$log where group#=3;
FIRST_CHANGE#
-------------
370255
SQL>
The FIRST_CHANGE# is the first SCN stamped in the missing log. This implies that the last
SCN stamped in the previous log is 370254 (FIRST_CHANGE#-1). This is the highest SCN that
we can recover to. In order to do the recovery we must first restore ALL datafiles to this SCN,
followed by recovery (also up to this SCN). This is an incomplete recovery, so we must open the
database resetlogs after we're done. Here's a transcript of the recovery session (typed commands
in bold, comments in italics, all other lines are RMAN feedback):
C:\>rman target /
Recovery Manager: Release 9.2.0.4.0 Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: ORCL (DBID=1507972899)
--Restore ENTIRE database to determined SCN
RMAN> restore database until scn 370254;
Starting restore at 26/JAN/05
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to D:\ORACLE_DATA\DATAFILES\ORCL\SYSTEM01.DBF
restoring datafile 00004 to D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF
channel ORA_DISK_2: starting datafile backupset restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
restoring datafile 00002 to D:\ORACLE_DATA\DATAFILES\ORCL\UNDOTBS01.DBF
restoring datafile 00003 to D:\ORACLE_DATA\DATAFILES\ORCL\TOOLS01.DBF
channel ORA_DISK_2: restored backup piece 1
piece handle=E:\BACKUP\13GB14IB_1_1.BAK tag=TAG20050124T171139 params=NULL
channel ORA_DISK_2: restore complete
channel ORA_DISK_1: restored backup piece 1
piece handle=E:\BACKUP\14GB14IB_1_1.BAK tag=TAG20050124T171139 params=NULL
channel ORA_DISK_1: restore complete
Finished restore at 26/JAN/05
--Recover database
RMAN> recover database until scn 370254;
Starting recover at 26/JAN/05
using channel ORA_DISK_1
using channel ORA_DISK_2
starting media recovery
archive log thread 1 sequence 9 is already on disk as file
E:\ORACLE_ARCHIVE\ORCL\1_9.ARC
archive log thread 1 sequence 10 is already on disk as file
E:\ORACLE_ARCHIVE\ORCL\1_10.ARC
archive log thread 1 sequence 11 is already on disk as file
E:\ORACLE_ARCHIVE\ORCL\1_11.ARC
RMAN>
The following points should be noted:
1. The entire database must be restored to the SCN that has been determined by querying v$log.
2. All changes beyond that SCN are lost. This method of recovery should be used only if you are
sure that you cannot do better. Be sure to multiplex your redo logs, and (space permitting) your
archived logs!
3. The database must be opened with RESETLOGS, as a required log has not been applied. This
resets the log sequence to zero, thereby rendering all prior backups worthless. Therefore, the first
step after opening a database RESETLOGS is to take a fresh backup. Note that the
RESETLOGS option must be used for any incomplete recovery.
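The SCN arithmetic used above, recovering to FIRST_CHANGE# - 1 of the lost group, can be sketched as follows (the rows are hypothetical V$LOG output):

```python
# Sketch of determining the highest recoverable SCN when all members of a
# redo log group are lost: it is FIRST_CHANGE# - 1 of the missing group,
# i.e. the last SCN stamped in the preceding log.

def highest_recoverable_scn(vlog_rows, lost_group):
    """vlog_rows: (group#, first_change#) pairs as queried from V$LOG."""
    first_change = dict(vlog_rows)[lost_group]
    return first_change - 1

# Hypothetical V$LOG contents; group 3 is the lost group.
rows = [(1, 368000), (2, 369100), (3, 370255)]
print(highest_recoverable_scn(rows, 3))  # 370254
```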
Disaster Recovery
Introduction:
By disaster we mean a situation in which your database server has been destroyed and has taken
all your database files (control files, logs and data files) with it. Obviously, recovery from a
disaster of this nature depends on what you have in terms of backups and hardware resources. We
assume you have the following available after the disaster:
With the above items at hand, it is possible to recover all data up to the last full backup. One can
do better if subsequent archive logs (after the last backup) are available. In our case these aren't
available, since our only archive destination was on the destroyed server. Oracle provides
methods to achieve better data protection. We will discuss some of these towards the end of the
article.
Now on with the task at hand, the high-level steps involved in disaster recovery are:
Ideally the server should have the same number of disks as the original. The new disks
should also have enough space to hold all software and data that was on the original
server.
The operating system environment should be the same as the original, right up to service
pack and patch level.
The new server must have enough memory to cater to Oracle and operating system / other
software requirements. Oracle memory structures (Shared pool, db buffer caches etc) will
be sized identically to the original database instance. Use of the backup server parameter
file will ensure this.
Install the same version of Oracle as was on the destroyed server. The version number
should match right down to the patch level, so this may be a multi-step process involving
installation followed by the application of one or more patch sets and patches.
Create a listener using the Network Configuration Assistant. Ensure that it has the same
name and listening ports as the original listener. Relevant listener configuration
information can be found in the backed up listener.ora file.
After software installation is completed, create all directories required for datafiles,
(online and archived) logs, control files and backups. All directory paths should match
those on the original server.
Don't worry if you do not know where the database files should be located. You can
obtain the required information from the backup spfile and control file at a later stage.
Continue reading - we'll come back to this later.
Copy PASSWORD and TNSNAMES file from backup: The backed up password file and
tnsnames.ora files should be copied from the backup directory to the proper locations.
The default locations for the password and tnsnames files are ORACLE_HOME\database and
ORACLE_HOME\network\admin respectively.
Set ORACLE_SID environment variable: ORACLE_SID should be set to the proper SID
name (ORCL in our case). This can be set either in the registry (registry key:
HKLM\Software\Oracle\HOME<X>\ORACLE_SID) or from the system applet in the
control panel
Invoke RMAN and set the DBID: We invoke rman and connect to the target database as
usual. No login credentials are required since we connect from an OS account belonging
to ORA_DBA. Note that RMAN accepts a connection to the database although the
database is yet to be recovered. RMAN doesn't as yet "know" which database we intend
to connect to. We therefore need to identify the (to be restored) database to RMAN. This
is done through the database identifier (DBID). The DBID can be figured out from the
name of the controlfile backup. Example: if you use the controlfile backup format
'ctl_sp_bak_%F' (as configured earlier), your controlfile backup name will be something like
"CTL_SP_BAK_C-1507972899-20050228-00". In this case the DBID is 1507972899. Here's a
transcript illustrating the process of setting the DBID:
C:\>rman
Recovery Manager: Release 9.2.0.4.0 Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
RMAN> set dbid 1507972899
executing command: SET DBID
RMAN>connect target /
connected to target database (not started)
RMAN>
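The DBID extraction described above can be sketched in Python (a convenience script, not something RMAN itself provides; it assumes the %F autobackup naming convention c-<DBID>-<YYYYMMDD>-<sequence>):

```python
import re

# Sketch: recover the DBID from a controlfile-autobackup filename produced
# by the %F format, e.g. CTL_SP_BAK_C-1507972899-20050228-00.

def dbid_from_autobackup(filename):
    match = re.search(r'[Cc]-(\d+)-\d{8}-\d+', filename)
    return int(match.group(1)) if match else None

print(dbid_from_autobackup("CTL_SP_BAK_C-1507972899-20050228-00"))
# 1507972899
```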
Restore spfile from backup: To restore the spfile, you first need to startup the database in the
nomount state. This starts up the database using a dummy parameter file. After that you can
restore the spfile from the backup (which has been restored from tape). Finally you restart the
database in nomount state. Here is an example RMAN transcript for the foregoing procedure.
Note the difference in SGA size and components between the two startups:
RMAN> startup nomount
RMAN>
Once the control file is restored and the database mounted, the instance has its proper configuration parameters, as these are stored in the control file.
Here is a RMAN session transcript showing the steps detailed here:
RMAN> restore controlfile from 'e:\backup\CTL_SP_BAK_C-1507972899-20050228-00';
Starting restore at 01/MAR/05
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=13 devtype=DISK
channel ORA_DISK_1: restoring controlfile
channel ORA_DISK_1: restore complete
replicating controlfile
input filename=D:\ORACLE_DATA\CONTROLFILE\ORCL\CONTROL01.CTL
output filename=E:\ORACLE_DATA\CONTROLFILE\ORCL\CONTROL02.CTL
output filename=C:\ORACLE_DUP_DEST\CONTROLFILE\ORCL\CONTROL03.CTL
Finished restore at 01/MAR/05
RMAN> shutdown
Oracle instance shut down
RMAN> exit
Recovery Manager complete.
C:\>rman target /
Recovery Manager: Release 9.2.0.4.0 Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database (not started)
RMAN>startup mount;
Oracle instance started
database mounted
Total System Global Area 1520937712 bytes
Fixed Size 457456 bytes
Variable Size 763363328 bytes
Database Buffers 754974720 bytes
Redo Buffers 2142208 bytes
RMAN> show all;
using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO
'e:\backup\ctl_sp_bak_%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
Standby Database
Oracle Standby Database
This document contains a guide on setting up standby databases for maximum protection, using
command line mode and avoiding the GUI. To do this, Oracle9i has a feature called Data Guard,
and the following sections describe the tasks undertaken to set up primary and standby databases
on a couple of Windows servers.
Database PROD is replicated from the production server to the standby server via Data Guard.
Data Guard Operational Prerequisites:
The same Oracle software release must be used for both primary and standby databases. The
operating system running at the primary and standby locations must be the same, but the
operating system release need not be the same.
The Primary Database must run in ARCHIVELOG mode.
The hardware and operating system architecture on the primary and standby locations must be
the same.
Each primary and standby database must have its own control file.
Architecture:
The Oracle9i Data Guard architecture incorporates the following items:
Primary Database - A production database that is used to create standby databases. The archive
logs from the primary database are transferred and applied to standby databases. Each standby
can only be associated with a single primary database, but a single primary database can be
associated with multiple standby databases.
Standby Database - A replica of the primary database.
Log Transport Services - Control the automatic transfer of archive redo log files from the
primary database to one or more standby destinations.
Network Configuration - The primary database is connected to one or more standby databases
using Oracle Net.
Log Apply Services - Apply the archived redo logs to the standby database. The Managed
Recovery Process (MRP) actually does the work of maintaining and applying the archived redo
logs.
Role Management Services - Control the changing of database roles from primary to standby.
The services include switchover, switchback and failover.
The services required on the primary database are:
Log Writer Process (LGWR) - Collects redo information and updates the online redo logs. It can
also create local archived redo logs and transmit online redo to standby databases.
Archiver Process (ARCn) - One or more archiver processes make copies of online redo logs
either locally or remotely for standby databases.
Fetch Archive Log (FAL) Server - Services requests for archive redo logs from FAL clients
running on multiple standby databases. Multiple FAL servers can be run on a primary database,
one for each FAL request.
The services required on the standby database are:
Fetch Archive Log (FAL) Client - Pulls archived redo log files from the primary site. Initiates
transfer of archived redo logs when it detects a gap sequence.
Remote File Server (RFS) - Receives archived and/or standby redo logs from the primary
database.
Archiver (ARCn) Processes - Archives the standby redo logs applied by the managed recovery
process (MRP).
Managed Recovery Process (MRP) - Applies archive redo log information to the standby
database.
Step-by-Step Standby Database Configuration:
Step1: Configure Listener in Production Server and Standby Server.
TIPS: Create the listener on the Standby Server by using the Net Configuration Assistant.
TIPS: This guide assumes a listener is already configured with the name PROD on the Primary
Node. If a listener is not configured on the Primary Node, create one by using the Net
Configuration Assistant on the Primary Server.
Step2: Configure TNSNAMES.ORA on the Production Server and the Standby Server.
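A typical tnsnames.ora entry for the STANDBY alias used in this chapter is sketched below; the host name STBYSRV is an assumed placeholder, while the service name PROD matches the database used throughout this guide. A matching PROD alias pointing at the production host is also needed so the standby can reach the primary.
STANDBY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = STBYSRV)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = PROD))
  )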
Step3: Put your production database in archive log mode if it is not already running in archive
log mode, and add the following entries to the init.ora file on the Production Server:
LOG_ARCHIVE_START=TRUE
LOG_ARCHIVE_DEST_1='LOCATION=C:\oracle\database\archive MANDATORY
REOPEN=30'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY REOPEN=300'
LOG_ARCHIVE_DEST_STATE_1=enable
LOG_ARCHIVE_DEST_STATE_2=enable
LOG_ARCHIVE_FORMAT=ARC%S.arc
REMOTE_ARCHIVE_ENABLE=true
STANDBY_FILE_MANAGEMENT=AUTO
STANDBY_ARCHIVE_DEST = 'C:\standby\archive'
# db_file_name_convert: do not need; same directory structure
# log_file_name_convert: do not need; same directory structure
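If the production database is not yet in archive log mode, a typical Oracle9i sequence for enabling it (assuming the parameters above are already in init.ora) is:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;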
Step4 : After adding the above parameters to init.ora, copy the init.ora file from the production
server to the standby server into the Oracle_Home\Database\ folder.
Step5 : Set up the same directory structure on both systems.
Step6 : Place the production database in FORCE LOGGING mode by using the following statement:
SQL> alter database force logging;
Database altered.
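The setting can be verified afterwards by querying v$database:
SQL> select force_logging from v$database;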
Step7 : Identify the primary database Data files:
SQL> select name from v$datafile;
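The online redo log files, which also need to be copied in Step8, can be listed in the same way:
SQL> select member from v$logfile;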
Step8 : Make a copy of the production data files and redo log files by performing the following steps:
Shut down the primary database (which, after Step3, is already running in archive log mode):
SQL> shutdown immediate
Copy the data files and redo log files to the standby location by using OS commands.
Note: The primary database must remain shut down while copying the files.
Step9 : Restart the Production Database
SQL> startup;
Step10 : Create a control file for the standby database. Issue the following command on the
production database:
SQL> Alter database create standby controlfile as 'c:\controlfile_standby.ctl';
Database altered.
Note: The filename for the newly created standby control file must be different from the current control file
of the production database. Also, the control file for the standby database must be created after
the last timestamp of the backup data files.
Step11 : Create an init.ora file for the standby database.
Copy the init.ora file from the Production Server to the Standby Server into the Database folder
in the Oracle home directory and add the following entries:
LOG_ARCHIVE_START = TRUE
LOG_ARCHIVE_DEST_1 = 'LOCATION=c:\oracle\database\archive MANDATORY'
LOG_ARCHIVE_FORMAT = arch%s.arc
REMOTE_ARCHIVE_ENABLE = true
STANDBY_FILE_MANAGEMENT = AUTO
LOG_ARCHIVE_MIN_SUCCEED_DEST=1
STANDBY_ARCHIVE_DEST = 'C:\standby\archive'
fal_server = FAL
fal_client = STANDBY
# db_file_name_convert: do not need; same directory structure
# log_file_name_convert: do not need; same directory structure
Note: Although most of the initialization parameter settings in the text initialization parameter
file that you copied from the primary system are also appropriate for the physical standby
database, some modifications need to be made.
Edit created pfile from primary database.
control_files - Specify the path name and filename for the standby control file.
standby_archive_dest - Specify the location of the archived redo logs that will be received from
the primary database.
db_file_name_convert - Specify the location of the primary database datafiles followed by the
standby location of the datafiles. This parameter will convert the filename of the primary
database datafiles to the filename of the standby datafile filenames. If the standby database is on
the same system as the primary database or if the directory structure where the datafiles are
located on the standby site is different from the primary site then this parameter is required.
log_file_name_convert - Specify the location of the primary database logs followed by the
standby location of the logs. This parameter will convert the filename of the primary database log
to the filenames of the standby log. If the standby database is on the same system as the primary
database or if the directory structure where the logs are located on the standby site is different
from the primary site then this parameter is required.
log_archive_dest_1 - Specify the location where the redo logs are to be archived on the standby
system. (If a switchover occurs and this instance becomes the primary database, then this
parameter will specify the location where the online redo logs will be archived.)
standby_file_management - Set to AUTO.
remote_archive_enable - Set to TRUE.
instance_name - If this parameter is defined, specify a different value for the standby database
than the primary database when the primary and standby databases reside on the same host.
lock_name_space - Specify the standby database instance name. Use this parameter when you
create the physical standby database on the same system as the primary database. Change the
INSTANCE_NAME parameter to a value other than its primary database value, and set this
LOCK_NAME_SPACE initialization parameter to the same value that you specified for the
standby database INSTANCE_NAME initialization parameter.
Also change the values of the parameters background_dump_dest, core_dump_dest and
user_dump_dest to specify the location of the standby database.
Step12 : Create a Windows service on the Standby Server.
If the standby database is running on a Windows system, the oradim utility is used to create the
Windows service. Issue the following command from a command prompt window:
C:\>oradim -new -sid PROD -intpwd PROD -startmode a
Step: 13 Start the physical standby database.
Start up the standby database using the following commands:
C:\>set oracle_sid=PROD
C:\>sqlplus /nolog
SQL> conn sys/prod as sysdba
Connected to an idle instance.
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 135338868 bytes
Fixed Size 453492 bytes
Variable Size 109051904 bytes
Database Buffers 25165824 bytes
Redo Buffers 667648 bytes
SQL> alter database mount standby database;
Database altered.
Step: 14 Initiate log apply services. The example includes the DISCONNECT FROM SESSION
option so that log apply services run in a background session.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
Database altered.
Step: 15 Now go to the production database prompt:
SQL> alter system switch logfile;
System altered.
Step: 16 Verifying the standby database. On the standby database, query the
V$MANAGED_STANDBY view to verify that redo is being received and applied.
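The query used here, matching the result shown below, is the same one run later on the production database:
SQL> select process, status, thread#, sequence#, block#, blocks from v$managed_standby;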
Result:
PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
MRP0 WAIT_FOR_LOG 1 4205 0 0
RFS RECEIVING 0 0 0 0
RFS RECEIVING 1 3524 2445 2445
RFS WRITING 1 4205 14947 20480
If we run the same query on the production database:
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM
V$MANAGED_STANDBY;
PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
ARCH CLOSING 1 4203 2049 124
ARCH CLOSING 1 4204 1 1551
LGWR WRITING 1 4205 14947 1
From the query on the primary database, we see the current sequence being written to in the redo
log area is 4205, and on the standby database we also see the current archive log being applied is
for sequence 4205. In the directory that receives archive files on the standby database, the file
DWH0P01_0000004205.arc will exist and will be the same size as the redo log on the primary
database. However the primary database will not have DWH0P01_0000004205.arc as a file in
the archive area, as a log switch will not have occurred yet, but both databases are synchronized
at the same sequence and block number, 14947.
Step: 18 Log files to check on both systems.
On the production database, the alert log and the files generated by lgwr and lnsn in the bdump
directory can be checked for any problems. On the standby database, the alert log and the files
generated by mrpn in the bdump directory can be checked for any problems.
Creating a Standby Database by Using RMAN
You can use the Recovery Manager DUPLICATE TARGET DATABASE FOR STANDBY
command to create a standby database.
RMAN automates the following steps of the creation procedure:
Restores the standby control file.
Restores the primary datafile backups and copies.
Optionally, RMAN recovers the standby database (after the control file has been mounted) up
to the specified time or to the latest archived redo log generated.
RMAN leaves the database mounted so that the user can activate it, place it in manual or
managed recovery mode, or open it in read-only mode.
After the standby database is created, RMAN can back up the standby database and archived
redo logs as part of your backup strategy. These standby backups are fully interchangeable with
primary backups. In other words, you can restore a backup of a standby datafile to the primary
database, and you can restore a backup of a primary datafile to the standby database.
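For example, a backup taken on the standby host might look like the following; the connect string is an assumption for illustration:
CMD> rman target sys/change_on_install@standby_conn_string
RMAN> backup database;
RMAN> backup archivelog all;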
C:\>set oracle_sid=PROD
C:\>sqlplus /nolog
SQL> conn sys/prod as sysdba
Connected to an idle instance.
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 135338868 bytes
Fixed Size 453492 bytes
Variable Size 109051904 bytes
Database Buffers 25165824 bytes
Redo Buffers 667648 bytes
Step13 : Go to the Standby server and connect to RMAN. Run the following:
CMD> rman target sys/change_on_install@prod_conn_string
RMAN > connect auxiliary sys/change_on_install
Step14 : The following RUN block can be used to fully duplicate the target database from the
latest full backup. This will create the standby database:
run {
# Set the last log sequence number
set until sequence = 100 thread = 1;
# Allocate the channel for the duplicate work
allocate auxiliary channel ch1 type disk;
# Duplicate the target database as a standby
duplicate target database for standby dorecover nofilenamecheck ;
}
RMAN> exit
Step15 : Put the Standby in Managed recovery Mode
On the standby database, run the following:
C:\>sqlplus "/ as sysdba"
SQL> recover standby database;
SQL> alter database recover managed standby database disconnect;
Database altered.
To restart the standby database and resume managed recovery (for example, after a reboot), run:
CONNECT / AS SYSDBA
SHUTDOWN IMMEDIATE
STARTUP NOMOUNT
ALTER DATABASE MOUNT STANDBY DATABASE;
RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
Database Switchover
A database can be in one of two mutually exclusive modes (primary or standby). These roles can
be altered at runtime without loss of data or resetting of redo logs. This process is known as a
Switchover and can be performed using the following statements:
While connected to the primary database, issue the following commands:
CONNECT / AS SYSDBA
ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY;
SHUTDOWN IMMEDIATE;
STARTUP NOMOUNT
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
DISCONNECT FROM SESSION;
Now the original Primary database is in Standby mode and waiting for the new Primary database
to activate, which is done while connected to the standby database (not the original primary)
CONNECT / AS SYSDBA
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
SHUTDOWN IMMEDIATE;
STARTUP
This process has no effect on alternative standby locations. The process of converting the
instances back to their original roles is known as a Switchback. The switchback is accomplished
by performing another switchover.
Database Failover
A failover recovers all or some of the application data using the standby redo logs, therefore
avoiding reinstantiation of other standby databases. If completed successfully, only the old
primary database will need to be reinstantiated as a standby database.
Standby Diagnosis Query for Primary Node
Query 1: protection_level should match the protection_mode after the next log switch.
select name,database_role role,log_mode,protection_mode,protection_level from v$database;
NAME  ROLE     LOG_MODE    PROTECTION_MODE      PROTECTION_LEVEL
TEST  PRIMARY  ARCHIVELOG  MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE
1 row selected.
Query 2: ARCHIVER can be (STOPPED | STARTED | FAILED). FAILED means that the
archiver failed to archive a log last time, but will try again within 5 minutes.
LOG_SWITCH_WAIT is the ARCHIVE LOG/CLEAR LOG/CHECKPOINT event that log
switching is waiting for. Note that if ALTER SYSTEM SWITCH LOGFILE is hung, but there is
room in the current online redo log, then the value is NULL.
select instance_name,host_name,version,archiver,log_switch_wait from v$instance;
INSTANCE_NAME  HOST_NAME    VERSION    ARCHIVER  LOG_SWITCH_WAIT
TEST           flex-suntdb  9.2.0.5.0  STARTED
1 row selected.
Query 4: Force logging is not mandatory but is recommended. Supplemental logging must be
enabled if the standby associated with this primary is a logical standby. During normal operations
it is acceptable for SWITCHOVER_STATUS to be SESSIONS ACTIVE or TO STANDBY.
select force_logging,remote_archive,supplemental_log_data_pk,supplemental_log_data_ui,
switchover_status,dataguard_broker from v$database;
FORCE_LOGGING  REMOTE_ARCHIVE  SUP  SUP  SWITCHOVER_STATUS  DATAGUARD_BROKER
NO             ENABLED         NO   NO   SESSIONS ACTIVE    DISABLED
1 row selected.
Query 5: This query produces a list of all archive destinations. It shows whether they are
enabled, what process is servicing each destination, whether the destination is local or remote,
and, if remote, what the current mount ID is.
select dest_id "ID",destination,status,target,schedule,process,mountid mid from v$archive_dest
order by dest_id;
ID  DESTINATION         STATUS  TARGET   SCHEDULE  PROCESS  MID
1   /applprod/archprod  VALID   PRIMARY  ACTIVE    ARCH     0
2   STANDBY             VALID   STANDBY  ACTIVE    ARCH     0
........
10 rows selected.
Query 6: This select will give further detail on the destinations as to what options have been set.
Register indicates whether or not the archived redo log is registered in the remote destination
control file.
select dest_id "ID",archiver,transmit_mode,affirm,async_blocks async, net_timeout
net_time,delay_mins delay,reopen_secs reopen, register,binding from v$archive_dest order by
dest_id;
ID  ARCHIVER  TRANSMIT_MOD  AFF  ASYNC  NET_TIME  DELAY  REOPEN  REG  BINDING
1   ARCH      SYNCHRONOUS   NO                           300     YES  MANDATORY
2   ARCH      SYNCHRONOUS   NO                           300     YES  OPTIONAL
...
10 rows selected.
Query 7: The following select will show any errors that occurred the last time an attempt was
made to archive to the destination. If ERROR is blank and status is VALID then the archive
completed correctly.
select dest_id,status,error from v$archive_dest;
Query 8: The query below will determine if any error conditions have been reached by querying
the v$dataguard_status view (view only available in 9.2.0 and above):
select message, timestamp from v$dataguard_status where severity in ('Error','Fatal') order by
timestamp;
no rows selected
Query 9: The following query will determine the current sequence number and the last sequence
archived. If you are remotely archiving using the LGWR process then the archived sequence
should be one higher than the current sequence. If remotely archiving using the ARCH process
then the archived sequence should be equal to the current sequence. The applied sequence
information is updated at log switch time.
select ads.dest_id,max(sequence#) "Current Sequence", max(log_sequence) "Last Archived"
from v$archived_log al, v$archive_dest ad, v$archive_dest_status ads where
ad.dest_id=al.dest_id and al.dest_id=ads.dest_id group by ads.dest_id;
DEST_ID  Current Sequence  Last Archived
1        233               233
2        233               233
2 rows selected.
Query 10: The following select will attempt to gather as much information as possible from the
standby. SRLs are not supported with Logical Standby until Version 10.1
select dest_id id,database_mode db_mode,recovery_mode,
protection_mode,standby_logfile_count "SRLs", standby_logfile_active ACTIVE, archived_seq#
from v$archive_dest_status;
ID  DB_MODE          RECOVER  PROTECTION_MODE      SRLs  ACTIVE  ARCHIVED_SEQ#
1   OPEN             IDLE     MAXIMUM PERFORMANCE  0     0       233
2   MOUNTED-STANDBY  IDLE     MAXIMUM PERFORMANCE  0     0       233
...
10 rows selected.
Query 11: Query v$managed_standby to see the status of the processes involved in shipping redo
on this system. It does not include the processes needed to apply redo.
select process,status,client_process,sequence# from v$managed_standby;
PROCESS  STATUS   CLIENT_P  SEQUENCE#
ARCH     CLOSING  ARCH      233
ARCH     CLOSING  ARCH      232
2 rows selected.
Query 12: The following query is run on the primary to see if SRLs have been created in
preparation for switchover.
select group#,sequence#,bytes from v$standby_log;
no rows selected
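If no standby redo logs exist, they can be added on the primary in preparation for switchover; the group number, file name and size below are illustrative, and the size should match that of your online redo logs:
SQL> alter database add standby logfile group 4 ('C:\oracle\oradata\prod\srl01.log') size 10M;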
Query 13: The above SRLs should match the ORLs returned below in number and in size:
select group#,thread#,sequence#,bytes,archived,status from v$log;
Standby Diagnosis Query for Standby Node
Query 4: This query produces a list of all archive destinations and shows whether they are
enabled, what process is servicing each destination, whether the destination is local or remote,
and, if remote, what the current mount ID is. For a physical standby we should have at least one
remote destination that points to the primary, but it should be deferred.
select dest_id "ID",destination,status,target,archiver,schedule,process,mountid from
v$archive_dest;
Query 5: If the protection mode of the standby is set to anything higher than max performance
then we need to make sure the remote destination that points to the primary is set with the correct
options else we will have issues during switchover.
select dest_id,process,transmit_mode,async_blocks,net_timeout,delay_mins,reopen_secs,register,binding from
v$archive_dest;
Query 6: The following select will show any errors that occurred the last time an attempt was
made to archive to the destination. If ERROR is blank and status is VALID then the archive
completed correctly.
select dest_id,status,error from v$archive_dest;
Query 7: Determine if any error conditions have been reached by querying the
v$dataguard_status view (view only available in 9.2.0 and above):
select message, timestamp from v$dataguard_status where severity in ('Error','Fatal') order by
timestamp;
Query 11: Verify the last sequence# received and the last sequence# applied on the standby
database.
select max(al.sequence#) "Last Seq Received", max(lh.sequence#) "Last Seq Applied" from
v$archived_log al, v$log_history lh;
Query 12: The V$ARCHIVE_GAP fixed view on a physical standby database only returns the
next gap that is currently blocking redo apply from continuing. After resolving the identified gap
and starting redo apply, query the V$ARCHIVE_GAP fixed view again on the physical standby
database to determine the next gap sequence, if there is one.
select * from v$archive_gap;
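After the missing archived logs identified by the gap query have been copied to the standby host, they can be registered manually so that redo apply can continue; the file name below is illustrative:
SQL> alter database register logfile 'C:\standby\archive\ARC00050.arc';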
Analyze the current database status and make a plan for the upgrade process, which includes:
Meeting with everyone involved in the upgrade process and clearly defining their roles
Performing test upgrades
Scheduling the test and production upgrades
Performing backups of the production database
Completing the upgrade of the production database
Performing backups of the newly upgraded Oracle Database production database
A database can be upgraded by using one of the following methods:
Export/Import
DBUA
Manually by using scripts
Export/Import
The Export and Import utilities physically copy data from the current database to a new database.
The current database's Export utility copies specified parts of the database into an export dump
file. Then, the Import utility of the new Oracle Database release loads the exported data into a
new database.
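As a sketch, a full-database transfer with the exp/imp utilities might look like the following; the SYSTEM password and file names are placeholders, and the target database must already exist:
C:\>exp system/manager FULL=Y FILE=full.dmp LOG=exp.log
C:\>imp system/manager FULL=Y FILE=full.dmp LOG=imp.log IGNORE=Y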
DBUA
The Database Upgrade Assistant does not begin the upgrade until it completes all of the pre-upgrade steps.
The Database Upgrade Assistant automatically modifies or creates new required tablespaces,
invokes the appropriate upgrade scripts, archives the redo logs, and disables archiving during the
upgrade phase.
While the upgrade is running, the Database Upgrade Assistant shows the upgrade progress for
each component. The Database Upgrade Assistant writes detailed trace and log files and
produces a complete HTML report for later reference. To enhance security, the Database
Upgrade Assistant automatically locks new user accounts in the upgraded database. The
Database Upgrade Assistant then proceeds to create new configuration files (parameter and
listener files) in the new Oracle home.
Manual Upgrade
A manual upgrade consists of running SQL scripts and utilities from a command line to upgrade
a database to the new Oracle Database release.
While a manual upgrade gives you finer control over the upgrade process, it is more susceptible
to error if any of the upgrade or pre-upgrade steps are either not followed or are performed out of
order.
When manually upgrading a database, you must perform the following pre-upgrade steps:
Analyze the database using the Pre-Upgrade Information Tool. The Upgrade Information Tool is
a SQL script that ships with the new Oracle Database 10g release, and must be run in the
environment of the database being upgraded.
The Upgrade Information Tool displays warnings about possible upgrade issues with the
database. It also displays information about required initialization parameters for the new Oracle
Database 10g release. Before starting up the new Oracle Database 10g release, make the
necessary adjustments to the database.
Add free space to any tablespaces in the database that require additional space, and drop
and re-create any redo log files whose size is insufficient for the upgrade.
Adjust the parameter file for the upgrade, removing obsolete initialization parameters and
adjusting initialization parameters that might cause upgrade problems.
Depending on the release of the database being upgraded, you may need to perform
additional pre-upgrade steps.
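Free space per tablespace can be checked before the upgrade with a query such as:
SQL> select tablespace_name, round(sum(bytes)/1024/1024) free_mb
  2  from dba_free_space group by tablespace_name;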
The release number 10.1.0.1.0 is displayed. The significance of each number (reading from left
to right) is shown in the following table:
Number  Significance
10      Major database release number
1       Database maintenance release number
0       Application server release number
1       Component specific release number
0       Platform specific release number
Upgrade Path
Upgrade path for Oracle database 11g release 1
Direct Upgrade Path
Source Database Target Database
9.2.0.4.0 (or higher) 11.1.x
10.1.0.2.0 (or higher) 11.1.x
10.2.0.1.0 (or higher) 11.1.x
Indirect Upgrade Path
Source Database       Intermediate Upgrade Path  Target Database
7.3.3.0.0 (or lower)  7.3.4.x -> 9.2.0.8         11.1.x
8.0.5.0.0 (or lower)  8.0.6.x -> 9.2.0.8         11.1.x
8.1.7.0.0 (or lower)  8.1.7.4 -> 9.2.0.8         11.1.x
9.0.1.3.0 (or lower)  9.0.1.4 -> 9.2.0.8         11.1.x
9.2.0.3.0 (or lower)  9.2.0.8                    11.1.x
Upgrade Path for Oracle Database 10g Release 2
Source Database  Intermediate Upgrade Path  Target Database
7.3.4            8.1.7.4                    10.2.x
8.0.n            8.1.7.4                    10.2.x
8.1.n            8.1.7.4                    10.2.x
Converting an Oracle Database from 32-bit to 64-bit
Step 1
Start SQL*Plus, connect to the 32-bit database instance AS SYSDBA and shut down the database
by using the SHUTDOWN IMMEDIATE command.
Step 2
Install the 64-bit version of the same Oracle software release in a different ORACLE_HOME.
Step 4
If you are working with an Oracle 8.0.x, Oracle8i or Oracle9i 9.0.x database, run STARTUP
RESTRICT:
SQL> STARTUP RESTRICT
If you are working with an Oracle9i 9.2.0.x database, run STARTUP MIGRATE:
SQL> STARTUP MIGRATE
If you are working with an Oracle10g or Oracle11g database, run STARTUP UPGRADE:
SQL> STARTUP UPGRADE
Step 9
Set the system to spool results to a log file for later verification of success:
SQL> SPOOL catoutw.log
Step 10
Run utlirp.sql:
SQL> @$ORACLE_HOME/rdbms/admin/utlirp.sql
This script recompiles existing PL/SQL modules in the format required by the new database.
This script first alters certain dictionary tables. Then, it reloads package STANDARD and
DBMS_STANDARD, which are necessary for using PL/SQL.
Optional Steps:
If the patchset level is not being changed (for example, you are migrating a 9.2.0.8 32-bit
database to 9.2.0.8 64-bit), then the optional steps are not needed.
If the patchset level is being changed, then the optional steps must be run.
If you are working with an Oracle 8.0, Oracle8i or Oracle9i 9.0.x database, run the following
script:
SQL> @$ORACLE_HOME/rdbms/admin/catalog.sql
If you are working with an Oracle9i 9.2.0.x database, run the following
script:
SQL> @$ORACLE_HOME/rdbms/admin/catpatch.sql
If you are migrating an Oracle10g 10.1.0.x or 10.2.0.x database, run the following script:
SQL> @$ORACLE_HOME/rdbms/admin/catupgrd.sql
Step 11
Run utlrp.sql:
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
This script recompiles all invalid objects
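After utlrp.sql completes, a quick check that no invalid objects remain:
SQL> select count(*) from dba_objects where status = 'INVALID';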
TIPS:
If you are using the same machine for converting from 32-bit to 64-bit, you only need to create a
new ORACLE_HOME for the 64-bit Oracle software; you will use the same physical database
structure.
If you are using a different machine for the conversion, install the 64-bit Oracle software on the
new machine and clone your 32-bit database onto it.
If you are using a UNIX-based OS and want to use a different machine for the conversion, it is
better to create the same database file structure and restore from the old box to the new box.
Moving From the Standard Edition to the Enterprise Edition and Vice Versa
If you are using a Standard Edition database (Release prior to 11gR1), then you can change it to
an Enterprise Edition database.
Step 1
The Enterprise Edition server software must be at the same release as the existing Standard
Edition database software.
Step 2
Shut down the database.
Step 3
Shut down all Oracle services, including the database service.
Step 4
De-install the Standard Edition oracle software
Step 5
Install the Enterprise Edition server software using the Oracle Universal Installer.
Step 6
Select the same Oracle home that was used for the de-installed Standard Edition. During the
installation, be sure to select the Enterprise Edition. When prompted, choose Software only from
the Database Configuration screen.
Step 7
Start up your database.
Your database is now upgraded to the Enterprise Edition.
Tips:
1. You can only convert a Standard Edition database to the Enterprise Edition by using the
above method.
2. If you want to convert an Enterprise Edition database to a Standard Edition database, you
must use an Export/Import operation. Without Export/Import you cannot convert.
Inside Story:
1. The Enterprise Edition contains data dictionary objects which are not available in the
Standard Edition. If you just install the Standard Edition software, then you will end up
with data dictionary objects which are useless. Some of them might be invalid and
possibly create problems when maintaining the database.
2. The Export/Import operation does not introduce data dictionary objects specific to the
Enterprise Edition, because the SYS schema objects are not exported. Oracle recommends
using the Standard Edition EXP utility to export the data.
3. After the Import in the Standard Edition database, you are only required to drop all user
schemas related to Enterprise Edition features, such as the MDSYS account used with
Oracle Spatial.
Upgrade Project
Project A
Upgrade Oracle Database from Version 8.1.6 to 8.1.7
1.1. Set db_domain = .WORLD (or a valid domain setting for your environment) in the
initialization parameter file.
1.2. Make sure the _SYSTEM_TRIG_ENABLED initialization parameter is set to FALSE in the
initialization parameter file. If this initialization parameter is not currently set, then explicitly set
it to FALSE:
_SYSTEM_TRIG_ENABLED = FALSE
2.1. Stop the OracleServiceSID Oracle service of the Oracle 8.1.6 database, if you are using
Windows:
C:\>NET STOP OracleServiceSID
2.2. Delete the OracleServiceSID at the command line of the 8.1.6 Home, if you are using Windows:
C:\>ORADIM -DELETE -SID sid
Step 3 Install Oracle 8.1.7 database software only in new oracle home.
Step 4 Create the new Oracle database 8.1.7 service at the command prompt using the following
command, if you are using the Windows platform:
C:\>ORADIM -NEW -SID sid -INTPWD password -STARTMODE A
Step 5 Copy your init file from the 8.1.6 Oracle home to the default location in the 8.1.7 Oracle
home and adjust the initialization parameter file for use with the new 8.1.7 release:
db_domain = .WORLD
optimizer_mode = choose
job_queue_processes = 0
aq_tm_processes = 0
Step 6 Connect to the new Oracle 8.1.7 instance as a user with SYSDBA privilege and issue the
following command:
SQL>STARTUP RESTRICT
You don't need to use the PFILE option to specify the location of your initialization parameter
file in this case, because we are using the init file in its default location (which resides in the
8.1.7 Home); we have just copied the init file from the 8.1.6 home to the new 8.1.7 home.
Step 7 Execute following scripts:
SPOOL c:\revoke_restricted_session.log;
SELECT 'REVOKE restricted session FROM ' || username || ';' FROM dba_users
WHERE username NOT IN ('SYS','SYSTEM');
SPOOL OFF;
Step 8 Run Spool File:
@c:\revoke_restricted_session.log;
Step 9 Enable Restricted Session
SQL>ALTER SYSTEM ENABLE RESTRICTED SESSION;
Step 10 Run the migration script, which resides in the 8.1.7 Oracle home rdbms/admin location:
SPOOL catoutu.log
SET ECHO ON
@u0801060.sql # Script for 8.1.6 -> 8.1.7
SET ECHO OFF
SPOOL OFF
Step 11 ALTER SYSTEM DISABLE RESTRICTED SESSION;
Step 12 SHUTDOWN IMMEDIATE
NOTE:
This script creates and alters certain dictionary tables. It also runs the catalog.sql and catproc.sql
scripts that come with the release to which you are upgrading, which create the system catalog
views and all the necessary packages for using PL/SQL.
Step 13: (Post migration Steps)
13.1. Startup database and must execute additional scripts:
# Run all sql scripts for replication option
@$ORACLE_HOME/rdbms/admin/catrep.sql
# Collect I/O per table (actually object) statistics by statistical sampling
@$ORACLE_HOME/rdbms/admin/catio.sql
# This package creates a table into which references to the chained rows for an IOT (Index-Only Table) can be placed using the ANALYZE command.
@$ORACLE_HOME/rdbms/admin/dbmsiotc.sql
# Wrap package which creates IOTs (Index-Only Tables)
@$ORACLE_HOME/rdbms/admin/prvtiotc.plb
# This package allows you to display the sizes of objects in the shared pool, and mark them for
keeping or unkeeping in order to reduce memory fragmentation.
@$ORACLE_HOME/rdbms/admin/dbmspool.sql
# Creates the default table for storing the output of the ANALYZE LIST CHAINED ROWS
command
@$ORACLE_HOME/rdbms/admin/utlchain.sql
# Creates the EXCEPTION table
@$ORACLE_HOME/rdbms/admin/utlexcpt.sql
# Grant public access to all views used by TKPROF with verbose=y option
@$ORACLE_HOME/rdbms/admin/utltkprf.sql
# Create the PLAN_TABLE table that is used by the EXPLAIN PLAN statement. The EXPLAIN PLAN
statement requires the presence of this table in order to store the descriptions of the row sources.
@$ORACLE_HOME/rdbms/admin/utlxplan.sql
# Create performance tuning views
@$ORACLE_HOME/rdbms/admin/catperf.sql
# Create v7-style export/import views against the v8 RDBMS so that EXP/IMP v7 can be used to
read out data in a v8 RDBMS. These views are necessary if you want to export from Oracle8 and
import into an Oracle7 database.
@$ORACLE_HOME/rdbms/admin/catexp7.sql
# Create views of oracle locks
@$ORACLE_HOME/rdbms/admin/catblock.sql
# Print out the lock wait-for graph in a tree structured fashion
@$ORACLE_HOME/rdbms/admin/utllockt.sql
# Creates the default table for storing the output of the analyze validate command on a
partitioned table
@$ORACLE_HOME/rdbms/admin/utlvalid.sql
# PL/SQL Package of utility routines for raw datatypes
@$ORACLE_HOME/rdbms/admin/utlraw.sql
@$ORACLE_HOME/rdbms/admin/prvtrawb.plb
# Contains the PL/SQL interface to the cryptographic toolkit
@$ORACLE_HOME/rdbms/admin/dbmsoctk.sql
@$ORACLE_HOME/rdbms/admin/prvtoctk.plb
# This package provides a built-in random number generator. It is faster than generators written
in PL/SQL because it calls Oracle's internal random number generator.
@$ORACLE_HOME/rdbms/admin/dbmsrand.sql
# DBMS package specification for Oracle8 Large Objects. This package provides routines for
operations on BLOB and CLOB datatypes.
@$ORACLE_HOME/rdbms/admin/dbmslob.sql
# Procedures for instrumenting database applications (DBMS_APPLICATION_INFO package
spec).
@$ORACLE_HOME/rdbms/admin/dbmsapin.sql
# Run obfuscation toolkit script.
@$ORACLE_HOME/rdbms/admin/catobtk.sql
# Create Heterogeneous Services data dictionary objects.
@$ORACLE_HOME/rdbms/admin/caths.sql
# Stored procedures for Oracle Trace server
@$ORACLE_HOME/rdbms/admin/otrcsvr.sql
# Oracle8i Profiler for PL/SQL. Profilers are helpful tools to investigate programs and identify
slow program parts and bottlenecks. Furthermore, you can determine how many times each
procedure, function, or any other code part is executed. To be able to use the DBMS_PROFILER
package, you have to install the following packages once for your database. Do this as user SYS:
@$ORACLE_HOME/rdbms/admin/profload.sql
@$ORACLE_HOME/rdbms/admin/proftab.sql
@$ORACLE_HOME/rdbms/admin/dbmspbp.sql
@$ORACLE_HOME/rdbms/admin/prvtpbp.plb
13.2. Recompiling Invalid PL/SQL Modules
Run the utlrp.sql script to recompile all INVALID objects, such as packages, procedures, types,
etc.
SQL>@$ORACLE_HOME/rdbms/admin/utlrp.sql
13.3. Additional Checks after the Migration
Check for Bad Date Constraints
A bad date constraint involves invalid date manipulation, which is a date manipulation that
implicitly assumes the century in the date, causing problems at the year 2000. The utlconst.sql
script runs through all of the check constraints in the database and marks constraints as bad if
they include any invalid date manipulation. This script selects all the bad constraints at the end.
Oracle7 let you create constraints with a two-digit year date constant. However, version 8 returns
an error if the check constraint's date constant does not include a four-digit year.
To run the utlconst.sql script, complete the following steps:
SQL> SPOOL utlresult.log
SQL> @utlconst.sql
SQL> SPOOL OFF
Server Output ON
Statement processed.
Statement processed.
Checking for bad date constraints
Finished checking -- All constraints OK!
After you run the script, the utlresult.log log file includes all the constraints that have invalid date
constraints. The utlconst.sql script does not correct bad constraints, but instead it disables them.
You should either drop the bad constraints or recreate them after you make the necessary
changes.
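As an illustration of the kind of constraint utlconst.sql flags (the table and constraint names here are hypothetical), a check written with a two-digit year is ambiguous at year 2000, while the same rule rewritten with a four-digit year passes:

```sql
-- Bad: two-digit year; the century is implicit and ambiguous at year 2000
ALTER TABLE orders ADD CONSTRAINT chk_order_date
  CHECK (order_date > TO_DATE('01-JAN-90', 'DD-MON-YY'));

-- Fix: drop the bad constraint and recreate it with an explicit four-digit year
ALTER TABLE orders DROP CONSTRAINT chk_order_date;
ALTER TABLE orders ADD CONSTRAINT chk_order_date
  CHECK (order_date > TO_DATE('01-JAN-1990', 'DD-MON-YYYY'));
```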
13.4. Rebuild Unusable Bitmap Indexes
During migration, some bitmap indexes may become unusable. To find these indexes, issue the
following SQL statement:
SELECT index_name, index_type, table_owner, status
FROM dba_indexes
WHERE index_type = 'BITMAP' AND status = 'UNUSABLE';
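Each index the query returns can then be rebuilt. A minimal sketch (the index name is a placeholder):

```sql
-- Rebuild an unusable bitmap index; add a TABLESPACE clause to relocate it if needed
ALTER INDEX scott.emp_dept_bmx REBUILD;
```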
13.5. Rebuild Unusable Function-Based Indexes
During upgrade, some function-based indexes may become unusable. To find these indexes,
issue the following SQL statement:
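The query itself is cut off at the page break in the original text; a query along the following lines, based on the FUNCIDX_STATUS column of DBA_INDEXES, should locate them:

```sql
-- Function-based indexes marked DISABLED after the upgrade must be rebuilt
SELECT index_name, index_type, table_owner, funcidx_status
FROM dba_indexes
WHERE funcidx_status = 'DISABLED';
```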
Project B
Upgrade Oracle Database from Version 8.1.7 to 9.2.0
A database can be upgraded by one of three methods: Export / Import, the Database Upgrade
Assistant, or manually by using scripts. (We prefer the manual method to upgrade our database.)
16 MB
30 MB
8.1.7
52 MB
80 MB
8.0.6
70 MB
N/A
7.3.4
85 MB
N/A
154
Create the new Oracle Database 9i service at the command prompt using the ORADIM utility.
If the instance later fails to start with the following error, correct the COMPATIBLE setting as
described below:
ORA-00401: the value for parameter compatible is not supported by this release.
Details about COMPATIBLE issue:
If you are upgrading from Release 7.3.4, then remove COMPATIBLE from your parameter file,
or set COMPATIBLE to 8.1.0.
If you are upgrading from Release 8.0.6, then remove COMPATIBLE from your parameter file,
or set COMPATIBLE to 8.1.0.
If you are upgrading from Release 8.1.7: if COMPATIBLE is set to 8.0.x, then either remove
COMPATIBLE from your parameter file or set COMPATIBLE to 8.1.0. If COMPATIBLE is set
to 8.1.x, then leave the setting as is.
If you are upgrading from Release 9.0.1: if one or more automatic segment-space managed
tablespaces exist in the database, then set COMPATIBLE to 9.0.1.3. Otherwise, leave the setting
as is.
6. Connect to the new Oracle 9i instance as a user with the SYSDBA privilege and issue the
following command:
SQL>STARTUP migrate
You don't need to use the PFILE option to specify the location of your initialization parameter
file in our case because we are using the INIT file in the default location (which resides in
Oracle9iHome/database). We have just copied the init file from the 8i home into the new Oracle 9i home.
7. Set the system to spool results to a log file for later verification of success:
SQL> SPOOL upgrade.log
8. Run the upgrade scripts.
SQL>@u0801070.sql
Details about the upgrade scripts (run the script according to your old release):

Old Release   Run Script
7.3.4         u0703040.sql
8.0.6         u0800060.sql
8.1.7         u0801070.sql
9.0.1         u0900010.sql

You only need to run one script. For example, if your old release was 8.1.7, then you only need
to run u0801070.sql
The script you run creates and alters certain dictionary tables. It also runs the catalog.sql and
catproc.sql scripts that come with the new 9.2 release, which create the system catalog views and
all the necessary packages for using PL/SQL.
9. Display the contents of the component registry to determine which components need to be
upgraded:
SQL> SELECT comp_name, version, status FROM dba_registry;
COMP_NAME                VERSION     STATUS
Oracle9i Catalog Views   9.2.0.8.0   VALID
...
Java Packages            8.1.7       LOADED
...
Oracle interMedia        8.1.7.0.0   LOADED
Oracle Spatial           8.1.7.0.0   LOADED
...
10. Run the cmpdbmig.sql script to upgrade components that can be upgraded while connected
with SYSDBA privileges:
SQL>@cmpdbmig.sql
The following components are upgraded by running the cmpdbmig.sql script:
JServer JAVA Virtual Machine
Oracle9i Java Packages
Oracle XDK for Java
Messaging Gateway
Oracle9i Real Application Clusters
Oracle Workspace Manager
Oracle Data Mining
OLAP Catalog
OLAP Analytic Workspace
Oracle Label Security
11. Display the contents of the component registry to determine which components were
upgraded:
SQL> SELECT comp_name, version, status FROM dba_registry;
12. Turn off the spooling of script results to the log file:
SQL> SPOOL OFF
13. Then, check the spool file and verify that the packages and procedures compiled successfully.
Correct any problems you find in this file and rerun the appropriate upgrade scripts if necessary.
14. Shut down and restart the instance to reinitialize the system parameters for normal operation.
SQL> SHUTDOWN IMMEDIATE
15. Upgrade any remaining components that existed in the previous database.
The following components require separate upgrade steps:
Oracle Text
Oracle Ultra Search
Oracle Spatial
Oracle interMedia
Oracle Visual Information Retrieval
16. Run utlrp.sql to recompile any remaining stored PL/SQL and Java code.
SQL> @utlrp.sql
17. Verify that all expected packages and classes are valid:
Upgrading Oracle interMedia
a) Connect / as SYSDBA
b) First upgrade Oracle interMedia Common Files.
SQL>@<ORACLE_HOME>\ord\admin\u0nnnnn0.sql
c) Then upgrade interMedia.
SQL>@<ORACLE_HOME>\ord\im\admin\u0nnnnn0.sql
d) Verify the upgrade:
Connect as the ORDSYS user and run the following command:
SQL>@<ORACLE_HOME>\ord\im\admin\imchk.sql
Upgrading Oracle Visual Information Retrieval
(From Release 8.1.5, 8.1.6 or 8.1.7 to 9i release 9.2.0)
1. Connect as SYSDBA and invoke the virdbma.sql script to determine whether or not you need
to upgrade:
SQL> @<ORACLE_HOME>\ord\vir\admin\virdbma.sql
This script displays one of the following strings:
NOT_INSTALLED - if no prior Visual Information Retrieval release was installed on your
system.
INSTALLED - if Visual Information Retrieval Compatible API is already installed.
u0nnnnn0.sql - the script for upgrade. nnnnn is the release of Visual Information Retrieval that
you have currently installed. For example, u0801070.sql upgrades from Visual Information
Retrieval release 8.1.7.0.0.
3. If an upgrade is required, perform the upgrade:
SQL>@<ORACLE_HOME>\ord\vir\admin\u0nnnnn0.sql
Where u0nnnnn0.sql is the upgrade script displayed by step 1, if an upgrade is necessary.
If the Oracle system has Oracle Text installed, then complete the following steps:
1. Log in to the system as the owner of the Oracle home directory of the new release.
SQL> @u0902000.sql
This script upgrades the CTXSYS schema to release 9.2.
Connect to the database instance as a user with SYSDBA privileges. Check for any invalid
CTXSYS objects and compile them as needed. Turn off the spooling of script results to the log
file:
SQL> SPOOL OFF
Then, check the spool file and verify that the packages and procedures compiled successfully.
11. Shut down the instance:
SQL> SHUTDOWN IMMEDIATE
Exit SQL*Plus.
Upgrade User NCHAR Columns (Tasks to Complete Only After Upgrading a Release
8.1.7 Database)
If you upgraded from a version 8 release and your database contains user tables with NCHAR
columns, you must upgrade the NCHAR columns before they can be used in the Oracle
Database.
You will encounter the following error when attempting to use the NCHAR columns in the
Oracle Database until you perform the steps in this section:
ORA-12714: invalid national character set specified
To upgrade user tables with NCHAR columns, perform the following steps:
1. Connect to the database instance as a user with SYSDBA privileges.
2. If the instance is running, shut it down using SHUTDOWN IMMEDIATE:
SQL> SHUTDOWN IMMEDIATE
3. Start up the instance in RESTRICT mode:
SQL> STARTUP RESTRICT
4. Run utlnchar.sql:
SQL> @utlnchar.sql
Alternatively, to override the default upgrade selection, run n_switch.sql:
SQL> @n_switch.sql
Project C
Upgrade Oracle Database from Version 8.1.7 to 10.2.0
If your source database version is 8.1.7.4 (or higher), then you can directly upgrade your current or
source database to 10gR2; otherwise you must follow the indirect upgrade path, which is shown
in the table below.
Indirect Upgrade Path

Source Database   Upgrade Via   Target
7.3.4             8.1.7.4       10.2.x
8.0.n             8.1.7.4       10.2.x
8.1.n             8.1.7.4       10.2.x
SYSAUX Tablespace Section: Create the tablespace in the Oracle Database 10.2 environment. The
new SYSAUX tablespace requires a minimum size of 500 MB for the database upgrade.
Tips:
If you are using an SPFILE in the current database, you must create an INIT file in the default location.
Step 2 (Only for the Windows platform; UNIX platform users only shut down the database and
listener services)
2.1. Shutdown the 8i instance
SQL>SHUTDOWN IMMEDIATE
2.2. Stop the OracleServiceSID Oracle service of the Oracle 8i database
C:\> NET STOP OracleServiceSID
2.3. Delete the OracleServiceSID at the command line of the 8i home
C:\>ORADIM -DELETE -SID SID
Step 3 (Only for the Windows platform; if you are using a UNIX based platform, just follow step
3.2)
3.1. Create the new oracle database 10g service at command prompt using the following
command.
C:\>ORADIM -NEW -SID SID -INTPWD PASSWORD -STARTMODE A
3.2. Put your init file in database folder at new oracle 10g home from 8i.
Step 4
Connect to the new Oracle 10g instance as a user with the SYSDBA privilege and issue the following
command:
SQL>STARTUP UPGRADE
You don't need to use the PFILE option to specify the location of your initialization parameter
file in our case because we are using the INIT file in the default location (which resides in
Oracle10gHome/database). We have just copied the init file from the 8i home into the new Oracle 10g home.
IMPORTANT:
An error may occur when you attempt to start the new Oracle Database 10g release. If you
receive one, issue the SHUTDOWN ABORT command to shut down the database and correct
the problem.
Step 5
7.3. Run the utlrp.sql script to recompile any remaining stored PL/SQL and Java code.
SQL> @utlrp.sql
7.4. Upgrade User NCHAR Columns (Tasks to Complete Only After Upgrading a Release 8.1.7
Database)
If you upgraded from a version 8 release and your database contains user tables with NCHAR
columns, you must upgrade the NCHAR columns before they can be used in the Oracle
Database.
You will encounter the following error when attempting to use the NCHAR columns in the
Oracle Database until you perform the steps in this section:
ORA-12714: invalid national character set specified
To upgrade user tables with NCHAR columns, perform the following steps:
7.4.1. Connect to the database instance as a user with SYSDBA privileges.
7.4.2. If the instance is running, shut it down using SHUTDOWN IMMEDIATE:
SQL> SHUTDOWN IMMEDIATE
7.4.3. Start up the instance in RESTRICT mode:
SQL> STARTUP RESTRICT
7.4.4. Run utlnchar.sql:
SQL> @utlnchar.sql
Alternatively, to override the default upgrade selection, run n_switch.sql:
SQL> @n_switch.sql
5. Shut down the instance:
SQL> SHUTDOWN IMMEDIATE
6. Exit SQL*Plus.
Project D
Upgrade Oracle Database from Version 9.2.0 to 10.2.0
If your source database version is 9.0.1.4 (or higher) or 9.2.0.4 (or higher), then you can directly
upgrade your current or source database to 10gR2; otherwise you must apply the latest patch set
on the source database. You can download the latest patch set from Metalink.
Here I am describing the manual method to upgrade our Oracle database.
Suppose your current Oracle 9.2.0 software resides in /data01 and all databases reside in the /data02
mount point on a UNIX based platform. If you are using a Windows based platform, suppose your
Oracle software resides in the D drive and the database resides in the E drive.
Step 1 (Pre upgrade Task)
1.1 Install the Oracle 10g database software in a new Oracle home.
1.2 Connect to the 9i database and run the pre-upgrade script (utlu102i.sql), which is stored in
Oracle 10g Home/rdbms/admin.
1.3 Follow the steps suggested by the output of the script above.
Tips:
If you are using SPFILE in current database, you must create INIT file in default location.
Step 2 (Only for the Windows platform; if you are using a UNIX based platform, only shut down
the database service and listener services)
2.1 Shutdown the 9i instance
SQL>SHUTDOWN IMMEDIATE
2.2 Stop the OracleServiceSID Oracle service of the Oracle 9i database
C:\> NET STOP OracleServiceSID
2.3 Delete the OracleServiceSID at the command line of the 9i home
C:\>ORADIM -DELETE -SID SID
Step 3 (Only for the Windows platform; if you are using a UNIX based platform, just follow step
3.2)
3.1 Create the new oracle database 10g service at command prompt using the following
command.
C:\>ORADIM -NEW -SID SID -INTPWD PASSWORD -STARTMODE A
3.2 Put your init file in 10g default location from 9i.
Step 4
Connect to the new Oracle 10g instance as a user with the SYSDBA privilege and issue the following command:
SQL>STARTUP UPGRADE
You don't need to use the PFILE option to specify the location of your initialization parameter
file in our case because we are using the INIT file in the default location (which resides in
Oracle10gHome/database). We have just copied the init file from the 9i home into the new Oracle 10g home.
An error may occur when you attempt to start the new Oracle Database 10g release. If you
receive one, issue the SHUTDOWN ABORT command to shut down the database and correct
the problem.
Step 5 (Create SYSAUX tablespace)
Create the SYSAUX tablespace for the database.
CREATE TABLESPACE sysaux DATAFILE 'sysaux01.dbf'
SIZE 500M REUSE
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;
NOTE:
If you are upgrading from 10.1, then skip step 5; otherwise create the SYSAUX tablespace. In Oracle
10g, the SYSAUX tablespace is used to consolidate data from a number of tablespaces that were
separated in previous releases.
Step 6
6.1 Set the system to spool results to a log file for later verification of success:
SQL> SPOOL upgrade.log
6.2 Run the upgrade scripts.
SQL>@catupgrd.sql
The catupgrd.sql script determines which scripts need to be run and then runs each necessary
script.
6.3 Run the upgrade results display report.
SQL>@utlu102s.sql
The Post-upgrade Status Tool displays the status of the database components in the upgraded
database and the time required to complete each component upgrade.
6.4 Turn off the spooling of script results to the log file:
SQL>spool off;
A database can be moved across platforms by one of the following methods:
Export / Import
Transportable Tablespaces (10g or later)
RMAN Convert Database (10g or later)
Project
Migration of Database across OS Platform through Export and Import
We can use the Export and Import utilities to move an existing Oracle database from one platform
to another (i.e. UNIX to NT or vice versa).
A full database export and import can be used in all Oracle versions to transfer a database across
platforms.
Example:
-Source Database is 32-bit 9.2.0 database on 32-Bit Windows platform
- Target Database is 64-bit 10.2.0 database on 64-bit any UNIX based platform.
Step 1
Query the source database views dba_tablespaces, dba_data_files and dba_temp_files. You will
need this information later in the process.
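A sketch of the queries for this step (exactly which columns you record is up to you):

```sql
-- Record tablespace names and attributes
SELECT tablespace_name, block_size, status FROM dba_tablespaces;

-- Record datafile names and sizes per tablespace
SELECT tablespace_name, file_name, bytes/1024/1024 AS mb FROM dba_data_files;

-- Record tempfile names and sizes
SELECT tablespace_name, file_name, bytes/1024/1024 AS mb FROM dba_temp_files;
```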
Step 2
Perform a full export from the source database:
exp system/manager FULL=y FILE=exp_full.dmp LOG=exp_full.log
Step 3
Transfer the export dump file in binary mode to the target UNIX server (HP-UX 11.22 in this example).
Step 4
Create a new database on the target server.
Step 5
Before importing the dump file, you must first create your tablespace structure, using the
information obtained in step 1.
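For example (the names, paths, and sizes here are placeholders taken from the kind of output step 1 produces):

```sql
-- Recreate each application tablespace on the target before running the import
CREATE TABLESPACE users
  DATAFILE '/u02/oradata/TARGET/users01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL;
```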
Step 6
Perform a full import with the IGNORE parameter enabled:
imp system/manager FULL=y FILE=exp_full.dmp LOG=imp_full.log IGNORE=y
Using IGNORE=y instructs Oracle to ignore any creation errors during the import and permit the
import to complete.
Project
Use RMAN CONVERT DATABASE on Source Host for Cross platform Migration
Here I am explaining the CONVERT DATABASE procedure on the source host.
Restriction:
The principal restriction on cross-platform transportable database is that the source and
destination platform must share the same endian format.
Redo log files and control files from the source database are not transported. New control
files and redo log files are created for the new database during the transport process, and
an OPEN RESETLOGS is performed once the new database is created. Similarly,
tempfiles belonging to locally managed temporary tablespaces are not transported. The
temporary tablespace will be re-created on the target platform when the transport script is
run.
BFILEs, External tables and directories, Password files are not transported.
Step 1
Check that the source and destination platform belong to same ENDIAN format. We will try to
transport a database from Windows (32-bit) to Linux (32-bit).
SQL> select PLATFORM_NAME, ENDIAN_FORMAT from V$TRANSPORTABLE_PLATFORM;
Note: If the two platforms are not on the same ENDIAN format, you will need to use
TRANSPORTABLE TABLESPACE instead of CONVERT DATABASE
Step 2
Check that the database can be transported to a destination platform and that nothing in the
current state of the database (such as incorrect compatibility settings, or in-doubt or active
transactions) prevents transport.
Make sure your database is open in READ ONLY mode and call DBMS_TDB.CHECK_DB.
For example, to transport to Linux 32-bit we call the procedure with the following
argument:
SQL> set serveroutput on
SQL> declare
db_ready boolean;
begin
db_ready := dbms_tdb.check_db('Linux IA (32-bit)');
end;
/
PL/SQL procedure successfully completed.
If there are no external objects, then this procedure completes with no output. If there are external
objects, however, the output will list them.
Step 4
If the above steps have completed successfully, the database is ready for transport. We will
use the RMAN CONVERT DATABASE command and specify a destination platform.
The steps below create a transport script, which contains the SQL statements used to create the new
database on the destination platform:
C:\>rman target / nocatalog
RMAN> CONVERT DATABASE NEW DATABASE 'TESTL'
TRANSPORT SCRIPT 'D:\Transport.sql'
TO PLATFORM 'Linux IA (32-bit)';
Note: This will convert the datafiles and put them in the Windows default location
(example: ORACLE_HOME/database); we can use the FORMAT parameter to place the converted
datafiles somewhere other than the default.
Use the command below to create the converted datafiles in a location other than the default:
CONVERT DATABASE NEW DATABASE 'TESTL'
TRANSPORT SCRIPT 'D:\Transport.sql'
TO PLATFORM 'Linux IA (32-bit)'
FORMAT='D:\%U';
Step 5
After completion of above task, now copy the Transport.sql, converted datafiles, and the pfile
from Windows OS to Linux.
Step 6
Go to Destination Machine and edit the PFILE to change any settings for the destination
database.
Step 7
Go to Destination Machine and edit the TRANSPORT.SQL script to reflect the new path for
datafiles in the CREATE CONTROLFILE section of the script.
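The edit amounts to replacing the Windows paths with their Linux equivalents. A sketch of what the relevant section might look like after editing (all paths, sizes, and the character set here are hypothetical):

```sql
CREATE CONTROLFILE REUSE SET DATABASE "TESTL" RESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXDATAFILES 100
LOGFILE
  GROUP 1 '/u02/oradata/TESTL/redo01.log' SIZE 50M,
  GROUP 2 '/u02/oradata/TESTL/redo02.log' SIZE 50M
DATAFILE
  '/u02/oradata/TESTL/system01.dbf',   -- was a D:\ path on Windows
  '/u02/oradata/TESTL/sysaux01.dbf',
  '/u02/oradata/TESTL/users01.dbf'
CHARACTER SET WE8MSWIN1252;
```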
Step 8
Go to Destination Machine and run Transport.sql Scripts:
$ export ORACLE_HOME=/home/oracle/product/ora10g
$ export ORACLE_SID=TESTL
$ export PATH=$ORACLE_HOME/bin:$PATH
$ sqlplus "/ as sysdba"
Connected to an idle instance.
SQL> @TRANSPORT.SQL
ORACLE instance started.
Total System Global Area  201326592 bytes
Fixed Size                  1218484 bytes
Variable Size              67110988 bytes
Database Buffers          125829120 bytes
Redo Buffers                7168000 bytes
...
...
...
Step 9
Check error during recompilation:
SQL> select COUNT(*) "ERRORS DURING RECOMPILATION" from utl_recomp_errors;
ERRORS DURING RECOMPILATION
---------------------------
                          0

SQL>
Step 10
Run component validation procedure
SQL> SET serveroutput on
SQL> EXECUTE dbms_registry_sys.validate_components;
PL/SQL procedure successfully completed.
SQL> SET serveroutput off
Step 11
Change database identifier
1. Put your database in mount stage.
2. To verify the DBID and database name:
SQL> SELECT dbid, name FROM v$database;
3. Set PATH and execute the nid command in a terminal:
$ nid target=/
-----Change database ID of database ORCLLNX? (Y/[N]) => Y
Proceeding with operation
.
.
$ mkdir $ORACLE_HOME/admin/dpdump
SQL> select directory_name, directory_path from dba_directories;
SQL> CREATE OR REPLACE DIRECTORY <directory_name> AS
'/d01/oracle/apexdb/admin/testdb/dpdump';
A directory object cannot be changed by updating the dba_directories view; instead, recreate the
directory that still points to '/u02/oracle/apexdb/admin/apex/dpdump' with CREATE OR
REPLACE DIRECTORY so that it points to the new path.
Project
Cross-Platform Migration on Destination Host Using RMAN Convert Database
Here I am explaining the CONVERT DATABASE procedure on the destination host.
Restriction:
The principal restriction on cross-platform transportable database is that the source and
destination platform must share the same endian format.
Redo log files and control files from the source database are not transported. New control
files and redo log files are created for the new database during the transport process, and
an OPEN RESETLOGS is performed once the new database is created. Similarly,
tempfiles belonging to locally managed temporary tablespaces are not transported. The
temporary tablespace will be re-created on the target platform when the transport script is
run.
BFILEs, External tables and directories, Password files are not transported.
Both the source and the target database versions must be equal to or greater than version 10.2.0.
Step 1
Check that the source and destination platform belong to same ENDIAN format. We will try to
transport a database from Windows (32-bit) to Linux (32-bit).
SQL> select PLATFORM_NAME, ENDIAN_FORMAT from V$TRANSPORTABLE_PLATFORM;
Note: If the two platforms are not on the same ENDIAN format, you will need to use
TRANSPORTABLE TABLESPACE instead of CONVERT DATABASE
Step 2
Check that the database can be transported to a destination platform and that nothing in the
current state of the database (such as incorrect compatibility settings, or in-doubt or active
transactions) prevents transport.
Make sure your database is open in READ ONLY mode and call DBMS_TDB.CHECK_DB.
For example, to transport to Linux 32-bit we call the procedure with the following
argument:
SQL> set serveroutput on
SQL> declare
db_ready boolean;
begin
db_ready := dbms_tdb.check_db('Linux IA (32-bit)');
end;
/
PL/SQL procedure successfully completed.
If there are no external objects, then this procedure completes with no output. If there are external
objects, however, the output will be somewhat similar to above.
Step 4
If the above steps have completed successfully, the database is ready for transport. We will
use the RMAN CONVERT DATABASE command and specify a destination platform.
C:\>rman target / nocatalog
RMAN> CONVERT DATABASE ON TARGET PLATFORM
CONVERT SCRIPT 'D:\convertscript.rman'
TRANSPORT SCRIPT 'D:\transportscript.sql'
new database 'TESTL'
FORMAT 'D:\%U';
Note: this command does not produce converted datafile copies.
Step 5
After completion of the above task, now copy the transportscript.sql, the datafiles,
convertscript.rman and the pfile from the Windows OS to Linux.
Step 6
Go to Destination Machine and edit the PFILE to change any settings for the destination
database.
Step 7
Go to Destination Machine and Create a dummy Controlfile.
$ export ORACLE_HOME=/home/oracle/product/ora10g
$ export ORACLE_SID=TESTL
$ export PATH=$ORACLE_HOME/bin:$PATH
$ sqlplus "/ as sysdba"
Connected to an idle instance.
SQL> startup nomount;
ORACLE instance started.
SQL> <run the CREATE CONTROLFILE statement here>
Control file created.
Step 8
Now edit the file convertscript.rman and make the necessary changes to the filesystem paths
and file names. Once the changes are done, run the script from the RMAN prompt.
Step 9
Now shut down the database and delete the dummy controlfile.
Step 10
Now edit the TRANSPORT sql script to reflect the new path for datafiles and redo log files in the
CREATE CONTROLFILE section of the script.
Step 11 Once the PFILE and TRANSPORT sql scripts are suitably modified, invoke SQL*Plus
on the destination host after setting the Oracle environment parameters, and then run
TRANSPORT.sql as:
$ export ORACLE_HOME=/u01/oracle/product/ora10g
$ export ORACLE_SID=win10g
$ export PATH=$ORACLE_HOME/bin:$PATH
$ sqlplus "/ as sysdba"
When the transport script finishes, the creation of the new database is complete.
Step 12
Check for errors during recompilation:
Step 13
Run component validation procedure
SQL> SET serveroutput on
SQL> EXECUTE dbms_registry_sys.validate_components;
PL/SQL procedure successfully completed.
SQL> SET serveroutput off
Step 14
Change database identifier
1. Put your database in mount stage.
2. To verify the DBID and database name:
SQL> SELECT dbid, name FROM v$database;
3. Set PATH and execute the nid command in a terminal:
$ nid target=/
-----Change database ID of database ORCLLNX? (Y/[N]) => Y
Proceeding with operation
.
.
Project
Migrating Oracle Databases across Platforms by Using Transportable Tablespaces
Prior to Oracle 10g, one of the only supported ways to move an Oracle database across platforms
was to export the data from the existing database and import it into a new database on the new
server.
The Export / Import utility works pretty well if your database is small, but can require an unreasonable
amount of down time if your database is large. In Oracle 10g, the transportable tablespace feature
has been enhanced in a way that makes it possible to move large databases (or portions of them)
across platforms much more quickly and simply than the export/import method.
Note:
In Oracle 8i and Oracle 9i, tablespaces could only be transported into databases that ran on the
same hardware platform and operating system. So if your database ran on Windows and you wanted to
migrate to Linux, you could not use transportable tablespaces to copy data efficiently between
the databases.
Beginning in Oracle 10g release 1, cross-platform support for transportable tablespaces is
available for several of the most commonly used platforms. The process is similar to transporting
tablespaces in previous Oracle releases, except there are a few possible extra steps, and there are
more limitations and restrictions. Oracle 10g Release 2 goes one step further and offers the
ability to transport an entire database across platforms in one step. But the limitations here are
even stricter.
Important:
Data pump cannot transport XMLTypes while original export and import can.
Data pump offers many benefits over original export and import in the areas of
performance and job management, but these benefits have little impact when transporting
tablespaces because metadata export and import is usually very fast to begin with.
Original export and import cannot transport BINARY_FLOAT and BINARY_DOUBLE
data types, while data pump can.
When original export and import transport a tablespace that contains materialized views,
the materialized views will be converted into regular tables on the target database. Data
pump, on the other hand, keeps them as materialized views.
Limitation:
The source and target database must use the same character set and national character set.
We cannot transport a tablespace to a target database in which a tablespace with the same
name already exists.
Beginning with Oracle Database 10g Release 2, you can transport tablespaces that contain
XMLTypes, but you must use the IMP and EXP utilities, not Data Pump. When using
EXP, ensure that the CONSTRAINTS and TRIGGERS parameters are set to Y (the
default).
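One way around the tablespace name clash noted above, available from Oracle 10g onward, is to rename the existing tablespace on the target before the transport (the tablespace names here are examples):

```sql
-- Free up the name USERS so the transported tablespace can be plugged in under it
ALTER TABLESPACE users RENAME TO users_old;
```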
How to retrieve the list of tablespaces that contain the XMLType data type:
select distinct p.tablespace_name
from dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
where t.table_name=x.table_name and t.tablespace_name=p.tablespace_name and
x.owner=u.username;
Transporting tablespaces with XMLTypes has the following limitations:
ENDIAN_FORMAT
--------------
Big

SQL>
On the target database:
SQL> SELECT A.platform_id, A.platform_name, B.endian_format
     FROM v$database A, v$transportable_platform B
     WHERE B.platform_id (+) = A.platform_id;

PLATFORM_ID PLATFORM_NAME       ENDIAN_FORMAT
----------- ------------------- -------------
         10 Linux IA (32-bit)   Little

SQL>
Note:
The endian format column can show one of three values: Big, Little, or blank.
If the source and target platforms have the same endian format, then file conversion will not be
necessary. If the endian formats differ, however, then file conversion will be required. (Endian
format describes the order in which the processor architecture natively places bytes in memory, a
CPU register, or a file.)
A blank indicates that the platform is not supported for cross-platform tablespace transport.
Step 2: Identify Tablespaces to be transported and verify Self-containment
Now we figure out which tablespaces we want to transport. There is no need to transport the
SYSTEM, undo, or temporary tablespaces.
We use the following query on the source to retrieve the tablespaces that should be transported:
SQL> SELECT tablespace_name, segment_type, COUNT(*),
SUM (bytes) / 1024 / 1024 mb
FROM dba_segments
WHERE owner NOT IN ('SYS','SYSTEM')
GROUP BY tablespace_name, segment_type
ORDER BY 1, 2 DESC;
Self-contained means that objects in the tablespace set cannot reference or depend on objects
that reside outside the set. For example, if a table in the EMP2 tablespace had an index
in the IND1 tablespace, then transporting the EMP1 and IND1 tablespaces (without the EMP2
tablespace) would present a problem. When the EMP1 and IND1 tablespaces are transported into
the target database, there would be an index on a non-existent table. Oracle will not allow this
and will point out the problem while exporting the metadata.
Run the following check on the source database to verify that there are no self-containment problems:
SQL> BEGIN
  SYS.dbms_tts.transport_set_check
    ('<TABLESPACE_NAME>, <TABLESPACE_NAME>',
     incl_constraints => TRUE, full_check => FALSE);
END;
/
PL/SQL procedure successfully completed.
SQL> SELECT * FROM SYS.transport_set_violations;
no rows selected
SQL>
If there had been an index in tablespace IND1 that belonged to a table outside of the tablespace
set, we would have seen a violation like:
SQL> SELECT * FROM SYS.transport_set_violations;
VIOLATIONS
Index MY_SCHEMA.MY_INDEX in tablespace IND1 points to table
MY_SCHEMA.MY_TABLE in tablespace EMP2
SQL>
If there had been a table in the EMP1 tablespace with a foreign key referencing a table outside of
the tablespace set, we would have seen a violation like:
SQL> SELECT * FROM SYS.transport_set_violations;
VIOLATIONS
Constraint MY_CHILD_TABLE_FK1 between table MY_SCHEMA.MY_PARENT_TABLE in
tablespace EMP2 and table MY_SCHEMA.MY_CHILD_TABLE in tablespace EMP1
SQL>
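To make the self-containment rule concrete, here is a small Python model (hypothetical object names, not Oracle code) that flags objects in a transport set whose dependencies fall outside the set, much as DBMS_TTS.TRANSPORT_SET_CHECK does:

```python
# Each object maps to its tablespace; deps maps an object to the
# objects it references (an index to its table, an FK child to its parent).
tablespace_of = {
    'MY_TABLE': 'EMP2',
    'MY_INDEX': 'IND1',
    'OTHER_TABLE': 'EMP1',
}
deps = {'MY_INDEX': ['MY_TABLE']}

def violations(transport_set):
    """Return objects in the set that reference objects outside it."""
    out = []
    for obj, refs in deps.items():
        if tablespace_of[obj] not in transport_set:
            continue  # object is not being transported
        for ref in refs:
            if tablespace_of[ref] not in transport_set:
                out.append(f"{obj} in {tablespace_of[obj]} points to "
                           f"{ref} in {tablespace_of[ref]}")
    return out

# Transporting EMP1 and IND1 without EMP2 strands the index:
print(violations({'EMP1', 'IND1'}))
# Including EMP2 makes the set self-contained:
print(violations({'EMP1', 'IND1', 'EMP2'}))  # []
```

The real check also considers constraints, partitions, and LOB segments, but the principle is the same: every dependency must resolve inside the set.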
Step 3: Check for Problematic Data Types
As we pointed out earlier, Data Pump is not able to transport XMLTypes, while original export
and import are not able to transport BINARY_FLOAT or BINARY_DOUBLE data.
Furthermore, there are several opaque data types including RAW, LONG RAW, BFILE,
ANYTYPE, and user-defined data types. Because of the unstructured nature of these data types,
Oracle does not know if data in these columns will be platform-independent or require byte
swapping for endian format change. Oracle simply transports these data types as-is and leaves
conversion to the application.
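The ambiguity is easy to see: the same four opaque bytes decode to different values depending on which endian convention the application assumes. A minimal Python illustration (not Oracle code):

```python
# Four opaque bytes, as they might sit in a RAW column.
raw = bytes([0x00, 0x00, 0x01, 0x00])

# Oracle cannot know which interpretation the application intended,
# so it transports the bytes as-is and leaves any swap to the application.
as_big = int.from_bytes(raw, 'big')        # 256
as_little = int.from_bytes(raw, 'little')  # 65536

print(as_big, as_little)
```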
We ran the following queries on the source database in order to survey the data types used in our
tablespace set:
SELECT B.data_type, COUNT(*)
FROM dba_tables A, dba_tab_columns B
WHERE A.owner NOT IN ('SYS', 'SYSTEM')
AND B.owner = A.owner
AND B.table_name = A.table_name
GROUP BY B.data_type
ORDER BY B.data_type;
SELECT B.owner, B.table_name
Next we move on to checking for duplicate tablespace or object names. It will not be possible to
transport our tablespace set into the target database if a tablespace already exists there with the
same name as one of the tablespaces in our set. We can quickly check the target database for a
duplicate tablespace name:
SQL> SELECT tablespace_name
FROM dba_tablespaces
WHERE tablespace_name IN ('USERS', 'EXAMPLE');
If there had been a duplication of tablespace names, we could simply rename a tablespace (on the
source or target database) with a statement such as:
SQL> ALTER TABLESPACE old_tablespace_name RENAME TO
new_tablespace_name;
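Once all checks pass, the metadata export itself is a Data Pump call. The Python sketch below only assembles an example expdp command line; the credentials, directory object, and dump file name are placeholders, not values from this book:

```python
# Assemble an expdp command for a transportable-tablespace metadata
# export. TRANSPORT_TABLESPACES and TRANSPORT_FULL_CHECK are real
# Data Pump parameters; everything else here is a placeholder.
tablespaces = ['EMP1', 'IND1']
cmd = [
    'expdp', 'system/password',          # placeholder credentials
    'directory=DATA_PUMP_DIR',           # assumed directory object
    'dumpfile=tts_meta.dmp',             # placeholder dump file name
    'transport_full_check=y',            # repeat the containment check
    'transport_tablespaces=' + ','.join(tablespaces),
]
print(' '.join(cmd))
# On a host with the Oracle client installed, this could be run with:
# import subprocess; subprocess.run(cmd, check=True)
```

The tablespaces must be made READ ONLY before the export, and the resulting dump file travels to the target together with the datafiles themselves.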
$ rman
Recovery Manager: Release 10.2.0.2.0 - Production on Wed Dec 20 10:11:38 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.