Tricky PLSQL Notes

This document provides an overview of key topics related to Oracle architecture and PL/SQL, including: 1. Oracle's two main memory structures - the SGA (System Global Area), which holds components such as the buffer cache and redo log buffer, and the PGA (Program Global Area), which supports query execution. 2. Background processes such as DBWR, LGWR, and SMON, which manage I/O and recover from failures without user interaction. 3. On-disk storage in tablespaces, which contain datafiles made up of segments and extents. 4. Normalization, which avoids data anomalies such as updates to multiple redundant copies by decomposing tables and defining primary keys to uniquely identify rows.


INDEX

1. ORACLE ARCHITECTURE
2. NORMALIZATION
3. ER DIAGRAM
4. OOPS CONCEPTS IN ORACLE
5. CURSOR
6. EXCEPTION HANDLING
7. PROCEDURE
8. FUNCTION
9. PACKAGE
10. TRIGGER
11. COLLECTION
12. PARTITIONING TABLE
13. PRAGMA
14. INDEX
15. HIERARCHICAL QUERY
16. GLOBAL TEMPORARY TABLE
17. EXTERNAL TABLE
18. GRANT & REVOKE
19. BULK COLLECT & FORALL
20. DYNAMIC SQL
21. FLASHBACK QUERY
22. SQL LOADER
23. NOCOPY
24. MATERIALIZED VIEW
25. ANALYTICAL FUNCTIONS
26. PERFORMANCE TUNING
27. DBMS_PROFILER
ORACLE ARCHITECTURE
https://docs.oracle.com/cd/E18283_01/server.112/e16508/process.htm

This section covers the following topics related to Oracle architecture.
1. Oracle Memory Structure
2. Oracle Background Process
3. Oracle Disk Utilization Structure

Oracle Memory Structure:-


There are two basic memory structures in an Oracle instance:
 SGA (System Global Area)
 PGA (Program Global Area)
SGA (System Global Area):-
 The SGA is one of the most important components of Oracle.
 When DBAs talk about memory, they are usually talking about the SGA.
 The SGA holds several memory components that process the data for queries
issued by users.
 The SGA consists of 3 different items, as listed below:
o The Buffer Cache
o The Shared Pool
o The Redo Log Buffer
 The Buffer Cache consists of buffers that store the data blocks of recently
executed SQL queries, in order to improve the performance of subsequent SELECTs.
 The Shared Pool has two required & one optional component.
 The required components of the Shared Pool are:
o SQL Library Cache
o Data Dictionary Cache
 The optional component of the Shared Pool is the session information that is
required for user processes connecting to the Oracle instance.
 The final area of the SGA is the Redo Log Buffer, which stores online redo log entries in
memory until they can be written to disk.
PGA (Program Global Area)

 The PGA is an area in memory that helps the user process execute, holding items such as:
o bind variable information,
o the sort area,
o and other areas of cursor handling.
 From our prior discussion of the Shared Pool, recall that the database already
stores the parse tree for a recently executed query in a shared area called the Library
Cache. So why does each user need their own area? The reason is to hold the
real values of bind variables for the execution of SQL statements.
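These memory areas can be inspected on a live instance through the dynamic performance views. A minimal sketch, assuming you have SELECT privilege on the V$ views:

SELECT * FROM v$sga;  -- fixed size, variable size, database buffers, redo buffers

SELECT name, value, unit
FROM v$pgastat
WHERE name IN ('total PGA allocated','total PGA inuse');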

Oracle Background Process:-


 While a user process is accessing information, the Oracle instance is doing work
behind the scenes, using background processes.
 No user process interacts with disk I/O directly; the Oracle instance delegates
that work to the background processes.
 The following background processes exist in an Oracle instance:
 DBWR
 LGWR
 SMON
 PMON
 RECO
 ARCH
 CKPT

o DBWR:-
 Its primary job is to write dirty buffers from the database buffer cache to disk.
 DBWR writes to disk when:
 a server process cannot find a clean buffer,
 a timeout occurs,
 a checkpoint occurs.
o LGWR:-
 Its primary job is to flush the redo log buffer to disk.
 LGWR writes to disk when:
 a transaction is committed,
 a timeout occurs,
 the redo log buffer is 1/3 full.
o SMON:-
 SMON primarily cleans up server-side failures.
 It wakes up regularly to check whether it is needed.
 It recovers transactions marked as DEAD during instance
recovery.
 All uncommitted work is rolled back by SMON.
o PMON:-
 PMON primarily cleans up client-side failures.
 It wakes up regularly and checks whether it is required.
 It detects both aborted server and aborted client processes.
o RECO:-
 It handles the recovery of distributed transactions against the
database.
o ARCH:-
 It archives the online redo logs.
o CKPT:-
 It handles writing the checkpoint/log sequence number to the datafile
headers and the control file.
 It offloads this checkpoint bookkeeping from LGWR.
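The background processes actually running on an instance can be listed from the data dictionary. A small sketch, again assuming access to the V$ views:

SELECT name, description
FROM v$bgprocess
WHERE paddr <> '00'  -- only processes that are currently running
ORDER BY name;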

Oracle Disk Utilization Structure:-


 It includes components such as:
o Tablespace
o Segment
o Extent
 An Oracle database consists of one or more logical storage units called
tablespaces, which collectively store all of the database's data.
 Each tablespace in an Oracle database consists of one or more files called
datafiles, which are physical structures that conform to the operating system on
which Oracle is running.
 A segment is a set of extents.
 Extents are contiguous sets of data blocks; they hold the data stored within a tablespace.
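The tablespace/datafile/segment/extent hierarchy can be inspected through the dictionary views. A sketch, assuming DBA_* view access and an existing SCOTT.EMP table (adjust the owner and segment name as needed):

SELECT tablespace_name, file_name FROM dba_data_files;

SELECT segment_name, extent_id, blocks, bytes
FROM dba_extents
WHERE owner = 'SCOTT' AND segment_name = 'EMP';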

NORMALIZATION

Normalization is used to avoid three types of anomalies from which a database may suffer:
 INSERT
 UPDATE
 DELETE
Normalization is the process of organizing the data in a database in such a way that it
reduces redundancy and the above three types of anomalies.

For e.g.
Suppose we have following table,
Student_Courses(Sid PK,Sname,Phone,Course_Taken)
Where,
 SID is Student Id which is Primary Key
 Sname is Student Name
 Phone is Student Phone Number
 Course_Taken is itself a table, which contains:
o Course_Id
o Course_Description
o Credit_Hours
o Grade

Student_Courses

SID SNAME PHONE COURSE_TAKEN


100 JOHN 487 2454 St-100-courses-taken
200 SMITH 671 8120 St-200-courses-taken
300 RUSSEL 871 2356 St-300-courses-taken

Definitions of the three types of anomalies for the above table are as below.


INSERT Anomalies:-
We can't add a new course to the table unless it is selected by at least one student.
UPDATE Anomalies:-
We have data redundancy in the database, so while updating we must update all
copies of the respective data; otherwise the data becomes inconsistent.
DELETE Anomalies:-
If we want to delete the data of a student who is the only one holding a particular
course, then deleting that student's data also loses the respective course data.
According to the 1NF rule the above table is not in 1NF, so we decompose it as below.

Sid   Sname     Phone      Course-id   Course-description      Credit-hours   Grade
100   John      487 2454   IS380       Database Concepts       3              A
100   John      487 2454   IS416       Unix Operating System   3              B
200   Smith     671 8120   IS380       Database Concepts       3              B
200   Smith     671 8120   IS416       Unix Operating System   3              B
200   Smith     671 8120   IS420       Data Network            3              C
300   Russell   871 2356   IS417       System Analysis         3              A

Examination of the above Student-courses relation reveals that Sid does not uniquely
identify a row (tuple) in the relation and hence cannot be the primary key. For the same
reason Course-id cannot be the primary key. However, the combination of Sid and
Course-id uniquely identifies a row in Student-courses; therefore (Sid, Course-id) is
the primary key of the above relation.
The primary key determines every attribute. For example, if you know both Sid and
Course-id for any student you will be able to retrieve Sname, Phone, Course-
description, Credit-hours and Grade, because these attributes are dependent on the
primary key. Figure 1 below is the graphical representation of the functional
dependency between the primary key and the attributes of the above relation.

[Figure 1: functional dependencies between the primary key (Sid, Course-id) and the dependent attributes]
Note that the attribute to the right of the arrow is functionally dependent on the
attribute to the left of the arrow. Thus the combination (Sid, Course-id) is the
determinant (it determines the other attributes) and the attributes Sname, Phone,
Course-description, Credit-hours and Grade are dependent attributes.
Formally speaking, a determinant is an attribute or a group of attributes that determines
the value of other attributes. In addition to (Sid, Course-id) there are two other
determinants in the above Student-courses relation: the Sid and Course-id
attributes. Note that Sid alone determines both Sname and Phone, and attribute
Course-id alone determines both Credit-hours and Course-description.


Attribute Grade is fully functionally dependent on the primary key (Sid, Course-id)
because both parts of the primary key are needed to determine Grade. On the
other hand, the Sname and Phone attributes are not fully functionally dependent on
the primary key, because only a part of the primary key, namely Sid, is needed to
determine both Sname and Phone. Also, attributes Credit-hours and Course-
description are not fully functionally dependent on the primary key because only
Course-id is needed to determine their values.
The new relation Student-courses still suffers from all three anomalies for the
following reasons:
1. The relation contains redundant data (note that Database Concepts as the course
description for IS380 appears in more than one place).
2. The relation contains information about two entities, Student and Course.
Following is a detailed description of the anomalies that the relation Student-courses
suffers from.
1. Insertion anomaly: We cannot add a new course such as IS247 with course
description "programming techniques" to the database unless we add a student
who takes the course.
2. Update anomaly: If we change the course description for IS380 from Database
Concepts to New_Database_Concepts we have to make changes in more than
one place, or else the database will be inconsistent. In other words, in some places
the course description will be New_Database_Concepts and anywhere we
forgot to make the change the description will still be Database_Concepts.
3. Deletion anomaly: If student Russell is deleted from the database we also lose
the information that we had on course IS417 with description System_Analysis.
The above discussion indicates that having a single table Student-courses for our
database causes problems (anomalies). Therefore we break the table into smaller tables
to get higher normal form relations. Before doing that, let us define the second normal
form.

Second normal form: A first normal form relation is in second normal form if all
its non-primary attributes are fully functionally dependent on the primary key.

Note that primary attributes are those attributes which are part of the primary key,
and non-primary attributes do not participate in the primary key. In the Student-courses
relation both Sid and Course-id are primary attributes because they are components of
the primary key. However, attributes Sname, Phone, Course-description, Credit-hours
and Grade are all non-primary attributes because none of them is a component of the
primary key.
To convert Student-courses to second normal form relations we have to make all non-
primary attributes fully functionally dependent on the primary key. To do that we
need to project (that is, break down into two or more relations) the Student-courses
table into two or more tables. However, projections may cause problems. To avoid such
problems it is important to keep attributes which are dependent on each other in the
same table when a relation is projected to smaller relations. Following this principle,
examination of Figure 1 indicates that we should divide the Student-courses relation
into the following three relations:
PROJECT Student-courses ON (Sid, Sname, Phone) creates a table; call it Student.
The relation Student will be Student (Sid:pk, Sname, Phone).
PROJECT Student-courses ON (Sid, Course-id, Grade) creates a table; call it
Student-grade. The relation Student-grade will be
Student-grade (Sid:pk1:fk:Student, Course-id:pk2:fk:Courses, Grade).
PROJECT Student-courses ON (Course-id, Course-description, Credit-hours) creates a
table; call it Courses. Following are these three relations and their contents:

Student (Sid:pk, Sname, Phone)

Sid Sname Phone


100 John 487 2454
200 Smith 671 8120
300 Russell 871 2356

Courses (Course-id:pk, Course-description, Credit-hours)

Course-id Course-description Credit-hours


IS380 Database Concepts 3
IS416 Unix Operating System 3
IS420 Data Network 3
IS417 System Analysis 3

Student-grade (Sid:pk1:fk:Student, Course-id:pk2:fk:Courses, Grade)

Sid Course-id Grade
100 IS380 A
100 IS416 B
200 IS380 B
200 IS416 B
200 IS420 C
300 IS417 A

All three relations are in second normal form. Examination of these relations
shows that we have eliminated the redundancy in the database. Now the relation Student
contains information related only to the entity student, the relation Courses contains
information related only to the entity course, and the relation Student-grade contains
information related to the relationship between these two entities.
Further, these three relations are free from all anomalies. Let us clarify this in more detail.
Insertion anomaly: Now a new course with course-id IS247 and its course description
can be inserted into the table Courses. Equally, we can add any new students to the
database by adding their id, name and phone to the Student table. Therefore our database,
which is made up of these three tables, does not suffer from the insertion anomaly.
Update anomaly: Since redundancy of the data was eliminated, no update anomaly can
occur. To change the course description for IS380 only one change is needed, in table
Courses.
Deletion anomaly: The deletion of student Russell from the database is achieved by
deleting Russell's records from both the Student and Student-grade relations, and this
does not have any side effect because course IS417 remains untouched in the table Courses.
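As a sketch, the decomposed schema could be created in Oracle DDL like this (table and column names follow the relations above; the datatypes are illustrative assumptions):

CREATE TABLE student
(
sid NUMBER PRIMARY KEY,
sname VARCHAR2(50),
phone VARCHAR2(20)
);

CREATE TABLE courses
(
course_id VARCHAR2(10) PRIMARY KEY,
course_description VARCHAR2(50),
credit_hours NUMBER
);

CREATE TABLE student_grade
(
sid NUMBER REFERENCES student(sid),                   -- fk to Student
course_id VARCHAR2(10) REFERENCES courses(course_id), -- fk to Courses
grade CHAR(1),
PRIMARY KEY (sid, course_id)                          -- composite primary key
);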

Third Normal Form: A second normal form relation is in third normal form if all non-
primary attributes (that is, attributes that are not part of the primary key or of any
candidate key) are non-transitively dependent on the primary key.
Assume the relation:

STUDENT (Sid:pk, Activity, Fee)
Further, Activity ------------> Fee; that is, Activity determines Fee.

Sid   Activity   Fee
100   Swimming   100
200   Tennis     100
300   Golf       300
400   Swimming   100

Table STUDENT is in first normal form because all its attributes are simple. Also,
STUDENT is in second normal form because all its non-primary attributes are fully
functionally dependent on the primary key (Sid). Notice that a first normal form relation
with a non-composite (that is, simple) primary key is automatically in second
normal form, because all its non-primary attributes are fully functionally dependent
on the primary key.
Table STUDENT suffers from all 3 anomalies: a new student cannot be added to the
database unless he/she takes an activity, and no activity can be inserted into the
database unless we get a student to take that activity. There is redundancy in the table
(see Swimming), therefore to change the fee for Swimming we must make changes in
more than one place, and that will cause an update anomaly. If student 300 is
deleted from the table we also lose the fact that we had a Golf activity with a fee of
300. To overcome these anomalies the STUDENT table should be converted to smaller
tables. Consider the following three projections of the STUDENT relation:
PROJECT STUDENT ON (Sid, Activity) gives a relation; name it
STUD_ACT (Sid:pk, Activity) with the following data:

STUD_ACT

Sid   Activity
100   Swimming
200   Tennis
300   Golf
400   Swimming

ER DIAGRAM

The ER model defines the conceptual view of a database. It works around real-world entities
and the associations among them. At view level, the ER model is considered a good option for
designing databases.

Entity

An entity can be a real-world object, either animate or inanimate, that can be easily
identified. For example, in a school database, students, teachers, classes, and courses offered
can be considered as entities. All these entities have some attributes or properties that give
them their identity.

An entity set is a collection of similar types of entities. An entity set may contain entities with
attributes sharing similar values. For example, a Students set may contain all the students of a
school; likewise a Teachers set may contain all the teachers of a school from all faculties.
Entity sets need not be disjoint.

Attributes

Entities are represented by means of their properties, called attributes. All attributes have
values. For example, a student entity may have name, class, and age as attributes.

There exists a domain or range of values that can be assigned to attributes. For example, a
student's name cannot be a numeric value. It has to be alphabetic. A student's age cannot be
negative, etc.

Types of Attributes

 Simple attribute − Simple attributes are atomic values, which cannot be divided
further. For example, a student's phone number is an atomic value of 10 digits.

 Composite attribute − Composite attributes are made of more than one simple
attribute. For example, a student's complete name may have first_name and last_name.

 Derived attribute − Derived attributes are the attributes that do not exist in the
physical database, but their values are derived from other attributes present in the
database. For example, average_salary in a department should not be saved directly in
the database; instead it can be derived. For another example, age can be derived from
date_of_birth.

 Single-value attribute − Single-value attributes contain a single value. For example
− Social_Security_Number.

 Multi-value attribute − Multi-value attributes may contain more than one value.
For example, a person can have more than one phone number, email_address, etc.

These attribute types can combine, for example −

 simple single-valued attributes


 simple multi-valued attributes
 composite single-valued attributes
 composite multi-valued attributes
Entity-Set and Keys
A key is an attribute or collection of attributes that uniquely identifies an entity within an
entity set.

For example, the roll_number of a student makes him/her identifiable among students.

 Super Key − A set of attributes (one or more) that collectively identifies an entity in
an entity set.

 Candidate Key − A minimal super key is called a candidate key. An entity set may
have more than one candidate key.

 Primary Key − A primary key is one of the candidate keys chosen by the database
designer to uniquely identify the entity set.

Relationship

The association among entities is called a relationship. For example, an employee works_at a
department, a student enrolls in a course. Here, Works_at and Enrolls are called relationships.

Relationship Set
A set of relationships of similar type is called a relationship set. Like entities, a relationship
too can have attributes. These attributes are called descriptive attributes.

Degree of Relationship
The number of participating entities in a relationship defines the degree of the relationship.

 Binary = degree 2
 Ternary = degree 3
 n-ary = degree n
Mapping Cardinalities
Cardinality defines the number of entities in one entity set which can be associated with the
number of entities of another set via a relationship set.

 One-to-one − One entity from entity set A can be associated with at most one entity
of entity set B, and vice versa.
 One-to-many − One entity from entity set A can be associated with more than one
entity of entity set B; however, an entity from entity set B can be associated with at
most one entity of entity set A.
 Many-to-one − More than one entity from entity set A can be associated with at
most one entity of entity set B; however, an entity from entity set B can be associated
with more than one entity from entity set A.
 Many-to-many − One entity from A can be associated with more than one entity
from B, and vice versa.
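In a relational schema these cardinalities map onto key and foreign-key constraints. A sketch with hypothetical tables a and b:

CREATE TABLE a (a_id NUMBER PRIMARY KEY);

CREATE TABLE b
(
b_id NUMBER PRIMARY KEY,
a_id NUMBER UNIQUE REFERENCES a(a_id)  -- one-to-one: foreign key plus UNIQUE
);

-- Dropping the UNIQUE constraint on b.a_id turns the relationship into
-- one-to-many; a many-to-many relationship needs a separate junction table
-- holding foreign keys to both entity tables.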

CURSOR
A cursor is a work area that Oracle uses to process SQL statements. There are two types
of cursors:
1. Implicit Cursor
2. Explicit Cursor
Implicit cursors are created by Oracle itself, e.g. for any standalone SELECT statement.
Explicit cursors are declared and managed by the user.

Cursors have the following attributes:

%ISOPEN: - Checks whether the cursor is open or not. If the cursor is open it returns
TRUE, else it returns FALSE.

%FOUND: - Returns TRUE if the DML statement affected one or more rows or the SELECT
statement returned one or more rows. Otherwise, it returns FALSE.

%NOTFOUND: - The opposite of %FOUND. It returns TRUE if the DML statement did not
affect any row or the SELECT statement did not fetch any data, else it returns FALSE.

%ROWCOUNT: - Returns the number of rows affected by a DML operation or returned by a
SELECT statement.
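The same attributes also work on the implicit cursor, which PL/SQL names SQL. A minimal sketch, assuming the EMP table used elsewhere in these notes:

begin
update emp set sal = sal * 1.1 where deptno = 10;
if sql%found then
dbms_output.put_line(sql%rowcount || ' row(s) updated');
else
dbms_output.put_line('No rows updated');
end if;
rollback; -- undo the demo change
end;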

Can we reopen the same cursor while it is already open?


-> No

declare

CURSOR c1
IS
SELECT eno,dno FROM sag_test_1;

v_eno NUMBER;
v_dno NUMBER;

begin

OPEN c1;
LOOP
FETCH c1 INTO v_eno,v_dno;
EXIT WHEN c1%NOTFOUND;
dbms_output.put_line('EMP_NO:='||v_eno||' DEPT_NO:='||v_dno);
END LOOP;

OPEN c1; -- fails here with ORA-06511: cursor already open
LOOP
FETCH c1 INTO v_eno,v_dno;
EXIT WHEN c1%NOTFOUND;
dbms_output.put_line('EMP_NO:='||v_eno||' DEPT_NO:='||v_dno);
END LOOP;
CLOSE c1;

CLOSE c1; -- would also fail with ORA-01001: the cursor is already closed

end;

Can we open another cursor while an existing one is open?


-> YES

declare

CURSOR c1
IS
SELECT eno,dno FROM sag_test_1;

CURSOR c2 IS
SELECT ename,sal
FROM sag_test_1;

v_eno NUMBER;
v_dno NUMBER;

v_ename VARCHAR2(10);
v_sal NUMBER;

begin

OPEN c1;

LOOP
FETCH c1 INTO v_eno,v_dno;
EXIT WHEN c1%NOTFOUND;
dbms_output.put_line('EMP_NO:='||v_eno||'DEPT_NO:='||v_dno);
END LOOP;

OPEN c2;
LOOP
FETCH c2 INTO v_ename,v_sal;
EXIT WHEN c2%NOTFOUND;
dbms_output.put_line('Ename:='||v_ename||'Salary:='||v_sal);
END LOOP;
CLOSE c2;

CLOSE c1;

end;

Can we redeclare the same cursor name with a different query?

-> No. In the following block both cursors are named c1, so any reference to c1 raises
PLS-00307 (too many declarations of 'C1' match this call).

declare

CURSOR c1
IS
SELECT eno,dno FROM sag_test_1;

CURSOR c1
IS
SELECT ename,sal FROM sag_test_1;

v_eno NUMBER;
v_dno NUMBER;

v_ename VARCHAR2(10);
sal NUMBER;

begin

OPEN c1;
LOOP
FETCH c1 INTO v_eno,v_dno;
EXIT WHEN c1%NOTFOUND;
dbms_output.put_line('EMP_NO:='||v_eno||'DEPT_NO:='||v_dno);
END LOOP;

OPEN c1;
LOOP
FETCH c1 INTO v_ename,sal;
EXIT WHEN c1%NOTFOUND;
dbms_output.put_line('EMP_Name:='||v_ename||'Salary:='||sal);
END LOOP;
CLOSE c1;

CLOSE c1;

end;

Can we declare the same cursor name but with different input parameters?
-> No, it is not possible.
We will get an error like:

ORA-06550: line 19, column 6:


PLS-00307: too many declarations of 'C1' match this call
ORA-06550: line 19, column 1:
PL/SQL: SQL Statement ignored
ORA-06550: line 22, column 5:
PLS-00307: too many declarations of 'C1' match this call
ORA-06550: line 22, column 5:

DECLARE

CURSOR C1(V_DNO EMP.DEPTNO%TYPE)


IS
SELECT EMPNO,SAL
FROM EMP WHERE DEPTNO=V_DNO;

CURSOR C1(V_JOB EMP.JOB%TYPE)


IS
SELECT EMPNO,SAL
FROM EMP WHERE JOB = V_JOB;

V_EMPNO EMP.EMPNO%TYPE;
V_SAL EMP.SAL%TYPE;

BEGIN

OPEN C1(10);

LOOP
FETCH C1 INTO V_EMPNO,V_SAL;
DBMS_OUTPUT.PUT_LINE('EMP WHOES DEPTNO IS 10 FOR THEM EMP
NO = ' || V_EMPNO || ' SAL =' || V_SAL);
EXIT WHEN (C1%NOTFOUND=TRUE);
END LOOP;

CLOSE C1;

OPEN C1('MANAGER');

LOOP
FETCH C1 INTO V_EMPNO,V_SAL;
DBMS_OUTPUT.PUT_LINE('EMP WHOES JOB IS MANAGER FOR THEM
EMP NO = ' || V_EMPNO || ' SAL =' || V_SAL);
EXIT WHEN (C1%NOTFOUND=TRUE);
END LOOP;

CLOSE C1;

END;

**************** Correct Code *******************

DECLARE

CURSOR C1(V_DNO EMP.DEPTNO%TYPE)


IS
SELECT EMPNO,SAL
FROM EMP WHERE DEPTNO=V_DNO;

CURSOR C2(V_JOB EMP.JOB%TYPE)


IS
SELECT EMPNO,SAL
FROM EMP WHERE JOB = V_JOB;

V_EMPNO EMP.EMPNO%TYPE;
V_SAL EMP.SAL%TYPE;

BEGIN

OPEN C1(10);

LOOP
FETCH C1 INTO V_EMPNO,V_SAL;
EXIT WHEN C1%NOTFOUND; -- exit before printing so the last fetched row is not printed twice
DBMS_OUTPUT.PUT_LINE('EMP WHOSE DEPTNO IS 10: EMP NO = ' || V_EMPNO || ' SAL = ' || V_SAL);
END LOOP;

CLOSE C1;

OPEN C2('MANAGER');

LOOP
FETCH C2 INTO V_EMPNO,V_SAL;
EXIT WHEN C2%NOTFOUND; -- exit before printing so the last fetched row is not printed twice
DBMS_OUTPUT.PUT_LINE('EMP WHOSE JOB IS MANAGER: EMP NO = ' || V_EMPNO || ' SAL = ' || V_SAL);
END LOOP;

CLOSE C2;

END;

Ref Cursor
A REF CURSOR is a datatype that holds a cursor value in the same way that a
VARCHAR2 variable holds a string value.

A REF CURSOR can be opened on the server and passed to the client as a unit, rather than
fetching one row at a time. One can use a ref cursor as the target of an assignment, and it
can be passed as a parameter to other program units. Ref cursors are opened with an
OPEN FOR statement. In most other ways they behave similarly to normal cursors.

History

This feature was introduced with PL/SQL v2.3 (Oracle 7.3).

Example

Create a procedure that opens a cursor and returns a reference to it through an OUT parameter:

create or replace procedure test_proc


(
v_dno emp.deptno%type,
v_op out sys_refcursor
)
as

begin

open v_op for select empno,ename,sal,deptno from emp where deptno=v_dno;

end;

declare

t sys_refcursor;

v_empno emp.empno%type;
v_ename emp.ename%type;
v_sal emp.sal%type;
v_deptno emp.deptno%type;

begin
test_proc('10',t);
LOOP
FETCH t
INTO v_empno,v_ename,v_sal,v_deptno;
EXIT WHEN t%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(v_empno|| ' | ' || v_ename || ' | ' || v_sal || ' | ' ||
v_deptno);
END LOOP;
CLOSE t;

end;

Ref cursors are of two types:

1) Strong ref cursor (also called static structure type)
2) Weak ref cursor (also called dynamic structure type)

1) Strong ref cursor:
--> When a return type is included, the ref cursor is called strong (static structure type).
--> A strong ref cursor supports different SELECT statements, but all of them must return
the same row structure; the underlying table need not be the same.
2) Weak ref cursor:
--> A weak ref cursor allows any type of SELECT statement, irrespective of the row
structure, i.e. any table.

Syntax:
type <typename> is ref cursor [return <returntype>];
Syntax for the open statement:
open <cursorvariable> for select statement ...;


--------Strong cursor ----------------------

declare

type emprefcur is ref cursor return emp%rowtype;

ec emprefcur;
v_ec emp%rowtype;
begin
open ec for select * from emp;
loop
fetch ec into v_ec;
exit when ec%notfound;
dbms_output.put_line(v_ec.empno);
dbms_output.put_line(v_ec.ename);
end loop;
close ec;

dbms_output.put_line('-------------------------------------------------------------------------');
open ec for select * from emp;
loop
fetch ec into v_ec;
exit when ec%notfound;
dbms_output.put_line(v_ec.empno);
dbms_output.put_line(v_ec.ename);
end loop;
close ec;

end;

Weak cursor example
-------------

declare
------------------ weak cursor --------------------------
type refcur is ref cursor;
xc refcur;
v_ec emp%rowtype;
v_dc dept%rowtype;
begin
open xc for select * from emp;
loop
fetch xc into v_ec;
exit when xc%notfound;
dbms_output.put_line(v_ec.ename);
dbms_output.put_line(v_ec.empno);
end loop;

close xc;

dbms_output.put_line('--------------------------------------');
open xc for select * from dept;
loop
fetch xc into v_dc;
exit when xc%notfound;
dbms_output.put_line(v_dc.deptno);
dbms_output.put_line(v_dc.dname);
dbms_output.put_line(v_dc.loc);
end loop;
close xc;

end;

EXCEPTION HANDLING

CREATE OR REPLACE FUNCTION SAG_S5_FUN(F_ENO NUMBER)
RETURN NUMBER
AS
V_COUNT NUMBER;
V_NO_DATA EXCEPTION;
PRAGMA EXCEPTION_INIT(V_NO_DATA, -20009);
BEGIN
SELECT COUNT(*) INTO V_COUNT FROM SAG_TEST_EMP WHERE
EMP_NO = F_ENO;
IF (V_COUNT != 1) THEN
RAISE V_NO_DATA;
ELSE
DBMS_OUTPUT.PUT_LINE(V_COUNT);
RETURN V_COUNT;
END IF;
EXCEPTION
WHEN V_NO_DATA THEN
RAISE_APPLICATION_ERROR(-20009, 'No such emp exist');
RETURN V_COUNT;
END;

PRAGMA EXCEPTION_INIT associates a user-defined exception with an Oracle error number.


For e.g.

declare

child_rec exception;
pragma exception_init(child_rec,-02292);

begin

update dept_1 set dept_no=30 where dept_no=10;

exception

when child_rec then
dbms_output.put_line('Child found');

end;

O/P:-

Child found

Statement processed.


declare

child_rec exception;
pragma exception_init(child_rec,-02292);

begin

update dept_1 set dept_no=30 where dept_no=10;

exception

when child_rec then


raise_application_error(-20004,'Child found');

end;

o/p:-
ORA-20004: Child found

When we specify a user-defined error number in PRAGMA EXCEPTION_INIT that does not
match the error actually raised, the handler never fires and we get the following output:

declare

child_rec exception;
pragma exception_init(child_rec,-20004);

begin

update dept_1 set dept_no=30 where dept_no=10;

exception

when child_rec then


raise_application_error(-20004,'Child found');

end;

ORA-02292: integrity constraint (RAHATE_SCHEMA.EMP_FK) violated - child


record found

Exception Handling in a Bulk Collect Operation

Suppose we have two tables:

Create table Emp_5


(
eno number,
ename varchar2(50),
sal number,
dno varchar2(10)
);

Create table Emp_6

(
eno number,
ename varchar2(50),
sal number,
dno number
);

Now we want to insert data from Emp_5 table to Emp_6 table in fastest way that is by
using Bulk Collect & Forall.

Let’s start with our code.

Step 1:- In this step we will not handle any exception. Then check what will be the
output.

declare
type my_rec is record(eno number,ename varchar2(50),sal number,dno varchar2(50));
type my_tab is table of my_rec index by binary_integer;
t my_tab;

v_sql varchar2(100);

begin

v_sql:='Truncate table emp_6';


execute immediate v_sql;

select * --eno,ename,sal,dno
bulk collect
into t
from emp_5
order by 1;

forall I in 1..t.count

insert into emp_6


values
(t(I).eno,t(I).ename,t(I).sal,t(I).dno);

end;

o/p:-
ORA-01722: invalid number
select * from emp_6 order by 1;

No data found

Step 2:- Here we will handle exception

declare
type my_rec is record(eno number,ename varchar2(50),sal number,dno varchar2(50));
type my_tab is table of my_rec index by binary_integer;
t my_tab;

v_sql varchar2(100);

begin

v_sql:='Truncate table emp_6';


execute immediate v_sql;

select * --eno,ename,sal,dno
bulk collect
into t
from emp_5
order by 1;

forall I in 1..t.count
insert into emp_6
values
(t(I).eno,t(I).ename,t(I).sal,t(I).dno);

exception
when others then
for x in 1..sql%bulk_exceptions.count
loop
dbms_output.put_line(sql%bulk_exceptions(x).error_index||'-'||sqlerrm(-sql
%bulk_exceptions(x).error_code));

end loop;

end;

O/p:-

4-ORA-01722: invalid number

Statement processed.

select * from emp_6 order by 1;

ENO   ENAME   SAL     DNO
1     a       15000   10
2     a2      25000   20
3     a3      21000   10

Step 3:- We are handling the exception with the SAVE EXCEPTIONS clause.

declare
type my_rec is record(eno number,ename varchar2(50),sal number,dno varchar2(50));
type my_tab is table of my_rec index by binary_integer;
t my_tab;

v_sql varchar2(100);

begin

v_sql:='Truncate table emp_6';


execute immediate v_sql;

select * --eno,ename,sal,dno
bulk collect
into t
from emp_5
order by 1;

forall I in 1..t.count save exceptions
insert into emp_6
values
(t(I).eno,t(I).ename,t(I).sal,t(I).dno);

exception
when others then
for x in 1..sql%bulk_exceptions.count
loop
dbms_output.put_line(sql%bulk_exceptions(x).error_index||'-'||sqlerrm(-sql
%bulk_exceptions(x).error_code));
end loop;

end;

O/p:-

4-ORA-01722: invalid number


6-ORA-01722: invalid number

Statement processed.

select * from emp_6 order by 1;

ENO   ENAME   SAL     DNO
1     a       15000   10
2     a2      25000   20
3     a3      21000   10
5     a5      33000   20
7     a7      35000   20

PROCEDURE
Syntax for Procedure
CREATE [OR REPLACE] PROCEDURE procedure_name[ (parameter [,parameter]) ]
IS
[declaration_section]
BEGIN
executable_section
[EXCEPTION
exception_section]

END [procedure_name];

For e.g.

create or replace procedure test_proc

(
p_dno number,
p_output out sys_refcursor
)
as

begin

open p_output for select empno,ename,job,sal from emp where deptno=p_dno;

end;

declare

t sys_refcursor;
empno emp.empno%type;
ename emp.ename%type;
job emp.job%type;
sal emp.sal%type;

begin
test_proc(10,t);
loop
fetch t into empno,ename,job,sal;
exit when t%notfound;
dbms_output.put_line('Employee Number = ' || empno || ' Employee Name = ' || ename
|| ' Job = ' ||job|| ' Salary = ' || sal);
end loop;
close t;

end;

FUNCTION

Syntax for creating function:-


CREATE [OR REPLACE] FUNCTION function_name[ (parameter [,parameter]) ]
RETURN return_datatype
IS | AS
[declaration_section]
BEGIN
executable_section
[EXCEPTION
exception_section]
END [function_name];

For e.g.:-

Can a function return more than one value?

 YES, a function can effectively return more than one value, as the examples below show.

By Using Object & Table Type:-

**********************************************************************
create or replace type my_obj is object
(
empno number,
ename varchar2(50),
sal number
);

**********************************************************************
create or replace type my_tab is table of my_obj;

**********************************************************************
create or replace function my_func(f_dno IN number)
return my_tab
as

t my_tab:=my_tab();
n integer:=0;

begin
dbms_output.put_line('Hii');

for i in (select empno,ename,sal from emp where deptno=f_dno)


loop
t.extend;
n:=n+1;

t(n):=my_obj(i.empno,i.ename,i.sal);
dbms_output.put_line('Empno:= '|| t(n).empno||' Ename:= '|| t(n).ename|| ' Salary:' ||
t(n).sal);
end loop;
return t;
end;

**********************************************************************

select my_func(10) from dual;

select * from table(my_func(10));

**********************************************************************

By Using SYS_REFCURSOR:-

create or replace function my_func2


(f_deptno number)
return sys_refcursor
as
f_output sys_refcursor;
begin
open f_output for select * from emp where deptno=f_deptno;

return f_output;

end;

declare
t sys_refcursor;

x emp%rowtype;

begin

t:= my_func2(10);

loop
fetch t into x;
exit when t%notfound;
dbms_output.put_line(' Empno:= '|| x.empno ||' Emp Name:= ' || x.ename || ' Employee
Salary:= ' || x.sal);
end loop;
close t;
end;

PACKAGE

A package is a schema object that groups logically related PL/SQL objects such as TYPEs,
PROCEDUREs, FUNCTIONs, CURSORs, etc.
A package usually has two parts:
Package Specification
Package Body
The specification is like an interface to the application, whereas the body contains the
definitions of the objects.

Advantages of Packages
Modularity:-
Modularity lets you break an application into smaller modules.
It reduces a complex problem into a set of simple problems.
Easy Application Design:-
When designing an application we only need the interface information in the package
specification. A specification can be compiled without its body; the reverse is not
possible.
Information Hiding:-
With a package you can define which objects are PUBLIC or PRIVATE. For e.g. if a
package contains 4 subprograms of which 3 are PUBLIC and 1 is PRIVATE, the
package hides the implementation of the PRIVATE subprogram so that only the
package body is affected if the implementation changes.

Added Functionality:-
Packages can have PUBLIC variables, cursors, etc., so that they are accessible to
all subprograms executing in that environment. They also allow you to maintain
data across transactions without storing it in the database.
Better Performance:-
When you call a package subprogram for the first time, the whole package is
loaded into memory. Later calls to subprograms in the package therefore require
no disk I/O.

Restriction on PACKAGE:-

You cannot reference remote packaged variables directly or indirectly. For example,
you cannot call the following procedure remotely because it references a packaged
variable in a parameter initialization clause:

CREATE PACKAGE random AS


seed NUMBER;
PROCEDURE initialize (starter IN NUMBER := seed, ...);

For e.g.

CREATE OR REPLACE PACKAGE emp_mgmt AS


FUNCTION hire (last_name VARCHAR2, job_id VARCHAR2,
manager_id NUMBER, salary NUMBER,
commission_pct NUMBER, department_id NUMBER)
RETURN NUMBER;
FUNCTION create_dept(department_id NUMBER, location_id NUMBER)
RETURN NUMBER;
PROCEDURE remove_emp(employee_id NUMBER);
PROCEDURE remove_dept(department_id NUMBER);
PROCEDURE increase_sal(employee_id NUMBER, salary_incr NUMBER);
PROCEDURE increase_comm(employee_id NUMBER, comm_incr NUMBER);
no_comm EXCEPTION;
no_sal EXCEPTION;
END emp_mgmt;
/

CREATE OR REPLACE PACKAGE BODY emp_mgmt AS


tot_emps NUMBER;
tot_depts NUMBER;
FUNCTION hire
(last_name VARCHAR2, job_id VARCHAR2,
manager_id NUMBER, salary NUMBER,

commission_pct NUMBER, department_id NUMBER)
RETURN NUMBER IS new_empno NUMBER;
BEGIN
SELECT employees_seq.NEXTVAL
INTO new_empno
FROM DUAL;
INSERT INTO employees
VALUES (new_empno, 'First', 'Last','[email protected]',
'(415)555-0100','18-JUN-02','IT_PROG',90000000,00,
100,110);
tot_emps := tot_emps + 1;
RETURN(new_empno);
END;
FUNCTION create_dept(department_id NUMBER, location_id NUMBER)
RETURN NUMBER IS
new_deptno NUMBER;
BEGIN
SELECT departments_seq.NEXTVAL
INTO new_deptno
FROM dual;
INSERT INTO departments
VALUES (new_deptno, 'department name', 100, 1700);
tot_depts := tot_depts + 1;
RETURN(new_deptno);
END;
PROCEDURE remove_emp (employee_id NUMBER) IS
BEGIN
DELETE FROM employees
WHERE employees.employee_id = remove_emp.employee_id;
tot_emps := tot_emps - 1;
END;
PROCEDURE remove_dept(department_id NUMBER) IS
BEGIN
DELETE FROM departments
WHERE departments.department_id = remove_dept.department_id;
tot_depts := tot_depts - 1;
SELECT COUNT(*) INTO tot_emps FROM employees;
END;
PROCEDURE increase_sal(employee_id NUMBER, salary_incr NUMBER) IS
curr_sal NUMBER;
BEGIN
SELECT salary INTO curr_sal FROM employees
WHERE employees.employee_id = increase_sal.employee_id;
IF curr_sal IS NULL
THEN RAISE no_sal;
ELSE
UPDATE employees
SET salary = salary + salary_incr
WHERE employees.employee_id = increase_sal.employee_id;
END IF;
END;
PROCEDURE increase_comm(employee_id NUMBER, comm_incr NUMBER) IS
curr_comm NUMBER;
BEGIN
SELECT commission_pct
INTO curr_comm
FROM employees
WHERE employees.employee_id = increase_comm.employee_id;

IF curr_comm IS NULL
THEN RAISE no_comm;
ELSE
UPDATE employees
SET commission_pct = commission_pct + comm_incr
WHERE employees.employee_id = increase_comm.employee_id;
END IF;
END;
END emp_mgmt;
/

select test_pkg.test_fun(7839) from dual;

create or replace package test_pkg as


procedure test_proc3(v_empno emp.empno%type);
function test_fun(f_empno emp.empno%type) return number;
end;

create or replace package body test_pkg as

procedure test_proc3(v_empno emp.empno%type)


as
v_empname emp.ename%type;
begin

select ename into v_empname from emp where empno=v_empno;


dbms_output.put_line('Employee Name := ' || v_empname);

end;

function test_fun(f_empno emp.empno%type)


return number
as
v_sal number;

begin
select sal into v_sal from emp where empno=f_empno;

return v_sal;

end;
end;

TRIGGER
 A trigger is an event-based stored program.
 Triggers are not called directly.
 They run implicitly when their triggering event occurs.
 Triggers monitor changes in the state of the database.
 Database triggers are different from PL/SQL functions & procedures because you
cannot call them directly.
 Database triggers are fired when the triggering event occurs in the database. This
makes them a powerful tool to manage the database.
 You can do the following with triggers:
 Control the behavior of DDL statements.
 Control the behavior of DML statements.
 Enforce referential integrity, complex business rules & security
policies.

* Types of Trigger:-

There are five types of triggers:

1. DDL Trigger
2. DML Trigger
3. Compound Trigger
4. Instead of Trigger
5. System or Database Event Trigger

1. DDL Trigger:-

 These triggers fire when you CREATE, ALTER or DROP an object in the
database.
 They are useful to control or monitor DDL statements.

2. DML Trigger:-
 These triggers fire when you INSERT, UPDATE or DELETE data in a
table.
 You can fire them once for the whole statement or once for each changed row,
using the statement-level or row-level trigger type.
 You can use these triggers to control DML statements.

3. Compound Trigger:-
 This trigger acts as both a "ROW LEVEL" & a "STATEMENT LEVEL"
trigger when you INSERT, UPDATE & DELETE data in a table.
 This trigger lets you capture information at 4 timing points:
 before the firing statement,
 before each row changed by the firing statement,
 after each row changed by the firing statement,
 after the firing statement.

4. Instead of Trigger:-
 This trigger enables you to intercept DML statements issued against a view;
the trigger body runs instead of the DML.
 This trigger allows you to perform DML on otherwise non-updatable views.

5. SYSTEM or DATABASE event Trigger:-

 These triggers fire when a system activity occurs in the database, such as
LOGON and LOGOFF.
 They are useful for auditing system access.

* Limitation of Trigger:-
A trigger body can't be larger than 32,760 bytes, because the trigger body is
stored in a LONG data type column. This means we should keep our trigger body
as small as possible. We can solve this problem by keeping the coding logic in other
schema objects such as procedures, functions & packages. Another advantage of keeping
the code in another schema object is that we can WRAP it, which is not possible for a trigger.

DML TRIGGER
create or replace trigger sag_test_trig
before insert or update or delete
on emp
for each row
DECLARE

v_user varchar2(50);
BEGIN
select user into v_user from dual;
case
when inserting then
insert into emp_log
values
(:new.empno,v_user,'INSERT',sysdate);

when updating then


insert into emp_log
values
(:old.empno,v_user,'UPDATE',sysdate);
when deleting then
insert into emp_log
values
(:old.empno,v_user,'DELETE',sysdate);
end case;
end;

insert into emp values


(44,'Sagar','Engineer','',sysdate,'50000','',40);

update emp set sal='3000'


where empno='7839';

delete emp where empno='44';

select * from emp_log;

EMPNO USER_NAME OPERATION LOG_TIME
44 APEX_PUBLIC_USER DELETE 04/19/2016
44 APEX_PUBLIC_USER INSERT 04/19/2016
7839 APEX_PUBLIC_USER UPDATE 04/19/2016

BEFORE INSERT, UPDATE, AND DELETE:-


CREATE OR REPLACE TRIGGER BEF_IUD_SAG_TEST_EMP
BEFORE INSERT OR UPDATE OR DELETE
ON SAG_TEST_EMP
FOR EACH ROW

DECLARE

V_COUNT NUMBER;
V_USER VARCHAR2 (100);

INVALID_DML EXCEPTION;
PRAGMA EXCEPTION_INIT (INVALID_DML, -20113); -- user-defined error numbers must be in -20000..-20999
PRAGMA AUTONOMOUS_TRANSACTION;

BEGIN

SELECT COUNT (SYSDATE) INTO V_COUNT


FROM DUAL
WHERE SYSDATE BETWEEN TRUNC (SYSDATE) + (06/24) AND TRUNC
(SYSDATE) + (18/24);

SELECT USER INTO V_USER FROM DUAL;

IF(V_COUNT=0)
THEN
RAISE_APPLICATION_ERROR(-20113,'You cannot perform
DML (INSERT, UPDATE, DELETE) operations outside office hours');
ELSE
CASE WHEN INSERTING THEN
INSERT INTO PROJECT_DML
VALUES
(
UPPER(TRIM(V_USER)),

SYSDATE,
UPPER(TRIM('SAG_TEST_EMP')),
UPPER(TRIM('BEFORE INSERT HAS CROSS CHECKED'))
);
WHEN UPDATING THEN
INSERT INTO PROJECT_DML
VALUES
(
UPPER(TRIM(V_USER)),
SYSDATE,
UPPER(TRIM('SAG_TEST_EMP')),
UPPER(TRIM('BEFORE UPDATE HAS CROSS CHECKED'))
);
WHEN DELETING THEN
INSERT INTO PROJECT_DML
VALUES
(
UPPER(TRIM(V_USER)),
SYSDATE,
UPPER(TRIM('SAG_TEST_EMP')),
UPPER(TRIM('BEFORE DELETE HAS CROSS CHECKED'))
);
END CASE;
END IF;
COMMIT;
END;

After INSERT ,UPDATE, DELETE:-

CREATE OR REPLACE TRIGGER aft_iud_sag_test_emp


AFTER INSERT OR UPDATE OR DELETE
ON sag_test_emp

DECLARE

v_user VARCHAR2(100);
PRAGMA AUTONOMOUS_TRANSACTION;

BEGIN

SELECT USER INTO v_user FROM dual;

CASE
WHEN inserting THEN
INSERT INTO project_dml
VALUES
(
UPPER(TRIM(v_user)),
SYSDATE,
UPPER(TRIM('sag_test_emp')),
UPPER(TRIM('AFTER INsert has been cross checked'))
);
WHEN updating THEN
INSERT INTO project_dml
VALUES
(
UPPER(TRIM(v_user)),
SYSDATE,
UPPER(TRIM('sag_test_emp')),
UPPER(TRIM('AFTER Update has been cross checked'))
);
WHEN deleting THEN
INSERT INTO project_dml
VALUES
(
UPPER(TRIM(v_user)),
SYSDATE,
UPPER(TRIM('sag_test_emp')),
UPPER(TRIM('AFTER Delete has been cross checked'))
);
END CASE;

COMMIT;

END;

For e.g.:-

INSERT INTO sag_test_emp


VALUES
(

12,'l',30000,20,TRUNC(SYSDATE)
);
COMMIT;

SELECT * FROM sag_test_emp ORDER BY 1 DESC;

SELECT * FROM project_dml ORDER BY 2 DESC;

UPDATE sag_test_emp
SET ename='Janny'
WHERE emp_no=12;

COMMIT;

SELECT * FROM sag_test_emp ORDER BY 1 DESC;

SELECT * FROM project_dml ORDER BY 2 DESC;

DELETE sag_test_emp
WHERE emp_no=12;

COMMIT;

SELECT * FROM sag_test_emp ORDER BY 1 DESC;

SELECT * FROM project_dml ORDER BY 2 DESC;

*DDL Trigger:-

Oracle provides DDL triggers to audit all schema changes and can report the exact
change, when it was made, and by which user.  There are several ways to audit within
Oracle and the following auditing tools are provided:

 SQL audit command (for DML)

 Auditing with object triggers (DML auditing)

 Auditing with system-level triggers (DML and DDL)

 Auditing with LogMiner (DML and DDL)

 Fine-grained auditing (select auditing)

DDL triggers: Using the Data Definition Language (DDL) triggers, the Oracle DBA
can automatically track all changes to the database, including changes to tables,
indexes, and constraints. The data from this trigger is especially useful for change
control for the Oracle DBA.

create or replace trigger DDLTrigger


AFTER DDL ON DATABASE
BEGIN
insert into
perfstat.stats$ddl_log
(
user_name,
ddl_date,
ddl_type,
object_type,
owner,
object_name
)
VALUES
(
ora_login_user,
sysdate,
ora_sysevent,
ora_dict_obj_type,
ora_dict_obj_owner,
ora_dict_obj_name

);

END;
/

* Compound Trigger:-

Syntax of Compound Trigger:-

CREATE OR REPLACE TRIGGER <trigger-name>


FOR <trigger-action> ON <table-name>
COMPOUND TRIGGER

-- Global declaration.
g_global_variable VARCHAR2(10);

BEFORE STATEMENT IS
BEGIN
NULL; -- Do something here.
END BEFORE STATEMENT;

BEFORE EACH ROW IS


BEGIN
NULL; -- Do something here.
END BEFORE EACH ROW;

AFTER EACH ROW IS


BEGIN
NULL; -- Do something here.
END AFTER EACH ROW;

AFTER STATEMENT IS
BEGIN
NULL; -- Do something here.
END AFTER STATEMENT;

END <trigger-name>;
/

example no :- 01

CREATE OR REPLACE TRIGGER sag_test_5_trigg_221113
FOR INSERT ON sag_test_2
COMPOUND TRIGGER

BEFORE STATEMENT IS
BEGIN
INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'BEFORE STATEMENT
INSERT','SAG_TEST_2',current_date);
END BEFORE STATEMENT;

BEFORE EACH ROW IS


BEGIN
INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,' BEFORE EACH ROW
INSERT','SAG_TEST_2',current_date);
END BEFORE EACH ROW;

AFTER EACH ROW IS


BEGIN
INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,' AFTER EACH ROW
INSERT','SAG_TEST_2',current_date);
END AFTER EACH ROW;

AFTER STATEMENT IS
BEGIN
INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'AFTER STATEMENT
INSERT','SAG_TEST_2',current_date);
END AFTER STATEMENT;

END;

example no:-02

CREATE OR REPLACE TRIGGER sag_test_6_trigg_221113
FOR INSERT OR UPDATE OR DELETE ON sag_test_2
COMPOUND TRIGGER

BEFORE STATEMENT IS
BEGIN
sag_test_6_pro_befstate_221113; -- Calling respective Procedure
END BEFORE STATEMENT;

BEFORE EACH ROW IS


BEGIN
sag_test_6_pro_befechrw_221113; -- Calling respective Procedure
END BEFORE EACH ROW;

AFTER EACH ROW IS


BEGIN
sag_test_6_pro_aftechrw_221113; -- Calling respective Procedure
END AFTER EACH ROW;

AFTER STATEMENT IS
BEGIN
sag_test_6_pro_aftstate_221113; -- Calling respective Procedure
END AFTER STATEMENT;

END;

CREATE OR REPLACE PROCEDURE sag_test_6_pro_befstate_221113


AS

BEGIN

CASE

WHEN inserting THEN
INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'BEFORE STATEMENT
INSERT','SAG_TEST_2',current_date);

WHEN updating THEN


INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'BEFORE STATEMENT
UPDATE','SAG_TEST_2',current_date);

WHEN deleting THEN


INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'BEFORE STATEMENT
DELETE','SAG_TEST_2',current_date);
END CASE;

END;

CREATE OR REPLACE PROCEDURE sag_test_6_pro_befechrw_221113


AS

BEGIN

CASE
WHEN inserting THEN
INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'BEFORE EACH ROW
INSERT','SAG_TEST_2',current_date);

WHEN updating THEN


INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'BEFORE EACH ROW
UPDATE','SAG_TEST_2',current_date);

WHEN deleting THEN
INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'BEFORE EACH ROW
DELETE','SAG_TEST_2',current_date);

END CASE;

END;
CREATE OR REPLACE PROCEDURE sag_test_6_pro_aftechrw_221113
AS

BEGIN

CASE
WHEN inserting THEN
INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'AFTER EACH ROW
INSERT','SAG_TEST_2',current_date);

WHEN updating THEN


INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'AFTER EACH ROW
UPDATE','SAG_TEST_2',current_date);

WHEN deleting THEN


INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'AFTER EACH ROW
DELETE','SAG_TEST_2',current_date);

END CASE;

END;

CREATE OR REPLACE PROCEDURE sag_test_6_pro_aftstate_221113


AS

BEGIN

CASE
WHEN inserting THEN
INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'AFTER STATEMENT
INSERT','SAG_TEST_2',current_date);

WHEN updating THEN


INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'AFTER STATEMENT
UPDATE','SAG_TEST_2',current_date);

WHEN deleting THEN


INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'AFTER STATEMENT
DELETE','SAG_TEST_2',current_date);
END CASE;

END;

* Instead of Trigger

CREATE OR REPLACE TRIGGER ioft_insert_role_perm
INSTEAD OF INSERT
ON role_permission_view
FOR EACH ROW
BEGIN
-- trigger body: perform the actual DML on the base table(s) here
NULL;
END;
/

--Create a base table with an identity column and a virtual (computed) column.
CREATE TABLE BaseTable
(
ID NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
Color VARCHAR2(10) NOT NULL,
Material VARCHAR2(10) NOT NULL,
ComputedCol AS (Color || Material)
);

--Create a view that contains all columns from the base table.
CREATE OR REPLACE VIEW InsteadView
AS SELECT ID, Color, Material, ComputedCol
FROM BaseTable;

--Create an INSTEAD OF INSERT trigger on the view. A direct insert into the
--view would fail because of the identity and virtual columns, so the trigger
--inserts only the base columns.
CREATE OR REPLACE TRIGGER InsteadTrigger
INSTEAD OF INSERT ON InsteadView
FOR EACH ROW
BEGIN
INSERT INTO BaseTable (Color, Material)
VALUES (:new.Color, :new.Material);
END;
/

*System Trigger or Database Trigger

CREATE OR REPLACE TRIGGER On_Logon
AFTER LOGON ON The_user.Schema
BEGIN
Do_Something;
END;

COLLECTION
A collection is an ordered group of elements of the same type. The following kinds of
collections exist in Oracle:

Bounded: has a fixed limit on the number of elements.

Unbounded: has no limit on the number of elements.

Persistent: is stored in the database.

Non-persistent: lives only for the duration of the program, at most a session.

Collection methods are as follows (a short demo appears after this list):


 EXISTS
 COUNT
 LIMIT
 FIRST
 LAST
 PRIOR
 NEXT
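A minimal sketch of these methods on a nested table (note that LIMIT returns a value only for a VARRAY; for a nested table it is NULL):

declare
type num_tab is table of number;
t num_tab := num_tab(10, 20, 30);
begin
dbms_output.put_line('COUNT = ' || t.count);        -- 3
dbms_output.put_line('FIRST = ' || t.first);        -- 1
dbms_output.put_line('LAST = ' || t.last);          -- 3
dbms_output.put_line('NEXT(1) = ' || t.next(1));    -- 2
dbms_output.put_line('PRIOR(2) = ' || t.prior(2));  -- 1
t.delete(2); -- remove element 2
if t.exists(2) then
dbms_output.put_line('element 2 exists');
else
dbms_output.put_line('element 2 no longer exists'); -- this branch runs
end if;
end;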

1) Associative Array:-

 An associative array is an array that can be defined only within a PL/SQL program.
 Neither the array structure nor the data is stored in the database.
 It holds elements of a similar data type.
 Each cell of the array is identified by a subscript, index or cell number.
 The index can be a number or a string.

Syntax for Associative Array:-


TYPE [COLL NAME] IS TABLE OF [ELEMENT DATA TYPE] NOT NULL
INDEX BY [INDEX DATA TYPE]
In the preceding syntax, the index type signifies the data type of the array subscript.
RAW, NUMBER, LONG-RAW, ROWID, and CHAR are the unsupported index data
types.
The suited index types are BINARY_INTEGER, PLS_INTEGER, POSITIVE,
NATURAL,
SIGNTYPE, or VARCHAR2.
For e.g.

Example No: - 01

declare

type get_ascii is table of varchar2(50)


index by binary_integer;

ascii_var get_ascii;

begin

for I in 1..30
loop
ascii_var(I) := ascii(I);

dbms_output.put_line( i || ' = ' || ascii_var(I));

end loop;
end;

1 = 49
2 = 50
3 = 51
4 = 52
5 = 53
6 = 54
7 = 55
.
.
.
.
So on

declare
type my_tab is table of number;
t my_tab;
v_count number:=0;

begin

select empno bulk collect into t from emp;


for I in t.first..t.last
loop
dbms_output.put_line(t(I));
v_count:=v_count+1;
dbms_output.put_line(v_count);
end loop;

end;

o/p:-

7369
1
7499
2
7521
3
7566
4
7654
5
7698
6
7782
7
7788
8
7839
9
7844
10
7876
11
7900
12
7902
13
7934
14

Example No: - 02

DECLARE
TYPE salary IS TABLE OF NUMBER INDEX BY VARCHAR2(20);
salary_list salary;
name VARCHAR2(20);
BEGIN
-- adding elements to the table
salary_list('Rajnish') := 62000;
salary_list('Minakshi') := 75000;
salary_list('Martin') := 100000;
salary_list('James') := 78000;

-- printing the table


name := salary_list.FIRST;
WHILE name IS NOT null LOOP
dbms_output.put_line
('Salary of ' || name || ' is ' || TO_CHAR(salary_list(name)));
name := salary_list.NEXT(name);
END LOOP;
END;

Salary of Rajnish is 62000


Salary of Minakshi is 75000
Salary of Martin is 100000
Salary of James is 78000

PL/SQL procedure successfully completed.

DECLARE
CURSOR c_customers is
select name from customers;

TYPE c_list IS TABLE of customers.name%type INDEX BY binary_integer;


name_list c_list;
counter integer :=0;
BEGIN
FOR n IN c_customers LOOP
counter := counter +1;
name_list(counter) := n.name;
dbms_output.put_line('Customer('||counter|| '):'||name_list(counter));
END LOOP;
END;
/

When the above code is executed at SQL prompt, it produces the following result:

Customer(1): Ramesh
Customer(2): Khilan
Customer(3): kaushik
Customer(4): Chaitali
Customer(5): Hardik
Customer(6): Komal

PL/SQL procedure successfully completed

2) Nested Table:-

 A nested table is a persistent form of collection which can be created in the database
as well as in PL/SQL.
 It is an unbounded form of collection in which the index is maintained by Oracle.
 Oracle automatically marks the minimum index as 1, and it increases from there.
 When nested tables are declared in PL/SQL they behave as one-dimensional
arrays.
 A nested table column resembles a table within a table, but Oracle uses out-of-line
storage to hold the nested table's data.

For e.g. No:-01

CREATE OR REPLACE TYPE nest_tab_1 IS TABLE OF VARCHAR2 (100);

CREATE TABLE sag_test_1


(
eno NUMBER,
ename VARCHAR2(50),
addres nest_tab_1
)
NESTED TABLE addres STORE AS nested_address
;

Insert Operation on Nested Table

INSERT INTO sag_test_1


(eno,ename,addres)

VALUES
(1,'Sam', nest_tab_1('Pune','Maharashatra'));

COMMIT;

select * from sag_test_1;

o/p:-

Update operation on Nested Table

UPDATE sag_test_1
SET addres =nest_tab_1('Mumbai','Maharashtra')
WHERE eno=1;

commit;

Delete operation on Nested table

DELETE sag_test_1
WHERE eno=1;

for e.g. No:-02

CREATE OR REPLACE TYPE nest_tab_1 IS TABLE OF VARCHAR2(100);

CREATE TABLE sag_test_1


(
eno NUMBER,
ename VARCHAR2(100),
address nest_tab_1
)
NESTED TABLE address STORE AS Nested_Address
;

INSERT INTO sag_test_1


VALUES

(1,'Sam',nest_tab_1('Build No:-53','Room No:- 103','Complex Name:- River Wood
Park','Road:- Kalyan Shill Road',
'Landmark:- Opp. Desai Naka','Post_Box:- Padale','Pincode-421204'));

COMMIT;

For e.g. no:-03

CREATE OR REPLACE TYPE obj_address IS OBJECT


(
house_no NUMBER(15),
address_details VARCHAR2(50),
city VARCHAR2(50),
state VARCHAR2(50),
pincode NUMBER,
phone_no VARCHAR2(15)
);
CREATE TABLE basic_info
(
info_id NUMBER,
person_name VARCHAR2(50),
address obj_address
);

INSERT INTO basic_info


VALUES(1,'Sagar Rahate',obj_address(10,'near dombiwali rly
stn','Mumbai','Maharashtra',411223,'022-23456788'));

INSERT INTO basic_info


VALUES(1,'Sagar Rahate',obj_address(20,'near dombiwali rly
stn','Mumbai','Maharashtra',411223,'022-23456788'));

Commit;

3) VARRAY:-
 A VARRAY is a modified form of nested table.
 A VARRAY (variable-size array) is a bounded & persistent form of
collection.
 The VARRAY declaration defines the limit of elements the VARRAY can accommodate.
 The minimum bound is 1 & the maximum is the size of the VARRAY.
 Like a nested table, a VARRAY can be created in the database & in PL/SQL.
 VARRAYs are stored in line with their parent record, as a column value in the parent table.

For e.g.
CREATE OR REPLACE TYPE varray_test_1 IS VARRAY(5) OF NUMBER;

CREATE TABLE sag_test_2


(
NAME VARCHAR2(100),
VERSION varray_test_1
);
INSERT INTO sag_test_2
VALUES
('Oracle',varray_test_1(7,8,9,10,11));

COMMIT;
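To read the VARRAY elements back as individual rows, the TABLE() operator can be used. A sketch against the table just created:

SELECT s.name, t.column_value AS version
FROM sag_test_2 s, TABLE(s.version) t;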

PARTITIONING TABLE

72 of 129
As the number of rows in a table increases, manageability & performance decrease. To
overcome this problem ORACLE introduced the Partitioning table. In a partitioned table,
the huge data of a single table is divided into multiple partitions. With the help
of a partitioned table we can achieve the following goals,
 Performance improves: - Since Oracle has to search only the respective partition
instead of searching the entire table.
 Ease of Management: - Since loading & deletion of data becomes easier for a
partition rather than the entire table.
 Easy Backup & Recovery: - Because of a partitioned table we get more options
for backup & recovery than with one large table.
Types of Partition:-
Oracle has the following types of partitions
 Single Level
1. Range Partition
2. List Partition
3. Hash Partition
 Composite Partition
Oracle supports the following composite partitions (see the sketch after this list)
1. Range-Hash Partition
2. Range-List Partition
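
As a quick sketch of a composite partition (the sales_composite table & its columns are
made up for this example), each RANGE partition on sale_date is further divided into 4
HASH subpartitions on cust_id:

CREATE TABLE sales_composite
(
sale_id NUMBER,
sale_date DATE,
cust_id NUMBER
)
PARTITION BY RANGE (sale_date)
SUBPARTITION BY HASH (cust_id) SUBPARTITIONS 4
(
PARTITION sales_2020 VALUES LESS THAN (TO_DATE('01-JAN-2021','DD-MON-YYYY')),
PARTITION sales_2021 VALUES LESS THAN (TO_DATE('01-JAN-2022','DD-MON-YYYY'))
);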
For a visual illustration you can take the help of the diagram at the following link,

http://docs.oracle.com/cd/B28359_01/server.111/b32024/partition.htm#CACFECJC

1. Single Level Partition:-
Range Partition:-

 Data will be mapped to the respective partition based on the range assigned to the
partition.
 The 'VALUES LESS THAN' keyword is used to assign the range for every
partition.

for e.g.:-

CREATE TABLE sag_emp_1


(eno NUMBER,
ename VARCHAR2(100),
sal NUMBER,
dno NUMBER
)
PARTITION BY RANGE(sal)
(
PARTITION Small_sal VALUES LESS THAN (15000),
PARTITION Medium_sal VALUES LESS THAN (30000),
PARTITION large_sal VALUES LESS THAN (100000)
);

When you perform any DML operation on such a partitioned table, it will affect only the
respective partition rather than the entire table.

For e.g.:- 01

INSERT INTO sag_emp_1


VALUES
(2,'A', 14000, 20);

Commit;

This record will insert into Small_sal partition.

For e.g.:- 02
INSERT INTO sag_emp_1
VALUES
(1,'B',26000,10);

commit;

This record will insert into Medium_sal partition.

for e.g.:- 03

INSERT INTO sag_emp_1


VALUES
(3,'C',96000,30);
commit;

This record will insert into large_sal partition.
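
You can also verify where rows landed by querying a single partition directly with the
PARTITION clause; for e.g. the following returns only the rows stored in the Small_sal
partition:

SELECT * FROM sag_emp_1 PARTITION (Small_sal);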

2 List Partition:-
 In this partitioning technique you specify a list of values for the partition key as the
description of each partition.

for e.g.:-
CREATE TABLE sag_emp_2
(
eno NUMBER,
ename VARCHAR2(100),
sal NUMBER,
dno NUMBER,
designation VARCHAR2(100)
)
PARTITION BY LIST (designation)
(
PARTITION IT_field VALUES
('Trainee_Engineer','Oracle_Developer','Java_Developer','Dot_Net_Developer',
'Software_Developer','IT_Project_Manager'),
PARTITION Electronics_field VALUES ('Electrition','Technition','QA','Superwiser'),
PARTITION Teaching_field VALUES
('Lectural','Class_teacher','HOD','Wise_Principle','Principle','Trusty')

);

INSERT INTO sag_emp_2


VALUES
(1,'A',14000,10,' Trainee_Engineer');

COMMIT;

This record will insert into IT_Field partition.

INSERT INTO sag_emp_2


VALUES
(2,'B',24000,10,'QA');

COMMIT;

This record will insert into Electronics_Field partition.

INSERT INTO sag_emp_2


VALUES
(3,'C',30000,30,'Class_teacher');

COMMIT;

This record will insert into Teaching_Field partition.

Hash Partition

 HASH partitioning maps the data among the partitions based on a hashing algorithm.
 The hash algorithm distributes the data among the partitions, giving each partition
approximately the same size.
 HASH partitioning is used for even distribution of data among a predefined number of
partitions.
 With RANGE & LIST you need to specify which value should go in which
partition, whereas in HASH partitioning this is handled by the database.
 To partition a table using the HASH function it is necessary to append the CREATE
TABLE statement with the PARTITION BY HASH (expr) clause, where expr is the name
of a column.
 After this clause we need to write PARTITIONS num, where num is the number of
partitions into which the table is going to be divided.

The following statement creates a table that uses hashing on the store_id column and
is divided into 4 partitions:

CREATE TABLE employees (


id INT NOT NULL,
fname VARCHAR(30),
lname VARCHAR(30),
hired DATE NOT NULL DEFAULT '1970-01-01',
separated DATE NOT NULL DEFAULT '9999-12-31',
job_code INT,
store_id INT
)
PARTITION BY HASH(store_id)
PARTITIONS 4;

If you do not include a PARTITIONS clause, the number of partitions defaults to 1.

Using the PARTITIONS keyword without a number following it results in a syntax
error.

ALTER TABLE PARTITION Option:-
 We can use the ALTER TABLE statement with a partitioned table for repartitioning:
for adding, dropping, merging & splitting partitions.

Suppose we have following table


CREATE TABLE t1 (
id INT,
year_col INT
)
PARTITION BY RANGE (year_col) (
PARTITION p0 VALUES LESS THAN (1991),
PARTITION p1 VALUES LESS THAN (1995),
PARTITION p2 VALUES LESS THAN (1999)
);

You can add a new partition p3 to this table for storing values less than 2002 as
follows:

ALTER TABLE t1
ADD PARTITION
(
PARTITION p3 VALUES LESS THAN (2002)
);

DROP PARTITION can be used to drop one or more RANGE or LIST partitions.
This statement cannot be used with HASH partitions; instead, use
COALESCE PARTITION (see below). Any data that was stored in the dropped
partitions named in the partition_names list is discarded. For example, given the
table t1 defined previously, you can drop the partitions named p0 and p1 as shown
here:

ALTER TABLE t1
DROP PARTITION p0, p1;

It is also possible to delete the rows from selected partition using TRUNCATE
PARTITION option.
To DELETE the rows of partition P0 we can use following command,
ALTER TABLE T1
TRUNCATE PARTITION p0;

The statement just shown has the same effect as the following DELETE statement:

DELETE FROM t1 WHERE year_col < 1991;

For example, this statement deletes all rows from partitions p1 and p3:

ALTER TABLE t1 TRUNCATE PARTITION p1, p3;

An equivalent DELETE statement is shown here:

DELETE FROM t1 WHERE


(year_col >= 1991 AND year_col < 1995)
OR
(year_col >= 1999 AND year_col < 2002);

You can also use the ALL keyword in place of the list of partition names; in this case,
the statement acts on all partitions in the table.

You can verify that the rows were dropped by checking the
USER_TAB_PARTITIONS view, using a query such as this one (NUM_ROWS is populated
once statistics have been gathered):

SELECT PARTITION_NAME, NUM_ROWS
FROM USER_TAB_PARTITIONS
WHERE TABLE_NAME = 'T1';

For HASH partitions you do not drop partitions; instead you reduce their number with
COALESCE PARTITION, which removes one partition and redistributes its data into the
remaining partitions:

ALTER TABLE t2 COALESCE PARTITION;

Issuing this statement twice against a hash-partitioned table t2 with 6 partitions would
reduce it to 4; the data contained in the removed partitions is merged into the remaining
partitions.

PRAGMA
 PRAGMA is a Compiler Directive Keyword.
 It is used to provide instructions to the compiler.
 It is defined in the DECLARE section of a PL/SQL block.
 There are five types of PRAGMA so far:
1. PRAGMA AUTONOMOUS_TRANSACTION
2. PRAGMA EXCEPTION_INIT
3. PRAGMA RESTRICT_REFERENCES
4. PRAGMA SERIALLY_REUSABLE
5. PRAGMA INLINE

1. PRAGMA AUTONOMOUS_TRANSACTION:-

Prior to ORACLE 8.1, each ORACLE session could have at most one active transaction at a
time. In other words, changes were all or nothing. ORACLE 8i addressed this issue &
came up with a solution called "AUTONOMOUS TRANSACTION".
For instance, if we perform COMMIT or ROLLBACK within the block then it should
not affect the transaction outside of the block. In such a scenario PRAGMA
AUTONOMOUS_TRANSACTION is used.
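
For e.g. (a minimal sketch; the err_log table & procedure name are made up for this
example), the following procedure commits its own INSERT without affecting the caller's
transaction:

CREATE OR REPLACE PROCEDURE log_error (p_msg VARCHAR2)
IS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT INTO err_log (logged_on, msg)
VALUES (SYSDATE, p_msg); -- logged even if the caller rolls back
COMMIT; -- commits only this autonomous transaction
END;
/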

2. PRAGMA EXCEPTION_INIT:-

This type of PRAGMA is used to bind a user-defined exception to a particular error
number.
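
For e.g. (a minimal sketch binding a user-defined exception to ORA-02292, the
child-record-found violation):

DECLARE
e_child_found EXCEPTION;
PRAGMA EXCEPTION_INIT (e_child_found, -2292);
BEGIN
DELETE FROM dept WHERE deptno = 10;
EXCEPTION
WHEN e_child_found THEN
dbms_output.put_line('Cannot delete: child records exist.');
END;
/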

3. PRAGMA RESTRICT_REFERENCES:-

Defines the purity level of a packaged program. This is not required starting with
Oracle8i. Prior to Oracle8i, if you were to invoke a function within a package
specification from a SQL statement, you would have to provide a
RESTRICT_REFERENCES directive to the PL/SQL engine for that function. This
pragma confirms to the Oracle database that the function has the specified side-effects or
ensures that it lacks any such side-effects.

Usage is as follows:

PRAGMA RESTRICT_REFERENCES (function name, WNDS [, WNPS] [, RNDS],
[, RNPS])

WNDS: Writes No Database State. States that the function will not perform any
DMLs.

WNPS: Writes No Package State. States that the function will not modify any Package
variables.

RNDS: Reads No Database State. Analogous to Write. This pragma affirms that the
function will not read any database tables.

RNPS: Reads No Package State. Analogous to Write. This pragma affirms that the
function will not read any package variables.

In some situations, only functions that guarantee those restrictions can be used.
The following is a simple example:
Let’s define a package made of a single function that updates a db table and returns a
number:

SQL> create or replace package pack is
  2  function a return number;
  3  end;
  4  /

SQL> create or replace package body pack is
  2  function a return number is
  3  begin
  4    update emp set empno=0 where 1=2;
  5    return 2;
  6  end;
  7  end;
  8  /

If we try to use the function pack.a in a query statement we’ll get an error:

SQL> select pack.a from dual;
select pack.a from dual
       *
ERROR at line 1:
ORA-14551: cannot perform a DML operation inside a query
ORA-06512: at "MAXR.PACK", line 4

PL/SQL functions can be used inside a query statement only if they modify
neither the database nor package variables.

This error can be discovered only at runtime, when the select statement is executed.
How can we check for these errors at compile time? We can use PRAGMA
RESTRICT_REFERENCES!
If we know that the function will be used in SQL we can define it as follows:

SQL> create or replace package pack is
  2  function a return number;
  3  pragma restrict_references(a,'WNDS');
  4  end;
  5  /

Declaring that the function A will not modify the database state (WNDS stands for
WRITE NO DATABASE STATE).
Once we have made this declaration, if a programmer, not knowing that the function
has to be used in a query statement, tries to write code for A that violates the
PRAGMA:

SQL> create or replace package body pack is
  2  function a return number is
  3  begin
  4    update emp set empno=0 where 1=2;
  5    return 2;
  6  end;
  7  end;
  8  /

Warning: Package Body created with compilation errors.

SVIL> sho err
Errors for PACKAGE BODY PACK:

LINE/COL ERROR
-------- -----------------------------------------------------------------
2/1      PLS-00452: Subprogram 'A' violates its associated pragma

They’ll get an error at compile time…

4. PRAGMA SERIALLY_REUSABLE:-
It tells the compiler that the package’s variables are needed only for a single use. After
this single use Oracle can free the associated memory. It’s really useful for saving
memory when a package uses a large amount of temporary space just once in the session.
Let’s see an example.
Let’s define a package with a single numeric variable “var” not initialized:
SQL> create or replace package pack is
  2  var number;
  3  end;
  4  /

If we assign a value to var, this will preserve that value for the whole session:

SQL> begin
  2  pack.var := 1;
  3  end;
  4  /

SQL> exec dbms_output.put_line('Var='||pack.var);
Var=1

If we use the PRAGMA SERIALLY_REUSABLE, var will preserve the value just
inside the program that initializes it, but is null in the following calls:

SQL> create or replace package pack is
  2  PRAGMA SERIALLY_REUSABLE;
  3  var number;
  4  end;
  5  /

SQL> begin
  2  pack.var := 1;
  3  dbms_output.put_line('Var='||pack.var);
  4  end;
  5  /
Var=1

SQL> exec dbms_output.put_line('Var='||pack.var);
Var=
PRAGMA SERIALLY_REUSABLE is a way to change the default behavior of
package variables, which is as useful as it is heavy on memory.

5. PRAGMA INLINE:-

Oracle 11g added a new feature that the optimizer can use to get better
performance; it’s called subprogram inlining.
The optimizer can (autonomously or on demand) choose to replace a subprogram call with
a local copy of the subprogram.
For example, assume the following code:

declare
  total number;
begin
  total := calculate_nominal + calculate_interests;
end;

Where calculate_nominal and calculate_interests are two functions defined as follows:

function calculate_nominal return number is
  s number;
begin
  select sum(nominal)
    into s
    from deals;

  return s;
end;

function calculate_interests return number is
  s number;
begin
  select sum(interest)
    into s
    from deals;

  return s;
end;

Optimizer can change the code to something like this:

declare
  total number;
  v_calculate_nominal number;
  v_calculate_interests number;
begin
  select sum(nominal)
    into v_calculate_nominal
    from deals;

  select sum(interest)
    into v_calculate_interests
    from deals;

  total := v_calculate_nominal + v_calculate_interests;
end;

Including a copy of the subprograms into the calling program.

PRAGMA INLINE is the tool we have to drive this new feature.
If we don’t want such an optimization we can do:

declare
  total number;
begin
  PRAGMA INLINE(calculate_nominal,'NO');
  PRAGMA INLINE(calculate_interests,'NO');
  total := calculate_nominal + calculate_interests;
end;

If we do want subprogram inlining on calculate_nominal we do:

declare
  total number;
begin
  PRAGMA INLINE(calculate_nominal,'YES');
  total := calculate_nominal + calculate_interests;
end;

Subprogram inlining behaves differently depending on the level of optimization
defined through the db initialization parameter PLSQL_OPTIMIZE_LEVEL.
If this parameter is set to 2 (the default value) the optimizer never uses subprogram
inlining unless the programmer requests it using PRAGMA INLINE YES.
If PLSQL_OPTIMIZE_LEVEL=3 the optimizer can autonomously decide whether to use
subprogram inlining or not. In this case PRAGMA INLINE YES does not force the
optimizer; it’s just a hint.

INDEX
o An INDEX is an Oracle object which is used to speed up access to a
table.
o We should use an INDEX if rows are retrieved frequently but selectively (< 10 % of
the total number of rows of the respective table) & the column is used frequently
in the WHERE clause.
o Basically there are two types of index,
 Implicit Index
 Explicit Index
o Within Explicit Indexes we further have the following types,
 B-Tree Index
 Bit Map Index
 Function Based Index

What is an Index in Oracle?

An index is a performance-tuning method of allowing faster retrieval of records. An


index creates an entry for each value that appears in the indexed columns. By default,
Oracle creates B-tree indexes.

Create an Index

Syntax

The syntax for creating an index in Oracle/PLSQL is:

CREATE [UNIQUE] INDEX index_name


ON table_name (column1, column2, ... column_n)
[ COMPUTE STATISTICS ];

UNIQUE
It indicates that the combination of values in the indexed columns must be
unique.
index_name
The name to assign to the index.
table_name
The name of the table in which to create the index.

column1, column2, ... column_n
The columns to use in the index.
COMPUTE STATISTICS
It tells Oracle to collect statistics during the creation of the index. The statistics
are then used by the optimizer to choose a "plan of execution" when SQL
statements are executed.

Example

Let's look at an example of how to create an index in Oracle/PLSQL.

For example:

CREATE INDEX supplier_idx


ON supplier (supplier_name);

In this example, we've created an index on the supplier table called supplier_idx. It
consists of only one field - the supplier_name field.

We could also create an index with more than one field as in the example below:

CREATE INDEX supplier_idx


ON supplier (supplier_name, city);

We could also choose to collect statistics upon creation of the index as follows:

CREATE INDEX supplier_idx


ON supplier (supplier_name, city)
COMPUTE STATISTICS;

Create a Function-Based Index

In Oracle, you are not restricted to creating indexes on only columns. You can create
function-based indexes.

Syntax

The syntax for creating a function-based index in Oracle/PLSQL is:

CREATE [UNIQUE] INDEX index_name


ON table_name (function1, function2, ... function_n)
[ COMPUTE STATISTICS ];

UNIQUE
It indicates that the combination of values in the indexed columns must be
unique.
index_name
The name to assign to the index.
table_name
The name of the table in which to create the index.
function1, function2, ... function_n
The functions to use in the index.
COMPUTE STATISTICS
It tells Oracle to collect statistics during the creation of the index. The statistics
are then used by the optimizer to choose a "plan of execution" when SQL
statements are executed.

Example

Let's look at an example of how to create a function-based index in Oracle/PLSQL.

For example:

CREATE INDEX supplier_idx


ON supplier (UPPER(supplier_name));

In this example, we've created an index based on the uppercase evaluation of the
supplier_name field.

However, to be sure that the Oracle optimizer uses this index when executing your
SQL statements, be sure that UPPER(supplier_name) does not evaluate to a NULL
value. To ensure this, add UPPER(supplier_name) IS NOT NULL to your WHERE
clause as follows:

SELECT supplier_id, supplier_name, UPPER(supplier_name)


FROM supplier
WHERE UPPER(supplier_name) IS NOT NULL
ORDER BY UPPER(supplier_name);

Rename an Index

Syntax

The syntax for renaming an index in Oracle/PLSQL is:

ALTER INDEX index_name


RENAME TO new_index_name;
index_name
The name of the index that you wish to rename.
new_index_name
The new name to assign to the index.

Example

Let's look at an example of how to rename an index in Oracle/PLSQL.

For example:

ALTER INDEX supplier_idx


RENAME TO supplier_index_name;

In this example, we're renaming the index called supplier_idx to supplier_index_name.

Collect Statistics on an Index

If you forgot to collect statistics on the index when you first created it or you want to
update the statistics, you can always use the ALTER INDEX command to collect
statistics at a later date.

Syntax

The syntax for collecting statistics on an index in Oracle/PLSQL is:

ALTER INDEX index_name


REBUILD COMPUTE STATISTICS;
index_name
The index in which to collect statistics.

Example

Let's look at an example of how to collect statistics for an index in Oracle/PLSQL.

For example:

ALTER INDEX supplier_idx


REBUILD COMPUTE STATISTICS;

In this example, we're collecting statistics for the index called supplier_idx.

Drop an Index

Syntax

The syntax for dropping an index in Oracle/PLSQL is:

DROP INDEX index_name;


index_name
The name of the index to drop.

Example

Let's look at an example of how to drop an index in Oracle/PLSQL.

For example:

DROP INDEX supplier_idx;

1. B-Tree Index:-

By default Oracle creates a B-Tree Index. In a B-Tree, you walk through the branches
until you reach the node you want.

For e.g. if your tree starts from 50 & you are searching for 28, then first you
check whether 28 > 50 or not. Since it is false you move to the left side of the tree (50).
Suppose you then reach 25 as the next node; you check whether 28 > 25 or not. Since the
answer is YES you move to the right side, & so on.

ORACLE implements the B-Tree in a slightly different manner. An Oracle b-tree starts
with two nodes,
1. Header
2. Leaf
The Header contains pointers to the leaf nodes & the values stored in the leaf nodes. If
the header block fills, a new header block is established, and the former header block
becomes a branch node. This is called a three-level B-Tree.

We can also create a multicolumn index, also called a "Concatenated Index" or
"Composite Index".

SQL> create index sales_keys


  2  on sales (book_key, store_key, order_number); 

Index created.

Here, we created an index called sales_keys on three columns of the SALES table. A
multicolumn index can be used by the database, but only from the first, or lead, column.
Our sales_keys index can be used in the following query.

select
  order_number,
  quantity
from
  sales
where 
   book_key = 'B103'; 

Note that the lead column of the index is the book_key, so the database can use the
index in the query above.  I can also use the sales_keys index in the queries below.

select
  order_number,
  quantity
from
  sales

where 
   book_key = 'B103'
and 
   store_key = 'S105'
and 
   order_number = 'O168'; 

However, the database cannot use that index in the following query because the
WHERE clause does not contain the index lead column.

select
  order_number,
  quantity
from
  sales
where 
   store_key = 'S105'
and 
   order_number = 'O168'; 

Also, note that in the query below, the database can answer the query from the index
and so will not access the table at all.

 select
  order_number
from
  sales
where 
   store_key = 'S105'
and 
   book_key = 'B108'; 

As you can see, b-tree indexes are very powerful.  You must remember that a
multicolumn index cannot skip over columns, so the lead index column must be in the
WHERE clause filters.  Oracle has used b-tree indexes for many years, and they are
appropriate for most of your indexing needs.  However, the Oracle database provides
specialized indexes that can provide additional capabilities: the bit-mapped index and
the function-based index.

2. Bit-Map Index:-

A Bit-Map Index is most useful in a data warehouse environment because bitmap indexes
are generally faster when you are only selecting data.

Bit-Map indexes are smaller in size than B-Tree indexes as they store only rowids & a
series of bits.

For e.g.:- consider a table with gender and marital-status columns. For each distinct
value, the index stores a bitmap with one bit per row (the actual storage depends on the
algorithm used internally, which is more complex than this example).

As you can tell from the preceding example, finding all of the females by searching for
the gender bit set to a ‘1’ in the example would be easy. You can similarly find all of
those who are married or even quickly find a combination of gender and marital status.
You should use b-tree indexes when columns are unique or near-unique; you should at
least consider bitmap indexes in all other cases. Although you generally would not use
a b-tree index when retrieving 40 percent of the rows in a table, using a bitmap index
generally makes this task faster than doing a full table scan.
You can use bitmap indexes even when retrieving large percentages (20–80 percent) of
a table.
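
For e.g. (a minimal sketch, assuming a table with a low-cardinality gender column):

CREATE BITMAP INDEX emp_gender_idx
ON employees (gender);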

3. Function Based Index:-

Oracle Function-Based Indexes

Traditionally, performing a function on an indexed column in the where clause of a


query guaranteed an index would not be used. Oracle 8i introduced Function-Based
Indexes to counter this problem. Rather than indexing a column, you index the
function on that column, storing the product of the function, not the original column
data. When a query is passed to the server that could benefit from that index, the query
is rewritten to allow the index to be used. The following code samples give an example
of the use of Function-Based Indexes.

 Build Test Table
 Build Regular Index
 Build Function-Based Index
 Concatenated Columns

2.1 Build Test Table

First we build a test table and populate it with enough data so that use of an index
would be advantageous.

CREATE TABLE user_data (


id NUMBER(10) NOT NULL,
first_name VARCHAR2(40) NOT NULL,
last_name VARCHAR2(40) NOT NULL,
gender VARCHAR2(1),
dob DATE
);

BEGIN
FOR cur_rec IN 1 .. 2000 LOOP
IF MOD(cur_rec, 2) = 0 THEN
INSERT INTO user_data
VALUES (cur_rec, 'John' || cur_rec, 'Doe', 'M', SYSDATE);
ELSE
INSERT INTO user_data
VALUES (cur_rec, 'Jayne' || cur_rec, 'Doe', 'F', SYSDATE);
END IF;
COMMIT;
END LOOP;

END;
/

EXEC DBMS_STATS.gather_table_stats(USER, 'user_data', cascade => TRUE);

At this point the table is not indexed so we would expect a full table scan for any
query.

SET AUTOTRACE ON
SELECT *
FROM user_data
WHERE UPPER(first_name) = 'JOHN2';

Execution Plan
----------------------------------------------------------
Plan hash value: 2489064024

-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 20 | 540 | 5 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| USER_DATA | 20 | 540 | 5 (0)| 00:00:01 |
-------------------------------------------------------------------------------
2.2 Build Regular Index

If we now create a regular index on the FIRST_NAME column we see that the index is
not used.

CREATE INDEX first_name_idx ON user_data (first_name);


EXEC DBMS_STATS.gather_table_stats(USER, 'user_data', cascade => TRUE);

SET AUTOTRACE ON
SELECT *
FROM user_data
WHERE UPPER(first_name) = 'JOHN2';

Execution Plan
----------------------------------------------------------
Plan hash value: 2489064024

-------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 20 | 540 | 5 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| USER_DATA | 20 | 540 | 5 (0)| 00:00:01 |
-------------------------------------------------------------------------------
2.3 Build Function-Based Index

If we now replace the regular index with a function-based index on the FIRST_NAME
column we see that the index is used.

DROP INDEX first_name_idx;


CREATE INDEX first_name_idx ON user_data (UPPER(first_name));
EXEC DBMS_STATS.gather_table_stats(USER, 'user_data', cascade => TRUE);

-- Later releases set these by default.


ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;
ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;

SET AUTOTRACE ON
SELECT *
FROM user_data
WHERE UPPER(first_name) = 'JOHN2';
Execution Plan
----------------------------------------------------------
Plan hash value: 1309354431

----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 36 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| USER_DATA | 1 | 36 | 2 (0)|
00:00:01 |
|* 2 | INDEX RANGE SCAN | FIRST_NAME_IDX | 1 | | 1 (0)|
00:00:01 |
----------------------------------------------------------------------------------------------

The QUERY_REWRITE_INTEGRITY and QUERY_REWRITE_ENABLED


parameters must be set or the server will not be able to rewrite the queries, and will
therefore not be able to use the new index. Later releases have them enabled by default.

2.4 Concatenated Columns

This method works for concatenated indexes also.

DROP INDEX first_name_idx;


CREATE INDEX first_name_idx ON user_data (gender, UPPER(first_name), dob);
EXEC DBMS_STATS.gather_table_stats(USER, 'user_data', cascade => TRUE);

SET AUTOTRACE ON
SELECT *
FROM user_data
WHERE gender = 'M'
AND UPPER(first_name) = 'JOHN2';

Execution Plan
----------------------------------------------------------
Plan hash value: 1309354431

----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 36 | 3 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| USER_DATA | 1 | 36 | 3 (0)|
00:00:01 |
|* 2 | INDEX RANGE SCAN | FIRST_NAME_IDX | 1 | | 2 (0)|
00:00:01 |
----------------------------------------------------------------------------------------------

Remember, function-based indexes require more effort to maintain than regular


indexes, so having concatenated indexes in this manner may increase the incidence of
index maintenance compared to a function-based index on a single column.

HIERARCHICAL QUERY

 LEVEL is a pseudo column in Oracle which is used in hierarchical queries to
identify the hierarchical level in numeric format.
 LEVEL returns 1 for the ROOT, 2 for a child of the root & so on.
 LEVEL must be used with CONNECT BY queries.
 In a hierarchical query we can either go from top to bottom (Top-Down Approach)
or from bottom to top (Bottom-Up Approach).

TOP-DOWN Approach:-
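
A minimal sketch using the classic SCOTT.EMP table (start at the root employee who has
no manager and walk down to the subordinates):

SELECT LEVEL, LPAD(' ', 2*(LEVEL-1)) || ename AS employee
FROM emp
START WITH mgr IS NULL
CONNECT BY PRIOR empno = mgr;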

Bottom Up Approach:-
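
A minimal sketch on the same table (start at a given employee, e.g. empno 7369, and walk
up through the chain of managers):

SELECT LEVEL, ename
FROM emp
START WITH empno = 7369
CONNECT BY PRIOR mgr = empno;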

GLOBAL TEMPORARY TABLE
Data stored in a GTT is private, such that data inserted by a session can be accessed
only by that session.

ON COMMIT DELETE ROWS indicates that the data will be deleted at the end of
the transaction.

CREATE GLOBAL TEMPORARY TABLE my_temp_table (


id NUMBER,
description VARCHAR2(20)
)
ON COMMIT DELETE ROWS;

-- Insert, but don't commit, then check contents of GTT.


INSERT INTO my_temp_table VALUES (1, 'ONE');

SELECT COUNT(*) FROM my_temp_table;

COUNT(*)
----------
1

SQL>

-- Commit and check contents.

COMMIT;

SELECT COUNT(*) FROM my_temp_table;

COUNT(*)
----------
0

SQL>

In contrast, the ON COMMIT PRESERVE ROWS clause indicates that rows should
persist beyond the end of the transaction. They will only be removed at the end of the
session.

CREATE GLOBAL TEMPORARY TABLE my_temp_table (


id NUMBER,
description VARCHAR2(20)
)
ON COMMIT PRESERVE ROWS;

-- Insert and commit, then check contents of GTT.


INSERT INTO my_temp_table VALUES (1, 'ONE');
COMMIT;

SELECT COUNT(*) FROM my_temp_table;

COUNT(*)
----------
1

SQL>

-- Reconnect and check contents of GTT.


CONN test/test

SELECT COUNT(*) FROM my_temp_table;

COUNT(*)
----------
0

Miscellaneous features of GTT:-

 If a TRUNCATE statement is issued against a temporary table, only the session-
specific data gets truncated. There is no effect on the data of other sessions.

 Data in a temporary table is stored in temporary segments of the temporary
tablespace.

 Data in a temporary table is automatically deleted at the end of the session.

 We can also create an index on a temporary table; its scope is the same as the
database session (see the sketch after this list).

 We can also create a view on a temporary table or on a combination of temporary
tables & permanent tables.

 A temporary table can also have triggers.

 Import/Export can be used with a GTT to transfer the table definition, but no data
rows will be processed.
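
A minimal sketch creating an index on the GTT defined above (like the table data, the
index contents are private to each session):

CREATE INDEX my_temp_table_idx ON my_temp_table (id);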

EXTERNAL TABLE
An External Table is complementary to the existing SQL*Loader functionality. It enables
you to access data from an external source. Prior to Oracle 10g we could perform read-
only operations with external tables, but from Oracle 10g onwards we can also perform
write operations on external tables.

How to create an external table?

External tables are created by using the CREATE TABLE ….. ORGANIZATION EXTERNAL
statement.
When you are creating an external table you specify the following attributes:-
 TYPE
o ORACLE_LOADER:- For loading.
o ORACLE_DATAPUMP:- For loading & unloading.

 Default Directory
 Access Parameters
 Location

Example: Creating and Loading an External Table Using ORACLE_LOADER


The steps in this section show an example of using the ORACLE_LOADER access
driver to create and load an external table. A traditional table named emp is defined
along with an external table named emp_load. The external data is then loaded into an
internal table.

Assume your .dat file looks as follows:

56november, 15, 1980 baker mary alice 09/01/2004
87december, 20, 1970 roper lisa marie 01/01/1999

Execute the following SQL statements to set up a default directory (which contains the
data source) and to grant access to it:

CREATE DIRECTORY ext_tab_dir AS '/usr/apps/datafiles';
GRANT READ ON DIRECTORY ext_tab_dir TO SCOTT;

Create a traditional table named emp:

CREATE TABLE emp (emp_no CHAR(6), last_name CHAR(25), first_name
CHAR(20), middle_initial CHAR(1), hire_date DATE, dob DATE);

Create an external table named emp_load (it reads from the ext_tab_dir directory
created above):

SQL> CREATE TABLE emp_load
  2 (employee_number CHAR(5),
  3 employee_dob CHAR(20),
  4 employee_last_name CHAR(20),
  5 employee_first_name CHAR(15),
  6 employee_middle_name CHAR(15),
  7 employee_hire_date DATE)
  8 ORGANIZATION EXTERNAL
  9 (TYPE ORACLE_LOADER
 10 DEFAULT DIRECTORY ext_tab_dir
 11 ACCESS PARAMETERS
 12 (RECORDS DELIMITED BY NEWLINE
 13 FIELDS (employee_number CHAR(2),
 14 employee_dob CHAR(20),
 15 employee_last_name CHAR(18),
 16 employee_first_name CHAR(11),
 17 employee_middle_name CHAR(11),
 18 employee_hire_date CHAR(10) date_format DATE mask "mm/dd/yyyy"
 19 )
 20 )
 21 LOCATION ('info.dat')
 22 );

Table created.

Load the data from the external table emp_load into the table emp:

SQL> INSERT INTO emp (emp_no,
  2 first_name,
  3 middle_initial,
  4 last_name,
  5 hire_date,
  6 dob)
  7 (SELECT employee_number,
  8 employee_first_name,
  9 substr(employee_middle_name, 1, 1),
 10 employee_last_name,
 11 employee_hire_date,
 12 to_date(employee_dob,'month, dd, yyyy')
 13 FROM emp_load);

2 rows created.

Perform the following select operation to verify that the information in the .dat file
was loaded into the emp table:

SQL> SELECT * FROM emp;

EMP_NO LAST_NAME FIRST_NAME M HIRE_DATE DOB
------ ------------------------- -------------------- - --------- ---------
56 baker mary a 01-SEP-04 15-NOV-80
87 roper lisa m 01-JAN-99 20-DEC-70

2 rows selected.

Using External Tables to Load and Unload Data


In the context of external tables, loading data refers to the act of reading data from an
external table and loading it into a table in the database. Unloading data refers to the
act of reading data from a table in the database and inserting it into an external table.

Note:
Data can only be unloaded using the ORACLE_DATAPUMP
access driver.

Loading Data
When data is loaded, the data stream is read from the files specified by the
LOCATION and DEFAULT DIRECTORY clauses. The INSERT statement
generates a flow of data from the external data source to the Oracle SQL engine, where
data is processed. As data from the external source is parsed by the access driver and
provided to the external table interface, it is converted from its external representation
to its Oracle internal datatype.
Unloading Data Using the ORACLE_DATAPUMP Access Driver
To unload data, you use the ORACLE_DATAPUMP access driver. The data stream
that is unloaded is in a proprietary format and contains all the column data for every
row being unloaded.
An unload operation also creates a metadata stream that describes the contents of the
data stream. The information in the metadata stream is required for loading the data
stream. Therefore, the metadata stream is written to the datafile and placed before the
data stream.
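
A minimal unload sketch (the directory ext_tab_dir and the table emp are assumed from
the earlier examples): the CREATE TABLE ... AS SELECT writes the queried rows into the
dump file, which another external table can later load.

CREATE TABLE emp_unload
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY ext_tab_dir
LOCATION ('emp_unload.dmp')
)
AS SELECT * FROM emp;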
Dealing with Column Objects
When the external table is accessed through a SQL statement, the fields of the external
table can be used just like any other field in a normal table. In particular, the fields can
be used as arguments for any SQL built-in function, PL/SQL function, or Java
function. This enables you to manipulate the data from the external source.
Although external tables cannot contain a column object, you can use constructor
functions to build a column object from attributes in the external table. For example,
assume a table in the database is defined as follows:

CREATE TYPE student_type AS object (


student_no CHAR(5),
name CHAR(20))
/

CREATE TABLE roster (


student student_type,
grade CHAR(2));

Also assume there is an external table defined as follows:

CREATE TABLE roster_data (


student_no CHAR(5),
name CHAR(20),

grade CHAR(2))
ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER DEFAULT
DIRECTORY ext_tab_dir
ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
LOCATION ('info.dat'));

To load table roster from roster_data, you would specify something similar to the
following:

INSERT INTO roster (student, grade)


(SELECT student_type(student_no, name), grade FROM roster_data);

Datatype Conversion During External Table Use


When data is moved into or out of an external table, it is possible that the same column
will have a different datatype in each of the following three places:
 The database: This is the source when data is unloaded into an external table and
it is the destination when data is loaded from an external table.
 The external table: When data is unloaded into an external table, the data from
the database is converted, if necessary, to match the datatype of the column in the
external table. Also, you can apply SQL operators to the source data to change its
datatype before the data gets moved to the external table. Similarly, when loading from
the external table into a database, the data from the external table is automatically
converted to match the datatype of the column in the database. Again, you can perform
other conversions by using SQL operators in the SQL statement that is selecting from
the external table. For better performance, the datatypes in the external table should
match those in the database.
 The datafile: When you unload data into an external table, the datatypes for
fields in the datafile exactly match the datatypes of fields in the external table.
However, when you load data from the external table, the datatypes in the datafile may
not match the datatypes in the external table. In this case, the data from the datafile is
converted to match the datatypes of the external table. If there is an error converting a
column, then the record containing that column is not loaded. For better performance,
the datatypes in the datafile should match the datatypes in the external table.
Any conversion errors that occur between the datafile and the external table cause the
row with the error to be ignored. Any errors between the external table and the column
in the database (including conversion errors and constraint violations) cause the entire
operation to terminate unsuccessfully.

When data is unloaded into an external table, data conversion occurs if the datatype of
a column in the source table does not match the datatype of the column in the external
table. If a conversion error occurs, then the datafile may not contain all the rows that
were processed up to that point and the datafile will not be readable. To avoid
problems with conversion errors causing the operation to fail, the datatype of the
column in the external table should match the datatype of the column in the database.
This is not always possible, because external tables do not support all datatypes. In
these cases, the unsupported datatypes in the source table must be converted into a
datatype that the external table can support. For example, if a source table has a LONG
column, the corresponding column in the external table must be a CLOB and the
SELECT subquery that is used to populate the external table must use the TO_LOB
operator to load the column. For example:

CREATE TABLE LONG_TAB_XT (LONG_COL CLOB) ORGANIZATION


EXTERNAL...SELECT TO_LOB(LONG_COL) FROM LONG_TAB;

Parallel Access to External Tables


To enable external table support of parallel processing on the datafiles, use the
PARALLEL clause when you create the external table. Each access driver supports
parallel access slightly differently.
Parallel Access with ORACLE_LOADER
The ORACLE_LOADER access driver attempts to divide large datafiles into chunks
that can be processed separately.
The following file, record, and data characteristics make it impossible for a file to be
processed in parallel:
 Sequential data sources (such as a tape drive or pipe)
 Data in any multibyte character set whose character boundaries cannot be
determined starting at an arbitrary byte in the middle of a string
This restriction does not apply to any datafile with a fixed number of bytes per record.
 Records with the VAR format
Specifying a PARALLEL clause is of value only when large amounts of data are
involved.
Parallel Access with ORACLE_DATAPUMP
When you use the ORACLE_DATAPUMP access driver to unload data, the data is
unloaded in parallel when the PARALLEL clause or parallel hint has been specified
and when multiple locations have been specified for the external table.
Each parallel process writes to its own file. Therefore, the LOCATION clause should
specify as many files as there are degrees of parallelism. If there are fewer files than
the degree of parallelism specified, then the degree of parallelism will be limited to the

number of files specified. If there are more files than the degree of parallelism
specified, then the extra files will not be used.
In addition to unloading data, the ORACLE_DATAPUMP access driver can also load
data. Parallel processes can read multiple dump files or even chunks of the same dump
file concurrently. Thus, data can be loaded in parallel even if there is only one dump
file, as long as that file is large enough to contain multiple file offsets. This is because
when the ORACLE_DATAPUMP access driver unloads data, it periodically
remembers the offset into the dump file of the start of a new data chunk and writes that
information into the file when the unload completes. For nonparallel loads, file offsets
are ignored because only one process at a time can access a file. For parallel loads, file
offsets are distributed among parallel processes for multiple concurrent processing on a
file or within a set of files.

Performance Hints When Using External Tables


When you monitor performance, the most important measurement is the elapsed time
for a load. Other important measurements are CPU usage, memory usage, and I/O
rates.
You can alter performance by increasing or decreasing the degree of parallelism. The
degree of parallelism indicates the number of access drivers that can be started to
process the datafiles. The degree of parallelism enables you to choose on a scale
between slower load with little resource usage and faster load with all resources
utilized. The access driver cannot automatically tune itself, because it cannot determine
how many resources you want to dedicate to the access driver.
An additional consideration is that the access drivers use large I/O buffers for better
performance. On databases with shared servers, all memory used by the access drivers
comes out of the system global area (SGA). For this reason, you should be careful
when using external tables on shared servers. Note that for the ORACLE_LOADER
access driver, you can use the READSIZE clause in the access parameters to specify
the size of the buffers.

External Table Restrictions


This section lists what the external tables feature does not do and also describes some
processing restrictions.
 Exporting and importing of external tables with encrypted columns is not
supported.
 An external table does not describe any data that is stored in the database.
 An external table does not describe how data is stored in the external source.
This is the function of the access parameters.
 Column processing: By default, the external tables feature fetches all columns
defined for an external table. This guarantees a consistent result set for all queries.
However, for performance reasons you can decide to process only the referenced

columns of an external table, thus minimizing the amount of data conversion and data
handling required to execute a query. In this case, a row that is rejected because a
column in the row causes a datatype conversion error will not get rejected in a different
query if the query does not reference that column. You can change this column-
processing behavior with the ALTER TABLE command.
 An external table cannot load data into a LONG column.
 When identifiers (for example, column or table names) are specified in the
external table access parameters, certain values are considered to be reserved words by
the access parameter parser. If a reserved word is used as an identifier, it must be
enclosed in double quotation marks.

GRANT & REVOKE


Description

You can GRANT and REVOKE privileges on various database objects in Oracle.
We'll first look at how to grant and revoke privileges on tables and then how to
grant and revoke privileges on functions and procedures in Oracle.

Grant Privileges on Table

You can grant users various privileges to tables. These privileges can be any
combination of SELECT, INSERT, UPDATE, DELETE, REFERENCES, ALTER,
INDEX, or ALL.

Syntax

The syntax for granting privileges on a table in Oracle is:

GRANT privileges ON object TO user;


privileges

The privileges to assign. It can be any of the following values:

Privilege      Description

SELECT         Ability to perform SELECT statements on the table.
INSERT         Ability to perform INSERT statements on the table.
UPDATE         Ability to perform UPDATE statements on the table.
DELETE         Ability to perform DELETE statements on the table.
REFERENCES     Ability to create a constraint that refers to the table.
ALTER          Ability to perform ALTER TABLE statements to change the table definition.
INDEX          Ability to create an index on the table with the CREATE INDEX statement.
ALL            All privileges on the table.

Example

Let's look at some examples of how to grant privileges on tables in Oracle.

For example, if you wanted to grant SELECT, INSERT, UPDATE, and DELETE
privileges on a table called suppliers to a user name smithj, you would run the
following GRANT statement:

GRANT SELECT, INSERT, UPDATE, DELETE ON suppliers TO smithj;

You can also use the ALL keyword to indicate that you wish ALL permissions to
be granted for a user named smithj. For example:

GRANT ALL ON suppliers TO smithj;

If you wanted to grant only SELECT access on your table to all users, you could
grant the privileges to the public keyword. For example:

GRANT SELECT ON suppliers TO public;

Revoke Privileges on Table

Once you have granted privileges, you may need to revoke some or all of these
privileges. To do this, you can run a revoke command. You can revoke any
combination of SELECT, INSERT, UPDATE, DELETE, REFERENCES, ALTER,
INDEX, or ALL.

Syntax

The syntax for revoking privileges on a table in Oracle is:

REVOKE privileges ON object FROM user;


privileges

The privileges to revoke. It can be any of the following values:

Privilege      Description

SELECT         Ability to perform SELECT statements on the table.
INSERT         Ability to perform INSERT statements on the table.
UPDATE         Ability to perform UPDATE statements on the table.
DELETE         Ability to perform DELETE statements on the table.
REFERENCES     Ability to create a constraint that refers to the table.
ALTER          Ability to perform ALTER TABLE statements to change the table definition.
INDEX          Ability to create an index on the table with the CREATE INDEX statement.
ALL            All privileges on the table.

Example

Let's look at some examples of how to revoke privileges on tables in Oracle.

For example, if you wanted to revoke DELETE privileges on a table called
suppliers from a user named anderson, you would run the following REVOKE
statement:

REVOKE DELETE ON suppliers FROM anderson;

If you wanted to revoke ALL privileges on a table for a user named anderson, you
could use the ALL keyword as follows:

REVOKE ALL ON suppliers FROM anderson;

If you had granted ALL privileges to public (all users) on the suppliers table and
you wanted to revoke these privileges, you could run the following REVOKE
statement:

REVOKE ALL ON suppliers FROM public;

Grant Privileges on Functions/Procedures

When dealing with functions and procedures, you can grant users the ability to
EXECUTE these functions and procedures.

Syntax

The syntax for granting EXECUTE privileges on a function/procedure in Oracle is:

GRANT EXECUTE ON object TO user;

Example

Let's look at some examples of how to grant EXECUTE privileges on a function or
procedure in Oracle.

For example, if you had a function called Find_Value and you wanted to grant
EXECUTE access to the user named smithj, you would run the following GRANT
statement:

GRANT EXECUTE ON Find_Value TO smithj;

If you wanted to grant ALL users the ability to EXECUTE this function, you would
run the following GRANT statement:

GRANT EXECUTE ON Find_Value TO public;

Revoke Privileges on Functions/Procedures

Once you have granted EXECUTE privileges on a function or procedure, you may
need to REVOKE these privileges from a user. To do this, you can execute a
REVOKE command.

Syntax

The syntax for the revoking privileges on a function or procedure in Oracle is:

REVOKE EXECUTE ON object FROM user;

Example

Let's look at some examples of how to revoke EXECUTE privileges on a function
or procedure in Oracle.

If you wanted to revoke EXECUTE privileges on a function called Find_Value
from a user named anderson, you would run the following REVOKE statement:

REVOKE execute ON Find_Value FROM anderson;

If you had granted EXECUTE privileges to public (all users) on the function called
Find_Value and you wanted to revoke these EXECUTE privileges, you could run
the following REVOKE statement:

REVOKE EXECUTE ON Find_Value FROM public;

BULK COLLECT & FOR ALL


BULK COLLECT:-
A SELECT statement which retrieves multiple rows with a single fetch &
improves the speed of data retrieval.

FORALL:-
An INSERT, UPDATE or DELETE statement that uses a collection to change multiple rows
of data very quickly.

PL/SQL statements are run by the PL/SQL statement executor. SQL statements are run by
the SQL statement executor. When the PL/SQL runtime engine encounters a SQL statement,
it will STOP and pass the SQL statement to the SQL engine. The SQL engine will execute
the SQL statement and return the information back to the PL/SQL engine. This transfer of
control is called “Context Switching”. Each context switch incurs overhead that slows
down the overall performance of your program.

Suppose my manager asked me to write a procedure that accepts a department ID and a


salary percentage increase and gives everyone in that department a raise by the
specified percentage. Taking advantage of PL/SQL’s elegant cursor FOR loop and the
ability to call SQL statements natively in PL/SQL, I come up with the code in Listing
1.
Code Listing 1: increase_salary procedure with FOR loop 
PROCEDURE increase_salary (
department_id_in IN employees.department_id%TYPE,
increase_pct_in IN NUMBER)
IS
BEGIN
FOR employee_rec

IN (SELECT employee_id
FROM employees
WHERE department_id =
increase_salary.department_id_in)
LOOP
UPDATE employees emp
SET emp.salary = emp.salary +
emp.salary * increase_salary.increase_pct_in
WHERE emp.employee_id = employee_rec.employee_id;
END LOOP;
END increase_salary;
 
Suppose there are 100 employees in department 15. When I execute this block, 
BEGIN
increase_salary (15, .10);
END;
 
the PL/SQL engine will “switch” over to the SQL engine 100 times, once for each row
being updated.

Take another look at the increase_salary procedure. The SELECT statement identifies
all the employees in a department. The UPDATE statement executes for each of those
employees, applying the same percentage increase to all. In such a simple scenario, a
cursor FOR loop is not needed at all. I can simplify this procedure to nothing more
than the code in Listing 2.
Code Listing 2: Simplified increase_salary procedure without FOR loop 
PROCEDURE increase_salary (
department_id_in IN employees.department_id%TYPE,
increase_pct_in IN NUMBER)
IS
BEGIN
UPDATE employees emp
SET emp.salary =
emp.salary

+ emp.salary * increase_salary.increase_pct_in
WHERE emp.department_id =
increase_salary.department_id_in;
END increase_salary;

The bulk processing features of PL/SQL are designed specifically to reduce the
number of context switches required to communicate from the PL/SQL engine to the
SQL engine.
Use the BULK COLLECT clause to fetch multiple rows into one or more collections
with a single context switch.
Use the FORALL statement when you need to execute the same DML statement
repeatedly for different bind variable values. The UPDATE statement in the
increase_salary procedure fits this scenario; the only thing that changes with each new
execution of the statement is the employee ID.

CREATE OR REPLACE PROCEDURE increase_salary (
   department_id_in IN employees.department_id%TYPE,
   increase_pct_in IN NUMBER)
IS
   TYPE employee_ids_t IS TABLE OF employees.employee_id%TYPE
      INDEX BY PLS_INTEGER;
   l_employee_ids employee_ids_t;
   l_eligible_ids employee_ids_t;

   l_eligible BOOLEAN;
BEGIN
   SELECT employee_id
     BULK COLLECT INTO l_employee_ids
     FROM employees
    WHERE department_id = increase_salary.department_id_in;

   FOR indx IN 1 .. l_employee_ids.COUNT
   LOOP
      check_eligibility (l_employee_ids (indx),
                         increase_pct_in,
                         l_eligible);

      IF l_eligible
      THEN
         l_eligible_ids (l_eligible_ids.COUNT + 1) :=
            l_employee_ids (indx);
      END IF;
   END LOOP;

   FORALL indx IN 1 .. l_eligible_ids.COUNT
      UPDATE employees emp
         SET emp.salary =
                emp.salary
                + emp.salary * increase_salary.increase_pct_in
       WHERE emp.employee_id = l_eligible_ids (indx);
END increase_salary;

About BULK COLLECT


To take advantage of bulk processing for queries, you simply put BULK COLLECT
before the INTO keyword and then provide one or more collections after the INTO
keyword. Here are some things to know about how BULK COLLECT works: 
 It can be used with all three types of collections: associative arrays, nested
tables, and VARRAYs.
 You can fetch into individual collections (one for each expression in the
SELECT list) or a single collection of records.
 The collection is always populated densely, starting from index value 1.
If no rows are fetched, then the collection is emptied of all elements.
Code Listing 5: Fetching values for two columns into a collection 
DECLARE
TYPE two_cols_rt IS RECORD
(
employee_id employees.employee_id%TYPE,
salary employees.salary%TYPE
);

TYPE employee_info_t IS TABLE OF two_cols_rt;

l_employees employee_info_t;
BEGIN
SELECT employee_id, salary
BULK COLLECT INTO l_employees
FROM employees
WHERE department_id = 10;

END;

If you are fetching lots of rows, the collection that is being filled could consume too
much session memory and raise an error. To help you avoid such errors, Oracle
Database offers a LIMIT clause for BULK COLLECT. Suppose that, for example,
there could be tens of thousands of employees in a single department and my session
does not have enough memory available to store 20,000 employee IDs in a collection.
Instead I use the approach in Listing 6.
Code Listing 6: Fetching up to the number of rows specified 
DECLARE
c_limit PLS_INTEGER := 100;

CURSOR employees_cur
IS
SELECT employee_id
FROM employees
WHERE department_id = department_id_in;

TYPE employee_ids_t IS TABLE OF


employees.employee_id%TYPE;

l_employee_ids employee_ids_t;
BEGIN
OPEN employees_cur;

LOOP
FETCH employees_cur
BULK COLLECT INTO l_employee_ids
LIMIT c_limit;

EXIT WHEN l_employee_ids.COUNT = 0;

-- Process the current batch of up to c_limit employee IDs here.
END LOOP;

CLOSE employees_cur;
END;

About FORALL
Whenever you execute a DML statement inside of a loop, you should convert that code
to use FORALL. The performance improvement will amaze you and please your users.

The FORALL statement is not a loop; it is a declarative statement to the PL/SQL
engine: “Generate all the DML statements that would have been executed one row at a
time, and send them all across to the SQL engine with one context switch.”

FORALL and DML Errors


Suppose that I’ve written a program that is supposed to insert 10,000 rows into a table.
After inserting 9,000 of those rows, the 9,001st insert fails with a
DUP_VAL_ON_INDEX error (a unique index violation). The SQL engine passes that
error back to the PL/SQL engine, and if the FORALL statement is written like the one
in Listing 4, PL/SQL will terminate the FORALL statement. The remaining 999 rows
will not be inserted.
If you want the PL/SQL engine to execute as many of the DML statements as possible,
even if errors are raised along the way, add the SAVE EXCEPTIONS clause to the
FORALL header. Then, if the SQL engine raises an error, the PL/SQL engine will save
that information in a pseudocollection named SQL%BULK_EXCEPTIONS, and
continue executing statements. When all statements have been attempted, PL/SQL then
raises the ORA-24381 error.
You can—and should—trap that error in the exception section and then iterate through
the contents of SQL%BULK_EXCEPTIONS to find out which errors have occurred.
You can then write error information to a log table and/or attempt recovery of the
DML statement.
Listing 7 contains an example of using SAVE EXCEPTIONS in a FORALL statement;
in this case, I simply display on the screen the index in the l_eligible_ids collection on
which the error occurred, and the error code that was raised by the SQL engine.
Code Listing 7: Using SAVE EXCEPTIONS with FORALL 
-- Excerpt: the executable section of a procedure named increase_salary;
-- l_eligible_ids and increase_pct_in are declared/passed in the full procedure.
BEGIN
FORALL indx IN 1 .. l_eligible_ids.COUNT SAVE EXCEPTIONS
UPDATE employees emp
SET emp.salary =
emp.salary + emp.salary * increase_pct_in
WHERE emp.employee_id = l_eligible_ids (indx);
EXCEPTION
WHEN OTHERS
THEN
IF SQLCODE = -24381
THEN
FOR indx IN 1 .. SQL%BULK_EXCEPTIONS.COUNT
LOOP
DBMS_OUTPUT.put_line (
SQL%BULK_EXCEPTIONS (indx).ERROR_INDEX
|| ': '
|| SQL%BULK_EXCEPTIONS (indx).ERROR_CODE);
END LOOP;
ELSE
RAISE;
END IF;
END increase_salary;

DYNAMIC SQL
 Dynamic SQL is a programming method that CREATEs or RUNs SQL statements at run time.
 It is useful for ad-hoc queries, or when you do not know the complete SQL statement until run time.
 PL/SQL has two ways to write Dynamic SQL (a DBMS_SQL sketch follows this list):
o Native Dynamic SQL ( EXECUTE IMMEDIATE )
o DBMS_SQL Package
 EXECUTE IMMEDIATE replaces most uses of the DBMS_SQL package.
 It will PARSE & immediately EXECUTE the SQL statement.
 EXECUTE IMMEDIATE will not COMMIT a DML transaction; an explicit COMMIT must be issued.
 Multi-row queries are not supported for returning values; the alternative is to use a temporary table or a REF cursor to store the records.
 Do not use a semi-colon at the end of the SQL statement being executed. Do use a semi-colon at the end when executing a PL/SQL block.
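A hedged, minimal sketch of the DBMS_SQL alternative (the emp table and the bind value are illustrative):

DECLARE
l_cur INTEGER := DBMS_SQL.OPEN_CURSOR;
l_rows INTEGER;
BEGIN
DBMS_SQL.PARSE (l_cur, 'update emp set sal = sal * 1.1 where deptno = :dno', DBMS_SQL.NATIVE);
DBMS_SQL.BIND_VARIABLE (l_cur, ':dno', 10);
l_rows := DBMS_SQL.EXECUTE (l_cur); -- returns the number of rows processed
DBMS_SQL.CLOSE_CURSOR (l_cur);
END;

DBMS_SQL is still needed when the number or types of bind variables or select-list items are unknown until run time; for everything else, EXECUTE IMMEDIATE is shorter and usually faster.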

2.4.1 Example of EXECUTE IMMEDIATE usage

1. To run a DDL statement in PL/SQL.

begin
execute immediate 'set role all';
end;

2. To pass values to a dynamic statement (USING clause).

declare
l_depnam varchar2(20) := 'testing';
l_loc varchar2(10) := 'Dubai';
begin
execute immediate 'insert into dept values (:1, :2, :3)'
using 50, l_depnam, l_loc;
commit;
end;

3. To retrieve values from a dynamic statement (INTO clause).

declare
l_cnt varchar2(20);
begin
execute immediate 'select count(1) from emp'
into l_cnt;
dbms_output.put_line(l_cnt);
end;

4. To call a routine dynamically: The bind variables used for parameters of the routine
have to be specified along with the parameter type. IN type is the default, others have
to be specified explicitly.

declare
l_routin varchar2(100) := 'gen2161.get_rowcnt';
l_tblnam varchar2(20) := 'emp';
l_cnt number;
l_status varchar2(200);
begin
execute immediate 'begin ' || l_routin || '(:2, :3, :4); end;'
using in l_tblnam, out l_cnt, in out l_status;

if l_status != 'OK' then


dbms_output.put_line('error');
end if;
end;

5. To return value into a PL/SQL record type: The same option can be used for
%rowtype variables also.

declare
type empdtlrec is record (empno number(4),
ename varchar2(20),
deptno number(2));
empdtl empdtlrec;
begin
execute immediate 'select empno, ename, deptno ' ||
'from emp where empno = 7934'
into empdtl;
end;

6. To pass and retrieve values: The INTO clause should precede the USING clause.

declare
l_dept pls_integer := 20;
l_nam varchar2(20);
l_loc varchar2(20);
begin
execute immediate 'select dname, loc from dept where deptno = :1'
into l_nam, l_loc
using l_dept ;
end;

7. Multi-row query option. Use the insert statement to populate a temp table for this
option. Use the temporary table to carry out further processing. Alternatively, you may
use REF cursors to by-pass this drawback.

declare
l_sal pls_integer := 2000;
begin
execute immediate 'insert into temp(empno, ename) ' ||
' select empno, ename from emp ' ||
' where sal > :1'
using l_sal;
commit;
end;

FLASH BACK QUERY
https://docs.oracle.com/cd/B13789_01/appdev.101/b10795/adfns_fl.htm

 FLASHBACK provides a way to view PAST states of database objects.
 We can use FLASHBACK for the following:
o Performing queries that return past data.
o Performing queries that return metadata showing a detailed history of changes to the database.
o Recovering tables or individual rows to a previous point in time.
 FLASHBACK uses the Automatic Undo Management (AUM) system to obtain metadata and historical data for transactions.
 It relies on undo data.
 Besides this, Oracle uses the same undo data for the following:
o Rolling back active transactions.
o Recovering terminated transactions during database recovery.
o Providing READ consistency for SQL queries.

Application Development Features

In application development, Flashback can be used to report on historical data or to undo changes. It allows you to do the following:
 Oracle Flashback Query :-
o It allows you to retrieve data as of a specified time using the AS OF clause of the SELECT statement.
 Oracle Flashback Version Query :-
o It retrieves metadata and historical data for a specified time interval.
o You can view all the versions of the rows of a table for a given interval of time.
o You use the VERSIONS BETWEEN clause of the SELECT statement to create a Flashback Version Query.
 Oracle Flashback Transaction Query :-
o It retrieves metadata and historical data for a given transaction, or for all transactions within a given time interval.
o You can also obtain the SQL code to UNDO the changes to a particular row affected by a transaction.
o You can use Flashback Transaction Query together with Flashback Version Query, which provides the transaction_id.
o To perform a Flashback Transaction Query, you select from the FLASHBACK_TRANSACTION_QUERY view.
 DBMS_FLASHBACK Package :-
o Sets the clock back to a time in the past to examine data as of that time.

Database Administration Features:-

DBAs have the following FLASHBACK features (see the sketch after this list):

 Oracle Flashback Table :-
o It recovers a table to a previous point in time.
o You can restore table data while the DB is online.
o With this you can undo changes to a specific table.
 Oracle Flashback Drop :-
o It recovers a dropped table.
 Oracle Flashback Database :-
o It quickly returns the whole database to an earlier point in time.
o This is a fast technique because you do not need to restore a database backup.
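A minimal sketch of the corresponding SQL (the emp table and the 10-minute offset are illustrative; Flashback Table requires row movement to be enabled on the table):

ALTER TABLE emp ENABLE ROW MOVEMENT;
FLASHBACK TABLE emp TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE);
-- Flashback Drop: recover a table that was dropped and still sits in the recycle bin
FLASHBACK TABLE emp TO BEFORE DROP;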

Example

This example uses a Flashback Query to examine the state of a table at a previous time.
Suppose, for instance, that a DBA discovers at 12:30 PM that data for employee JOHN
had been deleted from the employee table, and the DBA knows that at 9:30AM the
data for JOHN was correctly stored in the database. The DBA can use a Flashback
Query to examine the contents of the table at 9:30, to find out what data had been lost.
If appropriate, the DBA can then re-insert the lost data in the database.

The following query retrieves the state of the employee record for JOHN at 9:30AM,
April 4, 2003:

SELECT * FROM employee AS OF TIMESTAMP


TO_TIMESTAMP('2003-04-04 09:30:00', 'YYYY-MM-DD HH:MI:SS')
WHERE name = 'JOHN';

This INSERT then restores John's information to the employee table:

INSERT INTO employee


(SELECT * FROM employee AS OF TIMESTAMP
TO_TIMESTAMP('2003-04-04 09:30:00', 'YYYY-MM-DD HH:MI:SS')
WHERE name = 'JOHN');

Tips for Using Flashback Query


 You can specify or omit the AS OF clause for each table, and specify different
times for different tables. Use an AS OF clause in a query to perform DDL
operations or DML operations in the same session as the query.
 To use the results of a Flashback Query in a DDL or DML statement that affects
the current state of the database, use an AS OF clause inside an INSERT or
CREATE TABLE AS SELECT statement.
 You can create a view that refers to past data by using the AS OF clause in the
SELECT statement that defines the view. If you specify a relative time by
subtracting from SYSDATE, the past time is recalculated for each query. For
example:

CREATE VIEW hour_ago AS


SELECT * FROM employee AS OF
TIMESTAMP (SYSTIMESTAMP - INTERVAL '60' MINUTE);

 You can use the AS OF clause in self-joins, or in set operations such as


INTERSECT and MINUS, in order to extract or compare data from two
different times. You can store the results by preceding a Flashback Query with a
CREATE TABLE AS SELECT or INSERT INTO TABLE SELECT statement.
For example, this query re-inserts into table employee the rows that were present
there an hour ago:

INSERT INTO employee
(SELECT * FROM employee AS OF
TIMESTAMP (SYSTIMESTAMP - INTERVAL '60' MINUTE)
MINUS SELECT * FROM employee);

Using DBMS_FLASHBACK Package

 It generally provides the same functionality as Flashback Query, but Flashback Query is sometimes more convenient than the DBMS_FLASHBACK package.
 This package acts as a time machine: you can turn back the clock, carry out normal queries, and then return to the present.
 You can use this package to perform queries on past data without any special clause such as AS OF or VERSIONS BETWEEN.
 To use it in your PL/SQL code (see the sketch after this list):
o CALL DBMS_FLASHBACK.ENABLE_AT_TIME or DBMS_FLASHBACK.ENABLE_AT_SYSTEM_CHANGE_NUMBER to turn back the clock.
o After this, perform normal queries. Do not perform DDL or DML operations.
o CALL DBMS_FLASHBACK.DISABLE to return to the present.
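A minimal sketch (the employee table and the one-hour offset are illustrative):

BEGIN
DBMS_FLASHBACK.ENABLE_AT_TIME (SYSTIMESTAMP - INTERVAL '60' MINUTE);
END;

-- ordinary queries now see data as of one hour ago, with no AS OF clause
SELECT * FROM employee WHERE empno = 7788;

BEGIN
DBMS_FLASHBACK.DISABLE;
END;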

You can use a cursor to store the results of queries into the past. To do this, open the
cursor before calling DBMS_FLASHBACK.DISABLE. After storing the results and
then calling DISABLE, you can do the following:

 Perform INSERT or UPDATE operations, to modify the current database state


using the stored results from the past.
 Compare current data with the past data: After calling DISABLE, open a second
cursor. Fetch from the first cursor to retrieve past data; fetch from the second
cursor to retrieve current data. You can store the past data in a temporary table,
and then use set operators such as MINUS or UNION to contrast or combine the
past and current data.

2.5 Using ORA_ROWSCN

ORA_ROWSCN is a pseudocolumn of any table that is not fixed or external. It represents the SCN of the most recent change to a given row; that is, the latest COMMIT operation for the row. For example:

SQL> SELECT ora_rowscn, name, salary FROM employee WHERE empno = 7788;

ORA_ROWSCN NAME SALARY
---------- ---- ------
    202553 Fudd   3000

The latest COMMIT operation for the row took place at approximately SCN 202553.
(You can use function SCN_TO_TIMESTAMP to convert an SCN, like
ORA_ROWSCN, to the corresponding TIMESTAMP value.)
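For instance, a hedged sketch of that conversion against the same row:

SELECT SCN_TO_TIMESTAMP(ora_rowscn) AS commit_time, name, salary
FROM employee
WHERE empno = 7788;
-- note: SCN_TO_TIMESTAMP raises ORA-08181 for SCNs older than the retained mapping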

ORA_ROWSCN is in fact a conservative upper bound of the latest commit time: the actual commit SCN can be somewhat earlier. ORA_ROWSCN is more precise (closer to the actual commit SCN) for a row-dependent table (created using CREATE TABLE with the ROWDEPENDENCIES clause).
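As an illustration (the table is hypothetical), a row-dependent table is created like this:

CREATE TABLE emp_rd (
empno  NUMBER,
name   VARCHAR2(16),
salary NUMBER
) ROWDEPENDENCIES; -- stores a per-row commit SCN, at the cost of a few extra bytes per row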

Noteworthy uses of ORA_ROWSCN in application development include concurrency control and client cache invalidation. To see how you might use it in concurrency control, consider the following scenario.

Your application examines a row of data, and records the corresponding ORA_ROWSCN as 202553. Later, the application needs to update the row, but only if
its record of the data is still accurate. That is, this particular update operation depends,
logically, on the row not having been changed. The operation is therefore made
conditional on the ORA_ROWSCN being still 202553. Here is an equivalent
interactive command:

SQL> UPDATE employee SET salary = salary + 100
     WHERE empno = 7788 AND ora_rowscn = 202553;

0 rows updated.

The conditional update fails in this case, because the ORA_ROWSCN is no longer
202553. This means that some user or another application changed the row and
performed a COMMIT more recently than the recorded ORA_ROWSCN.

Your application queries again to obtain the new row data and ORA_ROWSCN.
Suppose that the ORA_ROWSCN is now 415639. The application tries the conditional
update again, using the new ORA_ROWSCN. This time, the update succeeds, and it is
committed. Here is an interactive equivalent:

SQL> UPDATE employee SET salary = salary + 100
     WHERE empno = 7788 AND ora_rowscn = 415639;

1 row updated.

SQL> COMMIT;

Commit complete.

SQL> SELECT ora_rowscn, name, salary FROM employee WHERE empno = 7788;

ORA_ROWSCN NAME SALARY
---------- ---- ------
    465461 Fudd   3100

The SCN corresponding to the new COMMIT is 465461.

Besides using ORA_ROWSCN in an UPDATE statement WHERE clause, you can use
it in a DELETE statement WHERE clause or the AS OF clause of a Flashback Query.

See Also:

 Oracle Database SQL Reference

2.6 Using Flashback Version Query

You use a Flashback Version Query to retrieve the different versions of specific rows
that existed during a given time interval. A new row version is created whenever a
COMMIT statement is executed.

You specify a Flashback Version Query using the VERSIONS BETWEEN clauses of
the SELECT statement. Here is the syntax:

VERSIONS {BETWEEN {SCN | TIMESTAMP} start AND end}

where start and end are expressions representing the start and end of the time interval to be queried. The interval is closed at both ends: the upper and lower limits specified (start and end) are both included in the time interval.

The Flashback Version Query returns a table with a row for each version of the row
that existed at any time during the time interval you specify. Each row in the table
includes pseudo columns of metadata about the row version, described in Table 15-1.
This information can reveal when and how a particular change (perhaps erroneous)
occurred to your database.

Table 15-1   Flashback Version Query Row Data Pseudocolumns

VERSIONS_STARTSCN, VERSIONS_STARTTIME
Starting System Change Number (SCN) or TIMESTAMP when the row version was created. This identifies the time when the data first took on the values reflected in the row version. You can use this to identify the past target time for a Flashback Table or Flashback Query operation. If this is NULL, then the row version was created before the lower time bound of the query BETWEEN clause.

VERSIONS_ENDSCN, VERSIONS_ENDTIME
SCN or TIMESTAMP when the row version expired. This identifies the row expiration time. If this is NULL, then either the row version was still current at the time of the query or the row corresponds to a DELETE operation.

VERSIONS_XID
Identifier of the transaction that created the row version.

VERSIONS_OPERATION
Operation performed by the transaction: I for insertion, D for deletion, or U for update. The version is that of the row that was inserted, deleted, or updated; that is, the row after an INSERT operation, the row before a DELETE operation, or the row affected by an UPDATE operation. Note: For user updates of an index key, a Flashback Version Query may treat an UPDATE operation as two operations, DELETE plus INSERT, represented as two version rows with a D followed by an I VERSIONS_OPERATION.

A given row version is valid starting at its time VERSIONS_START* up to, but not including, its time VERSIONS_END*. That is, it is valid for any time t such that VERSIONS_START* <= t < VERSIONS_END*. For example, the following output indicates that the salary was 10243 from September 9, 2003, included, to November 25, 2003, not included.

VERSIONS_START_TIME VERSIONS_END_TIME SALARY
------------------- ----------------- ------
09-SEP-2003         25-NOV-2003        10243

Here is a typical Flashback Version Query:

SELECT versions_startscn, versions_starttime,
versions_endscn, versions_endtime,
versions_xid, versions_operation,
name, salary
FROM employee
VERSIONS BETWEEN TIMESTAMP
TO_TIMESTAMP('2003-07-18 14:00:00', 'YYYY-MM-DD HH24:MI:SS')
AND TO_TIMESTAMP('2003-07-18 17:00:00', 'YYYY-MM-DD HH24:MI:SS')
WHERE name = 'JOE';

Pseudocolumn VERSIONS_XID provides a unique identifier for the transaction that put the data in that state. You can use this value in connection with a Flashback
put the data in that state. You can use this value in connection with a Flashback
Transaction Query to locate metadata about this transaction in the
FLASHBACK_TRANSACTION_QUERY view, including the SQL required to undo
the row change and the user responsible for the change - see "Using Flashback
Transaction Query".

See Also:

Oracle Database SQL Reference for information on the Flashback Version Query pseudocolumns and the syntax of the VERSIONS clause

2.7 Using Flashback Transaction Query

A Flashback Transaction Query is a query on the view FLASHBACK_TRANSACTION_QUERY. You use a Flashback Transaction Query to
obtain transaction information, including SQL code that you can use to undo each of
the changes made by the transaction.

See Also:

Oracle Database Backup and Recovery Advanced User's Guide and Oracle Database Administrator's Guide for information on how a
DBA can use the Flashback Table feature to restore an entire table,
rather than individual rows

As an example, the following statement queries the FLASHBACK_TRANSACTION_QUERY view for transaction information, including the transaction ID, the operation, the operation start and end SCNs, the user responsible for the operation, and the SQL code to undo the operation:

SELECT xid, operation, start_scn, commit_scn, logon_user, undo_sql
FROM flashback_transaction_query
WHERE xid = HEXTORAW('000200030000002D');

As another example, the following query uses a Flashback Version Query as a subquery to associate each row version with the LOGON_USER responsible for the row data change.

SELECT xid, logon_user FROM flashback_transaction_query
WHERE xid IN (SELECT versions_xid FROM employee VERSIONS
BETWEEN TIMESTAMP
TO_TIMESTAMP('2003-07-18 14:00:00', 'YYYY-MM-DD HH24:MI:SS') AND
TO_TIMESTAMP('2003-07-18 17:00:00', 'YYYY-MM-DD HH24:MI:SS'));

2.7.1 Flashback Transaction Query and Flashback Version Query: Example

This example demonstrates the use of a Flashback Transaction Query in conjunction with a Flashback Version Query. The example assumes simple variations of the
employee and departments tables in the sample hr schema.

In this example, a DBA carries out the following series of actions in SQL*Plus:

connect hr/hr
CREATE TABLE emp
(empno number primary key, empname varchar2(16), salary number);
INSERT INTO emp VALUES (111, 'Mike', 555);
COMMIT;

CREATE TABLE dept (deptno number, deptname varchar2(32));


INSERT INTO dept VALUES (10, 'Accounting');
COMMIT;

At this point, emp and dept have one row each. In terms of row versions, each table has
one version of one row. Next, suppose that an erroneous transaction deletes employee
id 111 from table emp:

UPDATE emp SET salary = salary + 100 where empno = 111;
INSERT INTO dept VALUES (20, 'Finance');
DELETE FROM emp WHERE empno = 111;
COMMIT;

Subsequently, a new transaction reinserts employee id 111 with a new employee name
into the emp table.

INSERT INTO emp VALUES (111, 'Tom', 777);


UPDATE emp SET salary = salary + 100 WHERE empno = 111;
UPDATE emp SET salary = salary + 50 WHERE empno = 111;
COMMIT;

At this point, the DBA detects the application error and needs to diagnose the problem.
The DBA issues the following query to retrieve versions of the rows in the emp table
that correspond to empno 111. The query uses Flashback Version Query
pseudocolumns.

connect dba_name/password
SELECT versions_xid XID, versions_startscn START_SCN,
versions_endscn END_SCN, versions_operation OPERATION,
empname, salary FROM hr.emp
VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE
where empno = 111;

XID START_SCN END_SCN OPERATION EMPNAME SALARY


---------------- ---------- --------- ---------- ---------- ----------
0004000700000058 113855 I Tom 927
000200030000002D 113564 D Mike 555
000200030000002E 112670 113564 I Mike 555
3 rows selected

The results table reads chronologically, from bottom to top. The third row corresponds
to the version of the row in emp that was originally inserted in the table when the table
was created. The second row corresponds to the row in emp that was deleted by the
erroneous transaction. The first row corresponds to the version of the row in emp that
was reinserted with a new employee name.

The DBA identifies transaction 000200030000002D as the erroneous transaction and issues the following Flashback Transaction Query to audit all changes made by this transaction:

SELECT xid, start_scn "START", commit_scn "COMMIT",
operation op, logon_user "USER",
undo_sql FROM flashback_transaction_query
WHERE xid = HEXTORAW('000200030000002D');

XID START COMMIT OP USER UNDO_SQL


---------------- ----- ------ -- ---- ---------------------------
000200030000002D 195243 195244 DELETE HR insert into "HR"."EMP"
("EMPNO","EMPNAME","SALARY") values ('111','Mike','655');

000200030000002D 195243 195244 INSERT HR delete from "HR"."DEPT"


where ROWID = 'AAAKD4AABAAAJ3BAAB';

000200030000002D 195243 195244 UPDATE HR update "HR"."EMP"


set "SALARY" = '555' where ROWID = 'AAAKD2AABAAAJ29AAA';

000200030000002D 195243 113565 BEGIN HR

4 rows selected

The rightmost column (undo_sql) contains the SQL code that will undo the
corresponding change operation. The DBA can execute this code to undo the changes
made by that transaction. The USER column (logon_user) shows the user responsible
for the transaction.

A DBA might also be interested in knowing all changes made in a certain time
window. In our scenario, the DBA performs the following query to view the details of
all transactions that executed since the erroneous transaction identified earlier
(including the erroneous transaction itself):

SELECT xid, start_scn, commit_scn, operation, table_name, table_owner
FROM flashback_transaction_query
WHERE table_owner = 'HR' AND
start_timestamp >=
TO_TIMESTAMP ('2002-04-16 11:00:00','YYYY-MM-DD HH:MI:SS');

XID              START_SCN COMMIT_SCN OPERATION TABLE_NAME TABLE_OWNER
---------------- --------- ---------- --------- ---------- -----------
0004000700000058 195245 195246 UPDATE EMP HR
0004000700000058 195245 195246 UPDATE EMP HR

0004000700000058 195245 195246 INSERT EMP HR
000200030000002D 195243 195244 DELETE EMP HR
000200030000002D 195243 195244 INSERT DEPT HR
000200030000002D 195243 195244 UPDATE EMP HR

6 rows selected

2.8 Flashback Tips

The following tips and restrictions apply to using flashback features.

2.8.1 Flashback Tips - Performance

 For better performance, generate statistics on all tables involved in a Flashback Query by using the DBMS_STATS package, and keep the statistics current.
Flashback Query always uses the cost-based optimizer, which relies on these
statistics.
 The performance of a query into the past depends on how much undo data must
be accessed. For better performance, use queries to select small sets of past data
using indexes, not to scan entire tables. If you must do a full table scan, consider
adding a parallel hint to the query.
 The performance cost in I/O is the cost of paging in data and undo blocks that are not already in the buffer cache. The performance cost in CPU use is the cost of applying undo information to affected data blocks. When operating on changes in the recent past, flashback features are essentially CPU bound.
 Use index structures for Flashback Version Query: the database keeps undo data
for index changes as well as data changes. Performance of index lookup-based
Flashback Version Query is an order of magnitude faster than the full table scans
that are otherwise needed.
 In a Flashback Transaction Query, the type of the xid column is RAW(8). To
take advantage of the index built on the xid column, use the HEXTORAW
conversion function: HEXTORAW(xid).
 Flashback Query against a materialized view does not take advantage of query
rewrite optimizations.

See Also:

Oracle Database Performance Tuning Guide

2.8.2 Flashback Tips - General

 Should you use the DBMS_FLASHBACK package or other flashback features? Use ENABLE/DISABLE calls to the DBMS_FLASHBACK package around SQL code that you do not control, or when you want to use the same past time for several consecutive queries. Use Flashback Query, Flashback Version Query, or Flashback Transaction Query for SQL that you write, for convenience. A Flashback Query, for example, is flexible enough to do comparisons and store results in a single query.
 To obtain an SCN to use later with a flashback feature, use
DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER.
 You can compute or retrieve a past time to use in a query by using a function
return value as a timestamp or SCN argument. For example, you can perform
date and time calculations by adding or subtracting an INTERVAL value to the
value of the SYSTIMESTAMP function.
 You can query locally or remotely (Flashback Query, Flashback Version Query, or Flashback Transaction Query). For example, here is a remote Flashback Query:
 SELECT * FROM employee@some_remote_host AS OF
 TIMESTAMP (SYSTIMESTAMP - INTERVAL '60' MINUTE);

 To ensure database consistency, always perform a COMMIT or ROLLBACK
operation before querying past data.
 Remember that all flashback processing is done using the current session
settings, such as national language and character set, not the settings that were in
effect at the time being queried.
 Some DDLs that alter the structure of a table, such as drop/modify column,
move table, drop partition, and truncate table/partition, invalidate any existing
undo data for the table. It is not possible to retrieve data from a point earlier than
the time such DDLs were executed. Trying such a query results in error ORA-
1466. This restriction does not apply to DDL operations that alter the storage
attributes of a table, such as PCTFREE, INITTRANS and MAXTRANS.
 Use an SCN to query past data at a precise time. If you use a timestamp, the
actual time queried might be up to 3 seconds earlier than the time you specify.
Internally, Oracle Database uses SCNs; these are mapped to timestamps at a
granularity of every 3 seconds.

For example, assume that the SCN values 1000 and 1005 are mapped to the
times 8:41 and 8:46 AM respectively. A query for a time between 8:41:00 and
8:45:59 AM is mapped to SCN 1000; a Flashback Query for 8:46 AM is mapped
to SCN 1005.

Due to this time-to-SCN mapping, if you specify a time that is slightly after a
DDL operation (such as a table creation) the database might actually use an SCN
that is just before the DDL operation. This can result in error ORA-1466.

 You cannot retrieve past data from a V$ view in the data dictionary. Performing
a query on such a view always returns the current data. You can, however,
perform queries on past data in other views of the data dictionary, such as
USER_TABLES.

SQL LOADER
 SQL*Loader loads data from external files into database tables.
 You can use SQL*Loader to do the following:
o LOAD data from multiple data files in the same session.
o LOAD data into multiple tables in the same session.
o Selectively load data.
o Manipulate data before loading it.
o Generate unique sequence key values for specific columns.
 SQL*Loader takes its input from a .ctl (control) file.
 The control file references one or more data files.
 The output of SQL*Loader can be a LOG file, a BAD file, and a DISCARD file.

SQL*Loader Parameter:-

SQL*Loader is invoked when you specify the sqlldr command.

In situations where you always use the same parameters for which the values seldom
change, it can be more efficient to specify parameters using the following methods,
rather than on the command line:
 Parameters can be grouped together in a parameter file. You could then specify
the name of the parameter file on the command line using the PARFILE
parameter.
 Certain parameters can also be specified within the SQL*Loader control file by
using the OPTIONS clause.

PARFILE (parameter file)

Default: none
PARFILE specifies the name of a file that contains commonly used command-line
parameters. For example, the command line could read:

sqlldr PARFILE=example.par

The parameter file could have the following contents:

USERID=scott/tiger
CONTROL=example.ctl
ERRORS=9999
LOG=example.log

OPTIONS Clause
The following command-line parameters can be specified using the OPTIONS clause.
These parameters are described in greater detail in Chapter 7.

BINDSIZE = n
COLUMNARRAYROWS = n
DIRECT = {TRUE | FALSE}
ERRORS = n
LOAD = n
MULTITHREADING = {TRUE | FALSE}
PARALLEL = {TRUE | FALSE}
READSIZE = n
RESUMABLE = {TRUE | FALSE}
RESUMABLE_NAME = 'text string'
RESUMABLE_TIMEOUT = n
ROWS = n
SILENT = {HEADER | FEEDBACK | ERRORS | DISCARDS | PARTITIONS | ALL}
SKIP = n
SKIP_INDEX_MAINTENANCE = {TRUE | FALSE}
SKIP_UNUSABLE_INDEXES = {TRUE | FALSE}
STREAMSIZE = n

The following is an example use of the OPTIONS clause that you could use in a
SQL*Loader control file:

OPTIONS (BINDSIZE=100000, SILENT=(ERRORS, FEEDBACK) )

SQL*Loader Control File

The control file is a text file written in a language that SQL*Loader understands. The
control file tells SQL*Loader where to find the data, how to parse and interpret the
data, where to insert the data, and more.

Although not precisely defined, a control file can be said to have three sections,

The first section contains session-wide information, for example:


 Global options such as bindsize, rows, records to skip, and so on
 INFILE clauses to specify where the input data is located
 Data to be loaded

The second section consists of one or more INTO TABLE blocks. Each of these
blocks contains information about the table into which the data is to be loaded.

The third section is optional and, if present, contains input data.
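A hedged, minimal sketch of such a control file (the file names, table, and columns are hypothetical):

LOAD DATA
INFILE 'emp_data.csv'
BADFILE 'emp_data.bad'
DISCARDFILE 'emp_data.dsc'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(empno, ename, sal)

It could then be invoked from the command line as, for example, sqlldr scott/tiger CONTROL=emp_data.ctl LOG=emp_data.log.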

Input Data and Data files

SQL*Loader reads data from one or more files (or operating system equivalents
of files) specified in the control file. From SQL*Loader's perspective, the data in the
datafile is organized as records. A particular datafile can be in fixed record format,
variable record format, or stream record format. The record format can be specified in
the control file with the INFILE parameter. If no record format is specified, the default
is stream record format.

LOBFILEs and Secondary Datafiles (SDFs)

LOB data can be lengthy enough that it makes sense to load it from a LOBFILE.
In LOBFILEs, LOB data instances are still considered to be in fields (predetermined
size, delimited, length-value), but these fields are not organized into records (the
concept of a record does not exist within LOBFILEs). Therefore, the processing
overhead of dealing with records is avoided. This type of organization of data is ideal
for LOB loading.

For example, you might use LOBFILEs to load employee names, employee IDs, and
employee resumes. You could read the employee names and IDs from the main
datafiles and you could read the resumes, which can be quite lengthy, from LOBFILEs.
You might also use LOBFILEs to facilitate the loading of XML data. You can use
XML columns to hold data that models structured and semistructured data. Such data
can be quite lengthy.

Secondary datafiles (SDFs) are similar in concept to primary datafiles. Like primary datafiles, SDFs are a collection of records, and each record is made up of
fields. The SDFs are specified on a per control-file-field basis. Only a
collection_fld_spec can name an SDF as its data source.

SDFs are specified using the SDF parameter. The SDF parameter can be
followed by either the file specification string, or a FILLER field that is mapped to a
data field containing one or more file specification strings.

Supported Collection Types

SQL*Loader supports loading of the following two collection types:


 Nested Tables
 VARRAYs

Supported LOB Types

A LOB is a large object type. This release of SQL*Loader supports loading of four
LOB types:

 BLOB: a LOB containing unstructured binary data


 CLOB: a LOB containing character data
 NCLOB: a LOB containing characters in a database national character set
 BFILE: a BLOB stored outside of the database tablespaces in a server-side
operating system file

Partitioned Object Support

 A single partition of a partitioned table


 All partitions of a partitioned table
 A non-partitioned table

View

 A view is basically developed for our convenience; it provides a level of abstraction.
 When we have scenario like,
o How can I rename the column?
o How can I change the order of column?
o How can I add column in middle of table?

 If you think about all of the above scenarios, the use of a view becomes clear.
 If you have a complex SQL query and you want to use it as a simple query at the application level, put that complex query in a VIEW; after that you can query the view as a simple query at the application level (see the example below).
 A view is nothing more than a stored query. It will run no slower and no faster than the same query written directly against the base tables.

CREATE VIEW VIEW_NAME

AS

SELECT…..;
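For instance (a minimal sketch against the emp table), a view can rename columns and change their order without touching the base table:

CREATE VIEW emp_v (employee_no, name, monthly_sal) AS
SELECT empno, ename, sal
FROM emp;

SELECT * FROM emp_v; -- columns appear renamed and reordered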

Force VIEW

A view can be created even if the defining query of the view cannot be executed, as
long as the CREATE VIEW command has no syntax errors. We call such a view a
view with errors. For example, if a view refers to a non-existent table or an invalid
column of an existing table, or if the owner of the view does not have the required
privileges, then the view can still be created and entered into the data dictionary. You
can only create a view with errors by using the FORCE option of the CREATE VIEW
command:

CREATE FORCE VIEW VIEW_NAME

AS

SELECT …;

When a view is created with errors, Oracle returns a message and leaves the status of
the view as INVALID. If conditions later change so that the query of an invalid view
can be executed, then the view can be recompiled and become valid. Oracle
dynamically compiles the invalid view if you attempt to use it

MATERIALIZED VIEW

CREATE MATERIALIZED VIEW view-name


BUILD [IMMEDIATE | DEFERRED]
REFRESH [FAST | COMPLETE | FORCE]
ON [COMMIT | DEMAND]
[[ENABLE | DISABLE] QUERY REWRITE]
[ON PREBUILT TABLE]
AS
SELECT ….

Use the CREATE MATERIALIZED VIEW LOG statement to create a materialized view log, which is a table associated with the master table of a materialized view.

Note:

The keyword SNAPSHOT is supported in place of MATERIALIZED VIEW for
backward compatibility.

When DML changes are made to master table data, Oracle Database stores rows
describing those changes in the materialized view log and then uses the materialized
view log to refresh materialized views based on the master table. This process is called
incremental or fast refresh. Without a materialized view log, Oracle Database must
re-execute the materialized view query to refresh the materialized view. This process is
called a complete refresh. Usually, a fast refresh takes less time than a complete
refresh.
A materialized view log is located in the master database in the same schema as the
master table. A master table can have only one materialized view log defined on it.
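A minimal sketch (the emp master table is hypothetical; WITH PRIMARY KEY suits materialized views defined over the master's primary key):

CREATE MATERIALIZED VIEW LOG ON emp
WITH PRIMARY KEY;

With this log in place, a materialized view on emp defined with REFRESH FAST can be refreshed incrementally instead of re-executing its defining query.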

The BUILD clause options are shown below.

 IMMEDIATE: The materialized view is populated immediately.


 DEFERRED: The materialized view is populated on the first requested refresh.

The following refresh types are available.

 FAST :
o FAST indicate incremental refresh method which perform the refresh
according to the changes that occurs to the master table.
 COMPLETE :-
o Oracle will perform COMPLETE refresh even though FAST refresh is
possible.
 FORCE :-
o The default refresh method: Oracle performs a FAST refresh if possible, otherwise a COMPLETE refresh.

A refresh can be triggered in one of two ways.

 ON COMMIT: The refresh is triggered by a committed data change in one of the


dependent tables.
 ON DEMAND: The refresh is initiated by a manual request or a scheduled task.

When a materialized view uses FAST refresh, Oracle must examine the last refresh time of the master table or master materialized view.

A complete refresh occurs when the MV is created with BUILD IMMEDIATE. The user can perform a complete refresh any time after creation. A complete refresh involves executing the query assigned to the MVIEW at creation time. It can be slow, especially when the DB is huge.

An incremental (fast) refresh eliminates rebuilding the MV from scratch, hence it is faster. An MV can refresh either on demand or on a regular time interval assigned to it.

While creating an MV you have the option to specify when the MV should refresh:
 On COMMIT
 On DEMAND

The DBMS_MVIEW package contains three APIs for performing refresh operations:
 DBMS_MVIEW.REFRESH
 DBMS_MVIEW.REFRESH_ALL_MVIEWS
 DBMS_MVIEW.REFRESH_DEPENDENT

For e.g. No:-01

create materialized view emp_6_mv

build deferred

refresh complete

start with (sysdate+20/(60*60*24)) next (sysdate+10/(60*60*24))

on demand

as

select * from emp_6;

begin

dbms_mview.refresh('EMP_6_MV');

end;

drop materialized view emp_6_mv;

For e.g. No: - 02

(The original notes showed this example as a series of screenshots: creating a materialized view log on the master table, creating the materialized view, and then dropping the log with DROP MATERIALIZED VIEW LOG ON <master_table>. The screenshots also showed the output of the MV before and after refreshing it manually.)
Analytical Function

FIRST_VALUE:-

 It returns the first value in an ordered set of values from the analytic window.

For e.g.

select empno,ename,sal

from emp

order by sal desc;

EMPNO ENAME SAL


7934 MILLER 42000
7782 CLARK 32000
7839 KING 3000
7902 FORD 3000
7788 SCOTT 3000
7566 JONES 2975
7698 BLAKE 2850
7499 ALLEN 1600
7844 TURNER 1500
7521 WARD 1250
7654 MARTIN 1250
7876 ADAMS 1100
7900 JAMES 950
7369 SMITH 800

If we write below query we will get following output,

select empno,ename,FIRST_VALUE(sal) over(order by sal desc) as highest_sal

from emp;

EMPNO ENAME HIGHEST_SAL


7934 MILLER 42000
7782 CLARK 42000

7839 KING 42000
7902 FORD 42000
7788 SCOTT 42000
7566 JONES 42000
7698 BLAKE 42000
7499 ALLEN 42000
7844 TURNER 42000
7521 WARD 42000
7654 MARTIN 42000
7876 ADAMS 42000
7900 JAMES 42000
7369 SMITH 42000

select distinct FIRST_VALUE(sal) over(order by sal desc) as highest_sal

from emp;

HIGHEST_SAL
42000

select distinct deptno,FIRST_VALUE(sal) over(partition by deptno order by sal desc)


as highest_sal from emp order by 1;

DEPTNO  HIGHEST_SAL
10      42000
20      3000
30      2850
-       3000

LAST_VALUE:-

 It returns the last value from an ordered set of values in the analytic window.

For e.g.

select distinct LAST_VALUE(sal) over(order by sal desc RANGE BETWEEN
UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as lowest_sal
from emp;

LOWEST_SAL
800

SELECT DISTINCT deptno,LAST_VALUE(sal) OVER (partition by deptno

ORDER BY sal desc

RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS "lowest"

FROM emp;

DEPTNO lowest
10 32000
20 800
30 950
- 3000

NTH_VALUE:-

select empno,ename,sal,deptno

from emp order by sal desc;

EMPNO ENAME SAL DEPTNO
7934 MILLER 42000 10
7782 CLARK 32000 10
7839 KING 3000 -
7902 FORD 3000 20
7788 SCOTT 3000 20
7566 JONES 2975 20
7698 BLAKE 2850 30
7499 ALLEN 1600 30
7844 TURNER 1500 30
7521 WARD 1250 30
7654 MARTIN 1250 30
7876 ADAMS 1100 20
7900 JAMES 950 30
7369 SMITH 800 20

select distinct nth_value(sal,2) over(order by sal desc
RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as RK
from emp;

RK
32000

RANK() & DENSE_RANK():-

RANK():-
 RANK() gives you a ranking within your ordered partition.
 Tied values receive the same RANK, and the subsequent rank(s) are skipped.
 For details, examine the output of the query below.

SELECT emp_no,ename,sal,dno,date_of_join,RANK() OVER (PARTITION BY dno


ORDER by sal DESC) rank_1 FROM sag_test_emp;

DENSE_RANK():-
 DENSE_RANK() gives you a ranking within your ordered partition.
 Tied values receive the same RANK, and no ranks are skipped.
 For details, examine the output of the query below.

SELECT emp_no,ename,sal,dno,date_of_join,DENSE_RANK() OVER (PARTITION


BY dno ORDER by sal DESC) rank_1 FROM sag_test_emp;

LEAD () & LAG ():-

LEAD ():-

 It lets you query more than one row in a table at a time without having to join the table to itself.
 It returns values from the NEXT row.
 To return values from a previous row, use LAG ().

2.9 Example

The LEAD function can be used in Oracle/PLSQL.

Let's look at an example. If we had an orders table that contained the following data:

ORDER_DATE   PRODUCT_ID   QTY
2007/09/25   1000         20
2007/09/26   2000         15
2007/09/27   1000         8
2007/09/28   2000         12
2007/09/29   2000         2
2007/09/30   1000         4

And we ran the following SQL statement:

SELECT product_id, order_date,
LEAD (order_date,1) OVER (ORDER BY order_date) AS next_order_date
FROM orders;

It would return the following result:

PRODUCT_ID   ORDER_DATE   NEXT_ORDER_DATE
1000         2007/09/25   2007/09/26
2000         2007/09/26   2007/09/27
1000         2007/09/27   2007/09/28
2000         2007/09/28   2007/09/29
2000         2007/09/29   2007/09/30
1000         2007/09/30   NULL

2.9.1 Using Partitions

Now let's look at a more complex example where we use a query partition clause to
return the next order_date for each product_id.

Enter the following SQL statement:

SELECT product_id, order_date,
LEAD (order_date,1) OVER (PARTITION BY product_id ORDER BY order_date) AS next_order_date
FROM orders;

It would return the following result:

PRODUCT_ID   ORDER_DATE   NEXT_ORDER_DATE
1000         2007/09/25   2007/09/27
1000         2007/09/27   2007/09/30
1000         2007/09/30   NULL
2000         2007/09/26   2007/09/28
2000         2007/09/28   2007/09/29
2000         2007/09/29   NULL

LAG() :-

 Lets you query more than one row in a table at a time without having to join the
table to itself.
 It returns values from a previous row in the table.
 To return a value from the next row, try using the LEAD function.

Example

The LAG function can be used in Oracle/PLSQL.

Let's look at an example. If we had an orders table that contained the following data:

ORDER_DATE   PRODUCT_ID   QTY
2007/09/25   1000         20
2007/09/26   2000         15
2007/09/27   1000         8
2007/09/28   2000         12
2007/09/29   2000         2
2007/09/30   1000         4

And we ran the following SQL statement:

SELECT product_id, order_date,
LAG (order_date,1) OVER (ORDER BY order_date) AS prev_order_date
FROM orders;

It would return the following result:

PRODUCT_ID   ORDER_DATE   PREV_ORDER_DATE
1000         2007/09/25   NULL
2000         2007/09/26   2007/09/25
1000         2007/09/27   2007/09/26
2000         2007/09/28   2007/09/27
2000         2007/09/29   2007/09/28
1000         2007/09/30   2007/09/29

In this example, the LAG function will sort in ascending order all of the order_date
values in the orders table and then return the previous order_date since we used an
offset of 1.

If we had used an offset of 2 instead, it would have returned the order_date from 2
orders earlier. If we had used an offset of 3, it would have returned the order_date from
3 orders earlier....and so on.

2.9.2 Using Partitions

Now let's look at a more complex example where we use a query partition clause to
return the previous order_date for each product_id.

Enter the following SQL statement:

SELECT product_id, order_date,
LAG (order_date,1) OVER (PARTITION BY product_id ORDER BY order_date) AS prev_order_date
FROM orders;

It would return the following result:

PRODUCT_ID   ORDER_DATE   PREV_ORDER_DATE
1000         2007/09/25   NULL
1000         2007/09/27   2007/09/25
1000         2007/09/30   2007/09/27
2000         2007/09/26   NULL
2000         2007/09/28   2007/09/26
2000         2007/09/29   2007/09/28

LISTAGG ():-

INPUT:

EMP_NO  ENAME  SAL     DNO
1       A      15000   10
2       B      17000   10
3       C      22000   20
4       D      24000   30
5       E      30000   30
6       F      35000   20
7       G      50000   10
8       H      122321  40

DESIRED OUTPUT:

DNO  LIST_OF_EMPLOYEE
10   1,2,7
20   3,6
30   4,5
40   8

SELECT NVL(dno,0) dno, LISTAGG(emp_no,',') WITHIN GROUP (ORDER BY emp_no) AS list_of_employee
FROM sag_test_emp
GROUP BY dno
ORDER BY 1;

Performance Tuning
Performance Tuning include following topics,

 Performance Planning
 Instance Tuning
 SQL Tuning

Performance Planning:-

 Understand Investment Option.


 Understand Scalability.
 Understand System Architecture.
 Understand Application Design.
 Workload Testing, Modeling and Implementation.
 Deploying New Application.

Instance Tuning:-

 While considering instance tuning, we should take care with the initial database design to avoid bottlenecks that could lead to performance issues.
 In addition to this you must consider the following:
o Allocation of memory to DB.
o Determine I/O to DB.
o Tune to OS for optimal performance of DB.

SQL Tuning:-

 Many application programmers consider SQL simply the language used to issue queries to get data.
 When a SQL statement executes, the query optimizer determines the most efficient plan for executing the query.
 The optimizer plays a critical role, since it directly affects execution time.
 You can override the query optimizer's execution plan with a HINT inserted into the SQL statement.

Performance Improvement Method:-

It involves identifying bottlenecks and fixing them. Removing a bottleneck may not improve performance immediately, because another bottleneck may be revealed. Below are the steps to improve Oracle performance.

1. Perform Initial Standard Check:-

 Get feedback from users. Determine the project scope and performance goals for the future.
 Get a full set of operating system, database, and application statistics from the system when performance is both GOOD and BAD.
 Perform a sanity check of the operating systems of all systems involved in user performance.
2. Check for Top Ten most common mistake with Oracle Database.
 Bad connection management
 Bad use of cursor and shared pool
 Bad SQL
 Use of nonstandard initialization parameter
 Getting DB I/O wrong
 Online Redo Log setup problem
 Serialization of data due to lack of Free List, Free List Group,
Transaction Slot or Shortage of Rollback Segment
 Full Table Scan
 High amount of recursive SQL
 Deployment and migration error

3. Build the conceptual model of what is happening in the system.

4. Take remedial actions. The golden rule in performance improvement is: "Change only one thing at a time, and then measure the difference." If multiple changes are applied at the same time, ensure that they are isolated, so that the effect of each change can be effectively validated.

5. Validate the changes and see whether user perception has improved. Otherwise look for other bottlenecks until your understanding of the application becomes more accurate.
6. Repeat the last three steps until the performance goal is met.

Introduction to Performance Tuning Features & Tool

 Data collection and analysis is essential for identifying and correcting performance problems.
 Oracle Database provides various tools to monitor performance, diagnose problems, and tune the application.
 In Oracle Database the information gathering and monitoring process is automatic, managed by Oracle background processes.
 To enable statistics collection and the automatic performance features, set STATISTICS_LEVEL = TYPICAL or ALL (see the sketch after this list).
 For ease of use, Oracle Enterprise Manager Database Control is recommended.
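For illustration, the parameter can be checked and set like this (a sketch; changing it system-wide requires the ALTER SYSTEM privilege):

SHOW PARAMETER statistics_level  -- SQL*Plus
ALTER SYSTEM SET statistics_level = TYPICAL;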

Automatic Performance Tuning Features

 Automatic Workload Repository (AWR)


 Automatic Database Diagnostic Monitoring(ADDM)
 SQL Tuning Advisor
 SQL Access Advisor
 End to End Application Tracing
 DBA Guide
 Performance Tuning Guide

 Automatic Workload Repository (AWR):-

o It collects, processes, and maintains performance statistics for problem detection and self-tuning purposes.
o Data is gathered from both memory and the database.
o The gathered data can be displayed in both views and reports.

Statistics collected and processed by AWR include the following:

o Object statistics that determine both access and usage statistics of database segments.
o Time model statistics.
o Some of the system and session statistics collected in the V$SYSSTAT and V$SESSTAT views.
o SQL statements that produce the highest load on the system, based on criteria such as elapsed time and CPU time.
o ASH statistics, which represent the history of sessions.

While studying AWR we also need to go through the following topics,

o Snapshot
o Baseline
o Adaptive Threshold
o Space Consumption

SNAPSHOT:-

o A SNAPSHOT is a set of historical data for a specific time period that is used for performance comparison by ADDM.
o By default, Oracle Database automatically generates a snapshot every hour and retains the statistics in the workload repository for the next 8 days.
o Data in snapshots is analyzed by ADDM.

Baseline:-

o A baseline contains data from a specific time period that is preserved for comparison with other, similar workload periods when a performance issue occurs.
o The following types of baselines are available in Oracle Database:
o Fixed Baseline
o Moving Window Baseline
o Baseline Templates
o Fixed Baseline:-
o For this type, you specify a fixed time period when creating the baseline.
o Be careful while choosing the time period, since it should represent the system operating at an optimal level.
o In the future you can refer to this baseline and compare it with other baselines or snapshots taken during periods of poor performance.

Moving Window Baseline:-

o It represents the AWR data that exists within the AWR retention period.
o It is useful when you are using adaptive thresholds.
o Oracle DB automatically maintains the system-defined moving window baseline.
o The default window size is 8 days.
o If you are planning to use adaptive thresholds, consider a larger moving window, such as 30 days.
o Therefore, to increase the size of the moving window, you must first increase the AWR retention period accordingly (see the sketch below).
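A hedged sketch using the DBMS_WORKLOAD_REPOSITORY API (window_size is in days; this assumes the AWR retention period has already been raised to at least 30 days):

BEGIN
DBMS_WORKLOAD_REPOSITORY.MODIFY_BASELINE_WINDOW_SIZE (window_size => 30);
END;
/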

Baseline Template:-

o You can use a baseline template when you want to create a baseline for a contiguous time period in the future.
o There are two types of baseline templates:
 Single
 Repeating
o A single baseline template is useful if you know the time period in advance.
o For example, you may want to capture the AWR data during a system test that is scheduled for the upcoming weekend. In this case, you can create a single baseline template to capture the time period when the test occurs.
o A repeating baseline template is useful when you want Oracle DB to capture a contiguous time period on an ongoing basis.
o For e.g. if you want to capture AWR data every Monday morning for a month, you can use a repeating baseline template.

Adaptive Threshold:-

o Adaptive thresholds enable you to monitor and detect performance issues.
o Adaptive thresholds can automatically set warning and critical alert thresholds for some system metrics, using statistics derived from the moving window baseline.
o The statistics for these thresholds are recomputed weekly.
o For example, many databases support an online transaction processing (OLTP) workload during the day and batch processing at night.
o A performance metric such as response time per transaction can be useful for detecting degradation in OLTP performance during the day.
o However, a threshold value appropriate for OLTP is usually too low for batch workloads, so it might trigger frequent false alerts at night.
o Adaptive thresholds can detect such a workload pattern and automatically set different threshold values for the daytime and nighttime.
o There are two types of adaptive thresholds:
 Percentage of Maximum:-
 The threshold value is computed as a percentage multiple of the maximum value observed for the data in the moving window baseline.
 Significance Level:-
 The threshold value is set to a percentile, so that values observed above it are considered unusual.
 You can specify the following percentiles:
o High (.95):- only 5 out of 100 observations are expected to exceed this value.
o Very High (.99):- only 1 out of 100 observations are expected to exceed this value.
o Severe (.999):- only 1 out of 1,000 observations are expected to exceed this value.
o Extreme (.9999):- only 1 out of 10,000 observations are expected to exceed this value.
o A Percentage of Maximum threshold is useful when the system is at peak workload, and that is when you want to be alerted.
o A Significance Level threshold should be used when the system is operating normally but might vary over a wide range when it performs poorly.

Space Consumption:-

The space consumed by AWR is determined by several factors:

 The number of active sessions in the system at any given time.
 Snapshot interval:-
o The snapshot interval determines the frequency at which snapshots are captured.
o A smaller snapshot interval increases the frequency, which increases the volume of data collected by AWR.
 Historical data retention period:-
o The retention period determines how long data is retained before being purged. A longer retention period increases the space consumed by AWR.
 By default, a snapshot is captured once an hour and retained in the database for 8 days. With these settings, a system with 10 concurrent active sessions can require approximately 200 to 300 MB of space for AWR data.
 AWR space consumption can be reduced by increasing the snapshot interval (for example, from 1 hour to 5 hours) and by reducing the retention period.
 Not having enough data can affect the validity and accuracy of the following components:
o Automatic Database Diagnostic Monitor ( ADDM )
o SQL Tuning Advisor
o Undo Advisor
o Segment Advisor
 If possible, Oracle recommends that you set the AWR retention period large enough to capture at least one complete workload cycle.
 Under exceptional circumstances, you can turn off automatic snapshot collection by setting the snapshot interval to zero (0). Under this condition, automatic collection of workload and statistical data stops, and in addition you cannot manually create snapshots.

5.3 Managing the Automatic Workload Repository

This section describes how to manage the Automatic Workload Repository and
contains the following topics:

 Managing Snapshots
 Managing Baselines
 Transporting Automatic Workload Repository Data
 Using Automatic Workload Repository Views
 Generating Automatic Workload Repository Reports
 Generating Active Session History Reports

For a description of the Automatic Workload Repository, see "Overview of the


Automatic Workload Repository".
5.3.1 Managing Snapshots
By default, Oracle Database generates snapshots once every hour, and retains the statistics in the workload repository for 8 days. When necessary, you can use DBMS_WORKLOAD_REPOSITORY procedures to manually create, drop, and modify the snapshots. To invoke these procedures, a user must be granted the DBA role. For more information about snapshots, see "Snapshots".
The primary interface for managing the Automatic Workload Repository is Oracle
Enterprise Manager. Whenever possible, you should manage snapshots using Oracle
Enterprise Manager, as described in Oracle Database 2 Day + Performance Tuning

Guide. If Oracle Enterprise Manager is unavailable, you can manage the AWR
snapshots and baselines using the DBMS_WORKLOAD_REPOSITORY package,
as described in this section.
This section contains the following topics:
 Creating Snapshots
 Dropping Snapshots
 Modifying Snapshot Settings
See Also:
Oracle Database PL/SQL Packages and Types Reference for detailed information on
the DBMS_WORKLOAD_REPOSITORY package
5.3.1.1 Creating Snapshots
You can manually create snapshots with the CREATE_SNAPSHOT procedure if you
want to capture statistics at times different than those of the automatically generated
snapshots. For example:
BEGIN
DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();
END;
/
In this example, a snapshot for the instance is created immediately with the flush level
specified to the default flush level of TYPICAL. You can view this snapshot in the
DBA_HIST_SNAPSHOT view.
5.3.1.2 Dropping Snapshots
You can drop a range of snapshots using the DROP_SNAPSHOT_RANGE
procedure. To view a list of the snapshot Ids along with database Ids, check the
DBA_HIST_SNAPSHOT view. For example, you can drop the following range of
snapshots:
BEGIN
DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE (low_snap_id
=> 22,
high_snap_id => 32, dbid => 3310949047);
END;
/
In the example, the range of snapshot Ids to drop is specified from 22 to 32. The
optional database identifier is 3310949047. If you do not specify a value for dbid, the
local database identifier is used as the default value.
Active Session History data (ASH) that belongs to the time period specified by the
snapshot range is also purged when the DROP_SNAPSHOT_RANGE procedure is
called.
5.3.1.3 Modifying Snapshot Settings

You can adjust the interval, retention, and captured Top SQL of snapshot generation
for a specified database Id, but note that this can affect the precision of the Oracle
diagnostic tools.
The INTERVAL setting affects how often in minutes that snapshots are automatically
generated. The RETENTION setting affects how long in minutes that snapshots are
stored in the workload repository. The TOPNSQL setting affects the number of Top
SQL to flush for each SQL criteria (Elapsed Time, CPU Time, Parse Calls, Shareable
Memory, and Version Count). The value for this setting will not be affected by the
statistics/flush level and will override the system default behavior for the AWR SQL
collection. It is possible to set the value for this setting to MAXIMUM to capture the
complete set of SQL in the cursor cache, though doing so (or setting the value to a very high number) may lead to space and performance issues, since there will be more data to collect and store. To adjust the settings, use the
MODIFY_SNAPSHOT_SETTINGS procedure. For example:
BEGIN

DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS( retentio
n => 43200,
interval => 30, topnsql => 100, dbid => 3310949047);
END;
/
In this example, the retention period is specified as 43200 minutes (30 days), the
interval between each snapshot is specified as 30 minutes, and the number of Top SQL
to flush for each SQL criteria as 100. If NULL is specified, the existing value is
preserved. The optional database identifier is 3310949047. If you do not specify a
value for dbid, the local database identifier is used as the default value. You can check
the current settings for your database instance with the DBA_HIST_WR_CONTROL
view.
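A quick way to confirm the current settings (a minimal sketch against the documented
DBA_HIST_WR_CONTROL view):
SELECT dbid, snap_interval, retention, topnsql
FROM dba_hist_wr_control;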
5.3.2 Managing Baselines
This section describes how to manage baselines. For more information about baselines,
see "Baselines".
The primary interface for managing snapshots is Oracle Enterprise Manager.
Whenever possible, you should manage snapshots using Oracle Enterprise Manager, as
described in Oracle Database 2 Day + Performance Tuning Guide. If Oracle
Enterprise Manager is unavailable, you can manage snapshots using the
DBMS_WORKLOAD_REPOSITORY package, as described in the following
sections:
 Creating a Baseline
 Dropping a Baseline
5.3.2.1 Creating a Baseline
This section describes how to create a baseline using an existing range of snapshots.
To create a baseline:
1. Review the existing snapshots in the DBA_HIST_SNAPSHOT view to
determine the range of snapshots that you want to use.
2. Use the CREATE_BASELINE procedure to create a baseline using the desired
range of snapshots:
BEGIN
DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE (start_snap_id => 270,
  end_snap_id => 280, baseline_name => 'peak baseline',
  dbid => 3310949047, expiration => 30);
END;
/
In this example, 270 is the start snapshot sequence number and 280 is the end snapshot
sequence. The name of baseline is peak baseline. The optional database identifier is
3310949047. If you do not specify a value for dbid, the local database identifier is
used as the default value. The optional expiration parameter is set to 30, so the
baseline will expire and be dropped automatically after 30 days. If you do not specify a
value for expiration, the baseline will never expire.
The system automatically assigns a unique baseline Id to the new baseline when the
baseline is created. The baseline Id and database identifier are displayed in the
DBA_HIST_BASELINE view.
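To review existing baselines before or after creation, a minimal sketch against the
DBA_HIST_BASELINE view:
SELECT baseline_id, baseline_name, start_snap_id, end_snap_id
FROM dba_hist_baseline;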
5.3.2.2 Dropping a Baseline
This section describes how to drop an existing baseline. Periodically, you may want to
drop a baseline that is no longer used to conserve disk space. The snapshots associated
with a baseline are retained indefinitely until you explicitly drop the baseline or the
baseline has expired.
To drop a baseline:
1. Review the existing baselines in the DBA_HIST_BASELINE view to determine
the baseline that you want to drop.
2. Use the DROP_BASELINE procedure to drop the desired baseline:
BEGIN
DBMS_WORKLOAD_REPOSITORY.DROP_BASELINE (baseline_name => 'peak baseline',
  cascade => FALSE, dbid => 3310949047);
END;
/
In the example, the name of baseline is peak baseline. The cascade parameter is set to
FALSE, which specifies that only the baseline is dropped. Setting this parameter to
TRUE specifies that the drop operation will also remove the snapshots associated with
the baseline. The optional dbid parameter specifies the database identifier, which in
this example is 3310949047. If you do not specify a value for dbid, the local database
identifier is used as the default value.
5.3.3 Transporting Automatic Workload Repository Data
Oracle Database enables you to transport AWR data between systems. This is useful in
cases where you want to use a separate system to perform analysis of the AWR data.
To transport AWR data, you need to first extract the AWR snapshot data from the
database on the source system, then load the data into the database on the target
system, as described in the following sections:
 Extracting AWR Data
 Loading AWR Data
5.3.3.1 Extracting AWR Data
The awrextr.sql script extracts the AWR data for a range of snapshots from the
database into a Data Pump export file. Once created, this dump file can be transported
to another system where the extracted data can be loaded. To run the awrextr.sql
script, you need to be connected to the database as the SYS user.
To extract AWR data:
1. At the SQL prompt, enter:
@$ORACLE_HOME/rdbms/admin/awrextr.sql
A list of the databases in the AWR schema is displayed.
2. Specify the database from which the AWR data will be extracted:
Enter value for db_id: 1377863381
In this example, the database with the database identifier of 1377863381 is selected.
3. Specify the number of days for which you want to list snapshot Ids.
Enter value for num_days: 2
A list of existing snapshots for the specified time range is displayed. In this example,
snapshots captured in the last 2 days are displayed.
4. Define the range of snapshots for which AWR data will be extracted by specifying
a beginning and ending snapshot Id:
Enter value for begin_snap: 30
Enter value for end_snap: 40
In this example, the snapshot with a snapshot Id of 30 is selected as the beginning
snapshot, and the snapshot with a snapshot Id of 40 is selected as the ending snapshot.
5. A list of directory objects is displayed. Specify the directory object pointing to the
directory where the export dump file will be stored:
Enter value for directory_name: DATA_PUMP_DIR
In this example, the directory object DATA_PUMP_DIR is selected.
6. Specify the prefix for the name of the export dump file (the .dmp suffix will be
appended automatically):
Enter value for file_name: awrdata_30_40
In this example, an export dump file named awrdata_30_40 will be created in the
directory corresponding to the directory object you specified:
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
C:\ORACLE\PRODUCT\11.1.0.5\DB_1\RDBMS\LOG\AWRDATA_30_40.DMP
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at 08:58:20
Depending on the amount of AWR data that needs to be extracted, the AWR extract
operation may take a while to complete. Once the dump file is created, you can use
Data Pump to transport the file to another system.
See Also:
Oracle Database Utilities for information about using Data Pump
5.3.3.2 Loading AWR Data
Once the export dump file is transported to the target system, you can load the
extracted AWR data using the awrload.sql script. The awrload.sql script will first
create a staging schema where the snapshot data is transferred from the Data Pump file
into the database. The data is then transferred from the staging schema into the
appropriate AWR tables. To run the awrload.sql script, you need to be connected to
the database as the SYS user.
To load AWR data:
1. At the SQL prompt, enter:
@$ORACLE_HOME/rdbms/admin/awrload.sql
A list of directory objects is displayed.
2. Specify the directory object pointing to the directory where the export dump file
is located:
Enter value for directory_name: DATA_PUMP_DIR
In this example, the directory object DATA_PUMP_DIR is selected.
3. Specify the prefix for the name of the export dump file (the .dmp suffix will be
appended automatically):
Enter value for file_name: awrdata_30_40
In this example, the export dump file named awrdata_30_40 is selected.
4. Specify the name of the staging schema where the AWR data will be loaded:
Enter value for schema_name: AWR_STAGE
In this example, a staging schema named AWR_STAGE will be created where the
AWR data will be loaded.
5. Specify the default tablespace for the staging schema:
Enter value for default_tablespace: SYSAUX
In this example, the SYSAUX tablespace is selected.
6. Specify the temporary tablespace for the staging schema:
Enter value for temporary_tablespace: TEMP
In this example, the TEMP tablespace is selected.
7. The AWR data is loaded into the AWR_STAGE staging schema and is then
transferred into the AWR tables in the SYS schema:
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Completed 113 CONSTRAINT objects in 11 seconds
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Completed 1 REF_CONSTRAINT objects in 1 seconds
Job "SYS"."SYS_IMPORT_FULL_03" successfully completed at 09:29:30
... Dropping AWR_STAGE user
End of AWR Load
Depending on the amount of AWR data that needs to be loaded, the AWR load
operation may take a while to complete. After the AWR data is loaded, the staging
schema will be dropped automatically.
5.3.4 Using Automatic Workload Repository Views
Typically, you would view the AWR data through Oracle Enterprise Manager or AWR
reports. However, you can also view the statistics with the following views:
 V$ACTIVE_SESSION_HISTORY
This view displays active database session activity, sampled once every second. See
"Active Session History (ASH)".
 V$ metric views provide metric data to track the performance of the system
The metric views are organized into various groups, such as event, event class, system,
session, service, file, and tablespace metrics. These groups are identified in the
V$METRICGROUP view.
 DBA_HIST views
The DBA_HIST views contain historical data stored in the database. This group of
views includes:
o DBA_HIST_ACTIVE_SESS_HISTORY displays the history of the
contents of the in-memory active session history for recent system activity.
o DBA_HIST_BASELINE displays information about the baselines
captured on the system
o DBA_HIST_DATABASE_INSTANCE displays information about the
database environment
o DBA_HIST_SNAPSHOT displays information on snapshots in the
system
o DBA_HIST_SQL_PLAN displays the SQL execution plans
o DBA_HIST_WR_CONTROL displays the settings for controlling AWR
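As a quick illustration of these views, the following sketch samples the top wait events
of the last hour from V$ACTIVE_SESSION_HISTORY (adjust the time window as
needed):
SELECT event, COUNT(*) AS samples
FROM v$active_session_history
WHERE sample_time > SYSDATE - 1/24
GROUP BY event
ORDER BY samples DESC;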
See Also:
Oracle Database Reference for information on dynamic and static data dictionary
views
5.3.5 Generating Automatic Workload Repository Reports
An AWR report shows data captured between two snapshots (or two points in time).
The AWR reports are divided into multiple sections. The HTML report includes links
that can be used to navigate quickly between sections. The content of the report
contains the workload profile of the system for the selected range of snapshots.
The primary interface for generating AWR reports is Oracle Enterprise Manager.
Whenever possible, you should generate AWR reports using Oracle Enterprise
Manager, as described in Oracle Database 2 Day + Performance Tuning Guide. If
Oracle Enterprise Manager is unavailable, you can generate AWR reports by running
SQL scripts:
 The awrrpt.sql SQL script generates an HTML or text report that displays
statistics for a range of snapshot Ids.
 The awrrpti.sql SQL script generates an HTML or text report that displays
statistics for a range of snapshot Ids on a specified database and instance.
 The awrsqrpt.sql SQL script generates an HTML or text report that displays
statistics of a particular SQL statement for a range of snapshot Ids. Run this report to
inspect or debug the performance of a SQL statement.
 The awrsqrpi.sql SQL script generates an HTML or text report that displays
statistics of a particular SQL statement for a range of snapshot Ids on a specified
database and instance. Run this report to inspect or debug the performance of a SQL
statement on a specific database and instance.
 The awrddrpt.sql SQL script generates an HTML or text report that compares
detailed performance attributes and configuration settings between two selected time
periods.
 The awrddrpi.sql SQL script generates an HTML or text report that compares
detailed performance attributes and configuration settings between two selected time
periods on a specific database and instance.
Note:
To run these scripts, you must be granted the DBA role.
If you run a report on a database that does not have any workload activity during the
specified range of snapshots, calculated percentages for some report statistics can be
less than 0 or greater than 100. This result simply means that there is no meaningful
value for the statistic.
5.3.5.1 Running the awrrpt.sql Report
To generate an HTML or text report for a range of snapshot Ids, run the awrrpt.sql
script at the SQL prompt:
@$ORACLE_HOME/rdbms/admin/awrrpt.sql
First, you need to specify whether you want an HTML or a text report.
Enter value for report_type: text
Specify the number of days for which you want to list snapshot Ids.
Enter value for num_days: 2
After the list displays, you are prompted for the beginning and ending snapshot Id for
the workload repository report.
Enter value for begin_snap: 150
Enter value for end_snap: 160
Next, accept the default report name or enter a report name. The default name is
accepted in the following example:
Enter value for report_name:
Using the report name awrrpt_1_150_160
The workload repository report is generated.
5.3.5.2 Running the awrrpti.sql Report
To specify a database and instance before entering a range of snapshot Ids, run the
awrrpti.sql script at the SQL prompt to generate an HTML or text report:
@$ORACLE_HOME/rdbms/admin/awrrpti.sql
First, specify whether you want an HTML or a text report. After that, a list of the
database identifiers and instance numbers displays, similar to the following:
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DB Id Inst Num DB Name Instance Host
----------- -------- ------------ ------------ ------------
3309173529 1 MAIN main dlsun1690
3309173529 1 TINT251 tint251 stint251
Enter the values for the database identifier (dbid) and instance number (inst_num) at
the prompts.
Enter value for dbid: 3309173529
Using 3309173529 for database Id
Enter value for inst_num: 1
Next you are prompted for the number of days and snapshot Ids, similar to the
awrrpt.sql script, before the text report is generated. See "Running the awrrpt.sql
Report".
5.3.5.3 Running the awrsqrpt.sql Report
To generate an HTML or text report for a particular SQL statement, run the
awrsqrpt.sql script at the SQL prompt:
@$ORACLE_HOME/rdbms/admin/awrsqrpt.sql
First, you need to specify whether you want an HTML or a text report.
Enter value for report_type: text
Specify the number of days for which you want to list snapshot Ids.
Enter value for num_days: 1
After the list displays, you are prompted for the beginning and ending snapshot Id for
the workload repository report.
Enter value for begin_snap: 146
Enter value for end_snap: 147
Specify the SQL Id of a particular SQL statement to display statistics.
Enter value for sql_id: 2b064ybzkwf1y
Next, accept the default report name or enter a report name. The default name is
accepted in the following example:
Enter value for report_name:
Using the report name awrsqlrpt_1_146_147.txt
The workload repository report is generated.
5.3.5.4 Running the awrsqrpi.sql Report
To specify a database and instance before entering a particular SQL statement Id, run
the awrsqrpi.sql script at the SQL prompt to generate an HTML or text report:
@$ORACLE_HOME/rdbms/admin/awrsqrpi.sql
First, you need to specify whether you want an HTML or a text report.
Enter value for report_type: text
Next, a list of the database identifiers and instance numbers displays, similar to the
following:
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DB Id Inst Num DB Name Instance Host
----------- -------- ------------ ------------ ------------
3309173529 1 MAIN main dlsun1690
3309173529 1 TINT251 tint251 stint251
Enter the values for the database identifier (dbid) and instance number (inst_num) at
the prompts.
Enter value for dbid: 3309173529
Using 3309173529 for database Id
Enter value for inst_num: 1
Using 1 for instance number
Next you are prompted for the number of days, snapshot Ids, SQL Id and report name,
similar to the awrsqrpt.sql script, before the text report is generated. See "Running the
awrsqrpt.sql Report".
5.3.5.5 Running the awrddrpt.sql Report
To compare detailed performance attributes and configuration settings between two
time periods, run the awrddrpt.sql script at the SQL prompt to generate an HTML or
text report:
@$ORACLE_HOME/rdbms/admin/awrddrpt.sql
First, you need to specify whether you want an HTML or a text report.
Enter value for report_type: text
Specify the number of days for which you want to list snapshot Ids for the first time
period.
Enter value for num_days: 2
After the list displays, you are prompted for the beginning and ending snapshot Id for
the first time period.
Enter value for begin_snap: 102
Enter value for end_snap: 103
Next, specify the number of days for which you want to list snapshot Ids for the second
time period.
Enter value for num_days2: 1
After the list displays, you are prompted for the beginning and ending snapshot Id for
the second time period.
Enter value for begin_snap2: 126
Enter value for end_snap2: 127
Next, accept the default report name or enter a report name. The default name is
accepted in the following example:
Enter value for report_name:
Using the report name awrdiff_1_102_1_126.txt
The workload repository report is generated.
5.3.5.6 Running the awrddrpi.sql Report
To specify a database and instance before selecting time periods to compare, run the
awrddrpi.sql script at the SQL prompt to generate an HTML or text report:
@$ORACLE_HOME/rdbms/admin/awrddrpi.sql
First, you need to specify whether you want an HTML or a text report.
Enter value for report_type: text
Next, a list of the database identifiers and instance numbers displays, similar to the
following:
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DB Id Inst Num DB Name Instance Host
----------- -------- ------------ ------------ ------------
3309173529 1 MAIN main dlsun1690
3309173529 1 TINT251 tint251 stint251
Enter the values for the database identifier (dbid) and instance number (inst_num) for
the first time period at the prompts.
Enter value for dbid: 3309173529
Using 3309173529 for Database Id for the first pair of snapshots
Enter value for inst_num: 1
Using 1 for Instance Number for the first pair of snapshots
Specify the number of days for which you want to list snapshot Ids for the first time
period.
Enter value for num_days: 2
After the list displays, you are prompted for the beginning and ending snapshot Id for
the first time period.
Enter value for begin_snap: 102
Enter value for end_snap: 103
Next, enter the values for the database identifier (dbid) and instance number
(inst_num) for the second time period at the prompts.
Enter value for dbid2: 3309173529
Using 3309173529 for Database Id for the second pair of snapshots
Enter value for inst_num2: 1
Using 1 for Instance Number for the second pair of snapshots
Specify the number of days for which you want to list snapshot Ids for the second time
period.
Enter value for num_days2: 1
After the list displays, you are prompted for the beginning and ending snapshot Id for
the second time period.
Enter value for begin_snap2: 126
Enter value for end_snap2: 127
Next, accept the default report name or enter a report name. The default name is
accepted in the following example:
Enter value for report_name:
Using the report name awrdiff_1_102_1_126.txt
The workload repository report is generated.
5.3.6 Generating Active Session History Reports
Use Active Session History (ASH) reports to perform analysis of:
 Transient performance problems that typically last for a few minutes
 Scoped or targeted performance analysis by various dimensions or their
combinations, such as time, session, module, action, or SQL_ID
You can view ASH reports using Enterprise Manager or by running the following SQL
scripts:
 The ashrpt.sql SQL script generates an HTML or text report that displays ASH
information for a specified duration.
 The ashrpti.sql SQL script generates an HTML or text report that displays ASH
information for a specified duration for a specified database and instance.
The reports are divided into multiple sections. The HTML report includes links that
can be used to navigate quickly between sections. The content of the report contains
ASH information used to identify blocker and waiter identities and their associated
transaction identifiers and SQL for a specified duration. For more information on ASH,
see "Active Session History (ASH)".
The primary interface for generating ASH reports is Oracle Enterprise Manager.
Whenever possible, you should generate ASH reports using Oracle Enterprise
Manager, as described in Oracle Database 2 Day + Performance Tuning Guide. If
Oracle Enterprise Manager is unavailable, you can generate ASH reports by running
SQL scripts, as described in the following sections:
 Running the ashrpt.sql Report
 Running the ashrpti.sql Report
5.3.6.1 Running the ashrpt.sql Report
To generate a text report of ASH information, run the ashrpt.sql script at the SQL
prompt:
@$ORACLE_HOME/rdbms/admin/ashrpt.sql
First, you need to specify whether you want an HTML or a text report.
Enter value for report_type: text
Specify the time frame to collect ASH information by first specifying the begin time in
minutes prior to the system date.
Enter value for begin_time: -10
Next, enter the duration in minutes that the report for which you want to capture ASH
information from the begin time. The default duration of system date minus begin time
is accepted in the following example:
Enter value for duration:
The report in this example will gather ASH information beginning from 10 minutes
before the current time and ending at the current time. Next, accept the default report
name or enter a report name. The default name is accepted in the following example:
Enter value for report_name:
Using the report name ashrpt_1_0310_0131.txt
The session history report is generated.
5.3.6.2 Running the ashrpti.sql Report
If you want to specify a database and instance before setting the time frame to collect
ASH information, run the ashrpti.sql report at the SQL prompt to generate a text
report:
@$ORACLE_HOME/rdbms/admin/ashrpti.sql
First, specify whether you want an HTML or a text report. After that, a list of the
database Ids and instance numbers displays, similar to the following:
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DB Id Inst Num DB Name Instance Host
----------- -------- ------------ ------------ ------------
3309173529 1 MAIN main dlsun1690
3309173529 1 TINT251 tint251 stint251
Enter the values for the database identifier (dbid) and instance number (inst_num) at
the prompts.
Enter value for dbid: 3309173529
Using 3309173529 for database id
Enter value for inst_num: 1
Next you are prompted for the begin time and duration to capture ASH information,
similar to the ashrpt.sql script, before the report is generated. See "Running the
ashrpt.sql Report".
What is AWR?

 AWR is an Oracle utility through which a DBA or privileged user can create
database snapshots.
 AWR data is stored in the SYSAUX tablespace (the AWR repository), along
with ASH data.
 A DB snapshot is an image of the database state, captured every hour and
retained for 7 to 8 days by default.
 The DBMS_WORKLOAD_REPOSITORY package is used to CREATE,
MODIFY, or DROP snapshots, baselines, etc.
 A DB baseline is a pair of snapshots captured at separate points in time; its
statistics describe optimal DB performance and are used as a reference when
configuring DB settings during performance tuning.
 The MMON background process gathers the statistics from the SGA and
transfers the snapshot data to AWR.
 Oracle features that rely on AWR data include:
o UNDO ADVISOR
o SEGMENT ADVISOR
o SQL TUNING ADVISOR
o ADDM
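If the report scripts are not at hand, an AWR text report can also be generated directly
from SQL with the documented AWR_REPORT_TEXT table function (a minimal
sketch; substitute your own database Id, instance number, and begin/end snapshot Ids):
SELECT output
FROM TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(3310949047, 1, 150, 160));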
How to read AWR Report?
 Elapsed Time:-
o It is the amount of time spent by a SQL statement during execution.
o Note that for a SELECT statement this also includes the time taken to
fetch the query results.
o Oracle cannot know the actual end-to-end response time for a particular
SQL statement, because it cannot measure network latency outside
the instance; hence Oracle introduced "Elapsed Time".
o The formula to calculate Elapsed Time is as follows:
elapsed time = cpu time + user i/o wait time + application_wait_time +
               concurrency_wait_time + cluster_wait_time +
               plsql_exec_time + java_exec_time
This query will show the SQL execution elapsed time duration (in hours) for long-
running SQL statements:
col program format a30
select query_runs.*,
                round ( (end_time - start_time) * 24, 2) as duration_hrs
           from (  select u.username,
                          ash.program,
                          ash.sql_id,
                          ash.sql_plan_hash_value as plan_hash_value,
                          ash.session_id as sess#,
                          ash.session_serial# as sess_ser,
                          cast (min (ash.sample_time) as date) as start_time,
                          cast (max (ash.sample_time) as date) as end_time
                     from dba_hist_active_sess_history ash, dba_users u
                    where u.user_id = ash.user_id and ash.sql_id = lower(trim('&sql_id'))
                 group by u.username,
                          ash.program,
                          ash.sql_id,
                          ash.sql_plan_hash_value,
                          ash.session_id,
                          ash.session_serial#) query_runs
order by sql_id, start_time;
While STATSPACK and AWR reports can easily show the top SQL that ran with the
longest execution time, you can run a dictionary query to see the SQL with the longest
run times:
select
   sql_id,
   child_number,
   sql_text,
   elapsed_time
from
   (select
      sql_id,
      child_number,
      sql_text,
      elapsed_time,
      cpu_time,
      disk_reads,
      rank ()
      over
         (order by elapsed_time desc)
      as
         sql_rank
   from
      v$sql)
where
   sql_rank < 10;
In sum, it is important to note that the SQL elapsed time metric is not the same as the
actual response time for a SQL statement.
The sys_time_model.sql query below can be used to retrieve information from the
dba_hist_sys_time_model view for a particular AWR snapshot interval.

sys_time_model.sql

column "Statistic Name" format A40
column "Time (s)" format 999,999
column "Percent of Total DB Time" format 999,999
 
select e.stat_name "Statistic Name"
     , (e.value - b.value)/1000000 "Time (s)"
     , decode( e.stat_name, 'DB time'
             , to_number(null)
             , 100*(e.value - b.value)
             )
       /
       ( select nvl((e1.value - b1.value), -1)
           from dba_hist_sys_time_model e1
              , dba_hist_sys_time_model b1
          where b1.snap_id         = b.snap_id
            and e1.snap_id         = e.snap_id
            and b1.dbid            = b.dbid
            and e1.dbid            = e.dbid
            and b1.instance_number = b.instance_number
            and e1.instance_number = e.instance_number
            and b1.stat_name       = 'DB time'
            and b1.stat_id         = e1.stat_id
       ) "Percent of Total DB Time"
  from dba_hist_sys_time_model e
     , dba_hist_sys_time_model b
 where b.snap_id         = &pBgnSnap
   and e.snap_id         = &pEndSnap
   and b.dbid            = e.dbid
   and b.instance_number = e.instance_number
   and b.stat_id         = e.stat_id;
 
 
 DB Time:-
o DB Time is the amount of time spent performing DB user-level calls.
o It does not include the time spent on instance background processes such as
PMON.
o The goal when tuning an Oracle process should be to minimize CPU time and
wait time so that more transactions can be processed. This is done by
tuning the SQL.
o DB Time = CPU Time + I/O Time + Non-Idle Wait Time
o DB Time is the total time spent by user processes either Actively Working
or Actively Waiting in a DB call.
From this formula we can conclude that database requests are composed from CPU
(service time, performing some work) and wait time (session is waiting for resources).

select
   to_char(begin_time,'dd.mm.yyyy hh24:mi:ss') begin_time,
   to_char(end_time,'dd.mm.yyyy hh24:mi:ss') end_time,
   intsize_csec interval_size,
   group_id,
   metric_name,
  value
from
   v$sysmetric
where
   metric_name = 'Database Time Per Sec';

Here is a DB time query from v$sysmetric_summary:

select
   maxval,
   minval,
   average,
   standard_deviation
from
   v$sysmetric_summary
where
   metric_name = 'Database Time Per Sec';

Here is another query for DB time from the ASH table:

select
   count(*) DB_TIME
from
   v$active_session_history
where
   session_type = 'FOREGROUND'
and
   sample_time between to_date('30032016 10:00:00','ddmmyyyy hh24:mi:ss')
and
   to_date('30032016 10:30:00','ddmmyyyy hh24:mi:ss');

You can see the current value of DB time for the entire system by querying the
V$SYS_TIME_MODEL view, or you can see it for a given session by using the
V$SESS_TIME_MODEL view as seen here:

select sum(value) "DB time"
from v$sess_time_model
where stat_name = 'DB time';

DB time
----------
109797

Query Optimizer:-

It is built-in software that determines the most efficient way to execute a SQL
statement. This section covers the following topics:

 Optimizer Operation
 Component of Query Optimizer
 Bind Variable Peeking

Optimizer Operation:-

The database can execute a SQL query in many ways, such as a Full Table Scan, Index
Scan, Nested Loop, Hash Join, etc. The optimizer considers many factors related to the
objects and the conditions in the query when determining the execution plan for a SQL
query. This is an important step, since the chosen plan affects execution time.

When a user submits a query for execution, the optimizer performs the following steps:

 It generates potential plans for the SQL statement.
 It then estimates the COST of each plan based on the statistics of the data.
 The optimizer compares the plans and chooses the plan with the lowest cost.
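To see which plan the optimizer chose for a statement, you can use EXPLAIN PLAN
together with DBMS_XPLAN (a minimal sketch; the emp table is only an illustration):

EXPLAIN PLAN FOR
SELECT * FROM emp WHERE deptno = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);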

Component of Query Optimizer:-

It includes three components:

 Query Transformation
 Estimation
 Plan Generation

Query Transformation:-

The optimizer first decides whether to rewrite the submitted query into a semantically
equivalent form that can be processed more efficiently, for example by merging views
or unnesting subqueries.
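For example, the optimizer may unnest an IN subquery into a join; a sketch using
illustrative emp/dept tables:

-- As written by the user:
SELECT e.ename
FROM emp e
WHERE e.deptno IN (SELECT d.deptno FROM dept d WHERE d.loc = 'DALLAS');

-- A semantically equivalent form the optimizer may produce internally:
SELECT e.ename
FROM emp e, dept d
WHERE e.deptno = d.deptno
AND d.loc = 'DALLAS';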

DBMS_PROFILER

 It provides an interface to profile existing PL/SQL applications and identify
performance bottlenecks. You can collect and store PL/SQL profiler data.
 With this package you can get profiling information about all library units
that are executed during the session.
 The profiler gathers information at the PL/SQL Virtual Machine level.
 It contains the following information:
o Total number of times each line was executed.
o Total amount of time spent executing each line.
o Minimum and maximum amount of time spent on a particular
line execution.
 Profiler information is stored in database tables, which you can query later to
build reports.
 The PROFTAB.sql script creates the tables with the columns, data types, and
definitions shown below.
Table 73-1 Columns in Table PLSQL_PROFILER_RUNS

Column           Datatype            Definition
runid            NUMBER PRIMARY KEY  Unique run identifier from plsql_profiler_runnumber
related_run      NUMBER              Runid of related run (for client/server correlation)
run_owner        VARCHAR2(32)        User who started run
run_date         DATE                Start time of run
run_comment      VARCHAR2(2047)      User provided comment for this run
run_total_time   NUMBER              Elapsed time for this run in nanoseconds
run_system_info  VARCHAR2(2047)      Currently unused
run_comment1     VARCHAR2(2047)      Additional comment
spare1           VARCHAR2(256)       Unused

Table 73-2 Columns in Table PLSQL_PROFILER_UNITS

Column          Datatype      Definition
runid           NUMBER        Primary key, references plsql_profiler_runs
unit_number     NUMBER        Primary key, internally generated library unit #
unit_type       VARCHAR2(32)  Library unit type
unit_owner      VARCHAR2(32)  Library unit owner name
unit_name       VARCHAR2(32)  Library unit name
unit_timestamp  DATE          Timestamp on library unit; in the future will be used
                              to detect changes to unit between runs
total_time      NUMBER        Total time spent in this unit in nanoseconds. The
                              profiler does not set this field, but it is provided
                              for the convenience of analysis tools.
spare1          NUMBER        Unused
spare2          NUMBER        Unused
Table 73-3 Columns in Table PLSQL_PROFILER_DATA

Column       Datatype  Definition
runid        NUMBER    Primary key, unique (generated) run identifier
unit_number  NUMBER    Primary key, internally generated library unit number
line#        NUMBER    Primary key, not null, line number in unit
total_occur  NUMBER    Number of times line was executed
total_time   NUMBER    Total time spent executing line in nanoseconds
min_time     NUMBER    Minimum execution time for this line in nanoseconds
max_time     NUMBER    Maximum execution time for this line in nanoseconds
spare1       NUMBER    Unused
spare2       NUMBER    Unused
spare3       NUMBER    Unused
spare4       NUMBER    Unused
Using dbms_profiler

The dbms_profiler package is a built-in set of procedures to capture performance
information from PL/SQL. The dbms_profiler package has these procedures:

 dbms_profiler.start_profiler
 dbms_profiler.flush_data
 dbms_profiler.stop_profiler

The basic idea behind profiling with dbms_profiler is for the developer to understand
where their code is spending the most time, so they can detect and optimize it.  The
profiling utility allows Oracle to collect data in memory structures and then dumps it
into tables as application code is executed.  dbms_profiler is to PL/SQL, what tkprof
and Explain Plan are to SQL. 

Once you have run the profiler, Oracle will place the results inside the dbms_profiler
tables. 

The dbms_profiler procedures are not a part of the base installation of Oracle.  Two
tables need to be installed along with the Oracle supplied PL/SQL package.  In the
$ORACLE_HOME/rdbms/admin directory, two files exist that create the environment
needed for the profiler to execute. 

·     proftab.sql - Creates three tables and a sequence and must be executed before the
profload.sql file.

·     profload.sql - Creates the package header and package body for
DBMS_PROFILER.  This script must be executed as the SYS user.

Oracle - Starting a Profiling Session

The profiler does not begin capturing performance information until the call to
start_profiler is executed.

SQL> exec dbms_profiler.start_profiler ('Test of raise procedure by Scott');

Flushing Data during a Profiling Session

The flush command enables the developer to dump statistics during program execution
without stopping the profiling utility. The only other time Oracle saves data to the
underlying tables is when the profiling session is stopped, as shown below:
SQL> exec dbms_profiler.flush_data();

PL/SQL procedure successfully completed.

Stopping a Profiling Session

Stopping a profiler execution using the Oracle dbms_profiler package is done after an
adequate period of time of gathering performance benchmarks - determined by the
developer. Once the developer stops the profiler, all the remaining (unflushed) data is
loaded into the profiler tables.

SQL> exec dbms_profiler.stop_profiler();

PL/SQL procedure successfully completed.

Oracle dbms_profiler package also provides procedures that suspend and resume
profiling (pause_profiler(), resume_profiler()).
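Putting the pieces together, a minimal end-to-end profiling session might look like this
(assuming proftab.sql and profload.sql have already been run; my_proc is an
illustrative procedure, not from the text):

SQL> exec dbms_profiler.start_profiler('profiling my_proc');
SQL> exec my_proc;
SQL> exec dbms_profiler.stop_profiler();

SQL> select runid, run_comment, run_total_time
     from plsql_profiler_runs
     order by runid;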

select runid, unit_number, line#, total_occur, total_time,
       min_time, max_time
from plsql_profiler_data;
 
 
     RUNID UNIT_NUMBER      LINE# TOTAL_OCCUR TOTAL_TIME   MIN_TIME   MAX_TIME
---------- ----------- ---------- ----------- ---------- ---------- ----------
         1           1          8           3   33284677     539733   28918759
         1           1         80           2    1134222     516266     617955
         1           1         89           0          0          0          0
         1           1         90           0          0          0          0
         1           1         92           0          0          0          0
         1           1         95           0          0          0          0
         1           1        103           0          0          0          0
         1           1        111           0          0          0          0
         1           1        112           0          0          0          0
         1           1        116           1    1441523    1441523    1441523
         1           1        119           0          0          0          0
         1           1        121           1    1431466    1431466    1431466
         1           1        123           1     136330     136330     136330
         1           1        132           1     978895     978895     978895
         1           1        140           0          0          0          0
         1           1        141           0          0          0          0
         1           1        143           0          0          0          0
         1           1        146           1    2905397    2905397    2905397
         1           1        152           2    1622552     574374    1048177
         1           1        153           0          0          0          0
         1           1        157           1     204495     204495     204495
         1           1        160           0          0          0          0

Working with Captured Profiler Data

The profiler utility populates three tables with information: plsql_profiler_runs,
plsql_profiler_units, and plsql_profiler_data.  Each "run" is initiated by a user and
contains zero or more "units".  Each unit contains "data" about its execution - the guts
of the performance data benchmarks.

The performance information for a line in a unit needs to be tied back to the line source
in user_source.  Once that join is made, the developer will have all of the information
that they need to optimize, enhance, and tune their application code, as well as the
SQL.

Using the dbms_profiler Scripts

To extract high-level data, including the length of a particular run, the script
(profiler_runs.sql) below can be executed:

column runid format 990
column type format a15
column run_comment format a20
column object_name format a20
 
select a.runid,
     substr(b.run_comment, 1, 20) as run_comment,
     decode(a.unit_name, '', '<anonymous>',
           substr(a.unit_name,1, 20)) as object_name,
     TO_CHAR(a.total_time/1000000000, '99999.99') as sec,
     TO_CHAR(100*a.total_time/b.run_total_time, '999.9') as pct
     from plsql_profiler_units a, plsql_profiler_runs b
     where a.runid=b.runid
     order by a.runid asc;
 
 
RUNID UNIT_NUMBER OBJECT_NAME          TYPE            SEC       PCT
----- ----------- -------------------- --------------- --------- ------
    1           1 <anonymous>                                .00     .0
    1           2 <anonymous>                               1.01     .0
    1           3 BMC$PKKPKG           PACKAGE BODY      6921.55   18.2
    1           4 <anonymous>                                .02     .0
    2           1 <anonymous>                                .00     .0
    2           2 <anonymous>                                .01     .0
 
Note that anonymous PL/SQL blocks are also included in the profiler tables. 
Anonymous blocks are less useful from a tuning perspective since they cannot be tied
back to a source object in user_source.  Anonymous PL/SQL blocks are simply
runtime source objects and do not have a corresponding dictionary object (package,
procedure, function).  For this reason, the anonymous blocks should be eliminated
from most reports.

From the data displayed above, the next step is to focus on the lines within the package
body, testproc, that are taking the longest.  The script (profiler_top10_lines.sql) below
displays the line numbers and their performance benchmarks of the top 10 worst
performing lines of code.

select line#, total_occur,
  decode (total_occur,null,0,0,0,total_time/total_occur/1000) as avg,
  decode(total_time,null,0,total_time/1000) as total_time,
  decode(min_time,null,0,min_time/1000) as min,
  decode(max_time,null,0,max_time/1000) as max
  from plsql_profiler_data
  where runid = 1  
  and unit_number = 3       -- testproc
  and rownum < 11           -- only show Top 10
  order by total_time desc ;
 
 
     LINE# TOTAL_OCCUR        AVG TOTAL_TIME        MIN        MAX
---------- ----------- ---------- ---------- ---------- ----------
       156           1              5008.457   5008.457   5008.457
        27           1               721.879    721.879    721.879
      2113           1               282.717    282.717    282.717
        89           1               138.565    138.565    138.565
      2002           1               112.863    112.863    112.863
      1233           1                94.984     94.984     94.984
        61           1                94.984     94.984     94.984
       866           1                94.984     94.984     94.984
       481           1                92.749     92.749     92.749
       990           1                90.514     90.514     90.514
 
10 rows selected.

Taking it one step further, the query below (profiler_line_source.sql) will extract the
actual source code for the top 10 worst performing lines. 

 select line#,
  decode (a.total_occur,null,0,0,0,           
  a.total_time/a.total_occur/1000) as Avg,

See code depot
  from plsql_profiler_data a, plsql_profiler_units b, user_source c
     where a.runid       = 1  
     and a.unit_number   = 3
     and a.runid         = b.runid
     and a.unit_number   = b.unit_number
     and b.unit_name     = c.name
     and a.line#         = c.line
     and rownum          < 11  
     order by a.total_time desc ;
 
 
 
 
     LINE#        AVG SOURCE
---------- ---------- --------------------
       156   5008.457   select sum(bytes) into reusable_var from dba_free_space;
        27    721.879   execute immediate dml_str USING  current_time
      2113    282.717   select OBJ#, TYPE# from SYS.OBJ$;
        89    138.565   OBJ_TYPES(BOBJ(I)) := BTYP(I);
      2002    112.863   select count(*) into reusable_var from dba_objects
      1233     94.984   delete from pkk_daily_activity
        61     94.984   update_stats_table(33, reusable_var, null);
       866     94.984   latest_executions := reusable_var - total_executions;
       481     92.749   time_number := hours + round(minutes * 100/60/100,2);
       990     90.514   update_stats_table(45, LOBS, null); 
 
10 rows selected.

Notice from the output above that most of the information needed to diagnose and fix
PL/SQL performance issues is provided.  For lines containing SQL statements, the
tuner can optimize the SQL perhaps by adding optimizer hints, eliminating full table
scans, etc.  Consult Chapter 5 for more details on using tkprof utility to diagnose SQL
issues.

Other useful scripts that are hidden within the Oracle directory structure
($ORACLE_HOME/PLSQL/DEMO) include a few gems that help report and analyze
profiler information.  

·     profdemo.sql -A demo script for collecting PL/SQL profiler data.

·     profsum.sql - A collection of useful SQL scripts that are executed against profiler
tables. 

·     profrep.sql - Creates views and a package (unwrapped) that populates the views
based on the three underlying profiler tables. 

Best Practices for Using dbms_profiler  Everywhere

·     Wrap only for production - Wrapping code is desired for production
environments but not for profiling.  It is much easier to see the unencrypted form of the
text in our reports than it is to connect line numbers to source versions.  Use
dbms_profiler before you wrap your code in a test environment, wrap it, and then put it
in production.     

·     Eliminate system packages most of the time - Knowing the performance data for
internal Oracle processing does not buy you much since you cannot change anything. 
However, knowing the performance problem is within the system packages will save
you some time of trying to tune your own code when the problem is elsewhere.

·     When analyzing lines of code, it is best to concentrate on the following:

·     Lines of code that are frequently executed - For example, a loop that executes
5000 times is a great candidate for tuning.  Guru Oracle tuners typically look for that
"low hanging fruit" in which one line or a group of lines of code are executed much
more than others.  The benefits of tuning one line of code that is executed often far
outweigh tuning those lines that may cost more yet are executed infrequently in
comparison.

·     Lines of code with a high value for average time executed - The minimum and
maximum values of execution time are interesting although not as useful as the
average execution time.  Min and max only tell us how much the execution time varies
depending on database activity.  Line by line, a PL/SQL developer should focus on
those lines that cost the most on an average execution basis.  dbms_profiler does not
provide the average, but it does provide enough data to allow it to be computed (Total
Execution Time / # Times Executed).

·     Lines of code that contain SQL syntax - The main resource consumers are those
lines that execute SQL.  Once the data is sorted by average execution time, the
statements that are the worst usually contain SQL.  Optimize and tune the SQL through
utilities, such as Explain Plan, tkprof, and third party software.

DBLINK

In Oracle PL/SQL, the CREATE DATABASE LINK statement creates a schema
object in one database that enables you to access objects on another database.

(The other database does not have to be an Oracle Database system, but if you intend
access non-Oracle systems you'll need to use Oracle Heterogeneous Services.)

Example Syntax:

CREATE [PUBLIC] DATABASE LINK <link_name>
CONNECT TO <user_name>
IDENTIFIED BY <password>
USING '<service_name>';
Example Usage:

CREATE DATABASE LINK test
CONNECT TO jim IDENTIFIED BY jim
USING 'test';

In the example above, user jim on the local database defines a fixed-user database
link named test that connects to the jim schema on the remote database.
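Once created, a link is used by appending @<link_name> to an object name (a quick
sketch; the emp table on the remote side is illustrative):

SELECT * FROM emp@test;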

General Information

Related Data Dictionary Objects:
link$, all_db_links, dba_db_links, user_db_links,
gv_$dblink, gv_$session_connect_info, ora_kglr7_db_links,
dbms_dblink, dbms_dblink_lib, ku$_dblink_t, ku$_dblink_view,
ku$_10_1_dblink_view, repcat$_repprop_dblink_how,
wmp_api_dblink, wmp_db_links_v

Related Files: $ORACLE_HOME/rdbms/admin/caths.sql

System Privileges:
create database link
create public database link
drop public database link

Init.ora parameters related to Database Links:
global_names (required to be TRUE for replication; if the value of the
GLOBAL_NAMES initialization parameter is TRUE, then the database link must
have the same name as the database to which it connects)
open_links
open_links_per_instance
conn / as sysdba

set linesize 121
col name format a30
col value format a30

SELECT name, value
FROM gv$parameter
WHERE (name LIKE '%link%')
OR (name IN ('global_names', 'dblink_encrypt_login'));
GLOBAL_NAMES: The global_name is made up of the db_name and the db_domain;
the first element (before the first '.') in a global name is treated as the db_name,
and the rest of the global_name is treated as the db_domain.

~ Sybrand Bakker
set linesize 121
col name format a30
col value format a30

SELECT name, value
FROM gv$parameter
WHERE name IN ('db_name', 'db_domain');

col value$ format a40
col comment$ format a40

SELECT *
FROM props$
WHERE name LIKE '%GLOBAL%';

ALTER DATABASE RENAME GLOBAL_NAME TO orabase.psoug.org;

SELECT *
FROM props$
WHERE name LIKE '%GLOBAL%';
Notes:
 The single quotes around the service name are mandatory
 The service name must be in the TNSNAMES.ORA file on the server

 
Create Database Link

Connected User Link:
CREATE [SHARED] [PUBLIC] DATABASE LINK <link_name>
USING '<service_name>';
-- create tnsnames entry for conn_link
conn_link =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = perrito2)(PORT =
1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = orabase)
    )
 )

conn uwclass/uwclass

CREATE DATABASE LINK conn_user
USING 'conn_link';

desc user_db_links

set linesize 121
col db_link format a20
col username format a20
col password format a20
col host format a20

SELECT * FROM user_db_links;

SELECT * FROM all_db_links;
SELECT table_name, tablespace_name FROM user_tables@conn_user;

Current User Link:
CREATE [PUBLIC] DATABASE LINK <link_name>
CONNECT TO CURRENT_USER
USING '<service_name>';

CREATE DATABASE LINK curr_user
CONNECT TO CURRENT_USER
USING 'conn_link';

desc user_db_links

set linesize 121
col db_link format a20
col username format a20
col password format a20
col host format a20

SELECT * FROM user_db_links;

SELECT * FROM all_db_links;

SELECT table_name, tablespace_name FROM user_tables@curr_user;

-- The user who issues this statement must be a global user
-- registered with the LDAP directory service.

Fixed User Link:
CREATE [PUBLIC] DATABASE LINK <link_name>
CONNECT TO <user_name>
IDENTIFIED BY <password>
USING '<service_name>';
CREATE DATABASE LINK fixed_user
CONNECT TO hr IDENTIFIED BY hr
USING 'conn_link';

SELECT * FROM all_db_links;

desc gv$session_connect_info
set linesize 121
set pagesize 60
col authentication_type format a10
col osuser format a25
col network_service_banner format a50 word wrap

SELECT DISTINCT sid FROM gv$mystat;

SELECT authentication_type, osuser, network_service_banner
FROM gv$session_connect_info
WHERE sid = 143;

SELECT table_name, tablespace_name FROM user_tables@fixed_user;

Shared Link:
CREATE SHARED DATABASE LINK <link_name>
AUTHENTICATED BY <schema_name> IDENTIFIED BY <password>
USING '<service_name>';
conn uwclass/uwclass

CREATE SHARED DATABASE LINK shared
CONNECT TO scott IDENTIFIED BY tiger
AUTHENTICATED BY uwclass IDENTIFIED BY uwclass
USING 'conn_link';

SELECT * FROM user_db_links;

SELECT table_name, tablespace_name FROM user_tables@shared;

Public Link:
CREATE PUBLIC DATABASE LINK <link_name>
USING '<service_name>';
conn / as sysdba

CREATE PUBLIC DATABASE LINK publink
USING 'conn_link';

SELECT * FROM dba_db_links;
conn scott/tiger

SELECT table_name, tablespace_name FROM user_tables@publink;

conn sh/sh

SELECT table_name, tablespace_name FROM user_tables@publink;

conn uwclass/uwclass

SELECT table_name, tablespace_name FROM user_tables@publink;

Close Database Link

Close Link:
ALTER SESSION CLOSE DATABASE LINK <link_name>;

ALTER SESSION CLOSE DATABASE LINK curr_user;
 
Drop Database Link

Drop Standard Link:
DROP DATABASE LINK <link_name>;
DROP DATABASE LINK test_link;

Drop Public Link:
DROP PUBLIC DATABASE LINK <link_name>;
DROP PUBLIC DATABASE LINK test_link;

• Logical/Physical Data Model Design

• Oracle Performance Tuning

• Standardizing Enterprise Data Architecture Processes

• Database Schema/Instance Analysis

• Certified in Information Management: Data Quality (DQ), Data Profiling,
Information Lifecycle Management (ILM), Metadata Management, Master Data
Management (MDM), Data Migration, Big Data Exadata
• Prepared Artifacts on Oracle Database 11gr2 Automatic SQL Tuning, SQL/PLSQL
Function Result Cache, Oracle Automatic Parallelism

• Oracle Exadata: Have knowledge of and attended presentations on Exadata
Architecture and Exadata Features: Smart Scan, Smart Flash Cache, Hybrid Columnar
Compression, Infiniband Network, Storage Indexes

• Have undergone training from Wipro Architect Academy on MDM benefits/drivers,
MDM architecture styles, Data Governance, and the MDM life cycle; prepared case
studies for MDM implementation in large Information Management Pharmaceutical
Companies

• Conducted Training Sessions on DB Model Design and Performance Tuning on the
following topics:

• Normalization, Dimensional Modeling

• Erwin Reverse Engineering and Complete Compare operations

• SQL Tuning strategies. Oracle DB Parameter tuning

• DB Performance Reports: ADDM, ASH, AWR, SQL Tuning Advisor, Segment Advisor

• Prepared standard process documentation for Schema Placement and Schema
Creation Processes

• Prepared standard process documentation for executing 3rd Party and Performance
Tuning Projects

• Prepared Logical Physical Data Modeling Guides

• Coded PLSQL Package for performing DDL activities like disabling constraints,
materialized view refreshes, statistics gathering , truncating tables from Functional
User Accounts used by Scheduling/ETL Tools

• Prepared standard documentation for use of Private/Public DB Links
