Tricky PLSQL Notes
1. ORACLE ARCHITECTURE
2. NORMALIZATION
3. ER DIAGRAM
4. OOPS CONCEPTS IN ORACLE
5. CURSOR
6. EXCEPTION HANDLING
7. PROCEDURE
8. FUNCTION
9. PACKAGE
10. TRIGGER
11. COLLECTION
12. PARTITIONING TABLE
13. PRAGMA
14. INDEX
15. HIERARCHICAL QUERY
16. GLOBAL TEMPORARY TABLE
17. EXTERNAL TABLE
18. GRANT & REVOKE
19. BULK COLLECT & FORALL
20. DYNAMIC SQL
21. FLASHBACK QUERY
22. SQL LOADER
23. NOCOPY
24. Materialized View
25. Analytical Functions
26. PERFORMANCE TUNING
27. DBMS_PROFILER
ORACLE ARCHITECTURE
https://docs.oracle.com/cd/E18283_01/server.112/e16508/process.htm
This chapter covers the following topics related to Oracle architecture:
1. Oracle Memory Structure
2. Oracle Background Process
3. Oracle Disk Utilization Structure
The PGA is an area in memory that supports a user's execution, holding items such as:
o Bind variable information
o The sort area
o Other areas used for cursor handling
From our earlier discussion of the shared pool, a DBA should know that the database already stores the parse tree of a recently executed query in a shared area called the library cache. So why does each user need their own area? The reason is to hold the real values of bind variables for the execution of SQL statements.
o DBWR:-
Its primary job is to keep the database buffer cache clean.
DBWR writes to disk when:
A server process cannot find a clean buffer.
A timeout occurs.
A checkpoint occurs.
o LGWR:-
Its primary job is to keep the redo log buffer clean.
LGWR writes to disk when:
A transaction is committed.
A timeout occurs.
The redo log buffer is one-third full.
o SMON:-
SMON primarily cleans up server-side failures.
It wakes up regularly to check whether it is needed.
It recovers transactions marked as DEAD during instance recovery.
All non-committed work is rolled back by SMON.
o PMON:-
PMON primarily cleans up client-side failures.
It wakes up regularly and checks whether it is required.
It detects both server- and client-aborted processes.
o RECO:-
It handles the recovery of distributed transactions against the database.
o ARCH:-
It archives the online redo logs.
o CKPT:-
It handles writing the log sequence number to the datafile headers and the control file.
This offloads work that LGWR would otherwise perform.
o Segment
o Extent
An Oracle database consists of one or more logical storage units called tablespaces, which collectively store all of the database's data.
Each tablespace in an Oracle database consists of one or more files called datafiles, which are physical structures that conform to the operating system in which Oracle is running.
A segment is a set of extents.
Extents contain the data stored within a tablespace.
NORMALIZATION
A database may suffer from three types of anomalies:
INSERT
UPDATE
DELETE
Normalization is the process of organizing the data in a database in such a way that it reduces redundancy & the above 3 types of anomalies.
For e.g.
Suppose we have following table,
Student_Courses(Sid PK,Sname,Phone,Course_Taken)
Where,
SID is Student Id which is Primary Key
Sname is Student Name
Phone is Student Phone Number
Course_Taken is itself a table, which contains:
o Course_Id
o Course_Description
o Credit_Hours
o Grade
Student_Courses
If we want to delete the data of a student, & only that student takes a particular course, then deleting that student's data also loses the respective course data.
According to the 1NF rule the above table is not in 1NF, so we decompose it as below.
Sid   Sname    Phone      Course-id   Course-description      Credit-hours   Grade
100   John     487 2454   IS380       Database Concepts       3              A
100   John     487 2454   IS416       Unix Operating System   3              B
200   Smith    671 8120   IS380       Database Concepts       3              B
200   Smith    671 8120   IS416       Unix Operating System   3              B
200   Smith    671 8120   IS420       Data Net Work           3              C
300   Russell  871 2356   IS417       System Analysis         3              A
Examination of the above Student-courses relation reveals that Sid does not uniquely identify a row (tuple) in the relation and hence cannot be the primary key. For the same reason Course-id cannot be the primary key. However, the combination of Sid and Course-id uniquely identifies a row in Student-courses; therefore (Sid, Course-id) is the primary key of the above relation.
The primary key determines every attribute. For example if you know both Sid and
Course-id for any student you will be able to retrieve Sname, Phone, Course-
description, Credit-hours and Grade, because these attributes are dependent on the
primary key. Figure 1 below is the graphical representation of the functional
dependency between the primary key and attributes of the above relation.
[Figure 1: functional dependency of Sname, Phone, Course-description, Credit-hours and Grade on the primary key (Sid, Course-id)]
Note that the attribute to the right of the arrow is functionally dependent on the attribute on the left of the arrow. Thus the combination (Sid, Course-id) is the determinant (it determines the other attributes), and the attributes Sname, Phone, Course-description, Credit-hours and Grade are dependent attributes.
Formally speaking, a determinant is an attribute or a group of attributes that determines the value of other attributes. In addition to (Sid, Course-id) there are two other determinants in the above Student-courses relation: the Sid and Course-id attributes. Note that Sid alone determines both Sname and Phone, and the attribute Course-id alone determines both the Credit-hours and Course-description attributes.
The attribute Grade is fully functionally dependent on the primary key (Sid, Course-id) because both parts of the primary key are needed to determine Grade. On the other hand, the Sname and Phone attributes are not fully functionally dependent on the primary key, because only a part of the primary key, namely Sid, is needed to determine both Sname and Phone. Also, the attributes Credit-hours and Course-description are not fully functionally dependent on the primary key because only Course-id is needed to determine their values.
The new relation Student-courses still suffers from all three anomalies for the
following reasons:
1. The relation contains redundant data (note that Database Concepts, the course description for IS380, appears in more than one place).
2. The relation contains information about two entities Student and course.
Following is a detailed description of the anomalies that the relation Student-courses suffers from.
1. Insertion anomaly: We cannot add a new course such as IS247 with course description Programming Techniques to the database unless we also add a student who takes the course.
2. Update anomaly: If we change the course description for IS380 from Database Concepts to New_Database_Concepts, we have to make changes in more than one place or else the database will be inconsistent: in some places the course description will be New_Database_Concepts, and anywhere we forgot to make the change the description will still be Database_Concepts.
3. Deletion anomaly: If student Russell is deleted from the database we also lose the information that we had on course IS417 with description System_Analysis.
The above discussion indicates that having a single table Student-courses for our database causes problems (anomalies). Therefore we break the table into smaller tables to get higher normal form relations. Before doing that, let us define the second normal form.
Second normal form: A first normal form relation is in second normal form if all its non-primary attributes are fully functionally dependent on the primary key.
Note that primary attributes are those attributes which are part of the primary key, while non-primary attributes do not participate in the primary key. In the Student-courses relation both Sid and Course-id are primary attributes because they are components of the primary key. However, the attributes Sname, Phone, Course-description, Credit-hours and Grade are all non-primary attributes because none of them is a component of the primary key.
To convert Student-courses to second normal form we have to make all non-primary attributes fully functionally dependent on the primary key. To do that we need to project (that is, break down into two or more relations) the Student-courses table into two or more tables. However, projections may cause problems. To avoid such problems it is important to keep attributes which are dependent on each other in the same table when a relation is projected into smaller relations. Following this principle, examination of Figure 1 indicates that we should divide the Student-courses relation into the following three relations:
PROJECT Student-courses ON (Sid, Sname, Phone) creates a table; call it Student. The relation Student will be Student (Sid:pk, Sname, Phone), and
PROJECT Student-courses ON (Sid, Course-id, Grade) creates a table; call it Student-grade. The relation Student-grade will be Student-grade (Sid:pk1:fk:Student, Course-id:pk2:fk:Courses, Grade), and
PROJECT Student-courses ON (Course-id, Course-Description, Credit-hours) creates a table; call it Courses. Following are these three relations and their contents:
Sid Course-id Grade
100 IS380 A
100 IS416 B
200 IS380 B
200 IS416 B
200 IS420 C
300 IS417 A
All these three relations are in second normal form. Examination of these relations shows that we have eliminated the redundancy in the database. Now the relation Student contains information related only to the entity student, the relation Courses contains information related only to the entity course, and the relation Student-grade contains information related to the relationship between these two entities.
Further, these three relations are free from all anomalies. Let us clarify this in more detail.
Insertion anomaly: Now a new course with course-id IS247 and its course description can be inserted into the table Courses. Equally, we can add any new student to the database by adding their id, name and phone to the Student table. Therefore our database, which is made up of these three tables, does not suffer from the insertion anomaly.
Update anomaly: Since redundancy of the data was eliminated no update anomaly can
occur. To change the course-description for IS380 only one change is needed in table
Courses.
Deletion anomaly: The deletion of student Russell from the database is achieved by deleting Russell's records from both the Student and Student-grade relations, and this does not have any side effect because the course IS417 remains untouched in the table Courses.
Third Normal Form: A second normal form relation is in third normal form if all non-primary attributes (that is, attributes that are not part of the primary key or of any candidate key) are non-transitively dependent on the primary key.
Assume the relation:
STUDENT (Sid: pk, Activity, fee)
Further, Activity ------------> fee; that is, Activity determines the fee.
Table STUDENT is in first normal form because all its attributes are simple. Also
STUDENT is in second normal form because all its non-primary attributes are fully
functionally dependent on the primary key (Sid). Notice that a first normal relation
with non-composite (that is simple) primary key automatically will be in second
normal form because all its non-primary attributes will be fully functionally dependent
on the primary key.
The table STUDENT suffers from all 3 anomalies: a new student cannot be added to the database unless he/she takes an activity, and no activity can be inserted into the database unless we get a student to take that activity. There is redundancy in the table (see Swimming); therefore, to change the fee for Swimming we must make changes in more than one place, and that will cause an update anomaly. If student 300 is deleted from the table we also lose the fact that we had the Golf activity with its fee of 300. To overcome these anomalies the STUDENT table should be converted to smaller tables. Consider the following three projections of the STUDENT relation:
PROJECT STUDENT on [Sid, Activity] and we get a relation; name it STUD_ACT (Sid:pk, Activity) with the following data:
STUD_ACT
ER DIAGRAM
The ER model defines the conceptual view of a database. It works around real-world entities
and the associations among them. At view level, the ER model is considered a good option for
designing databases.
Entity
An entity can be a real-world object, either animate or inanimate, that can be easily identified. For example, in a school database, students, teachers, classes, and courses offered
can be considered as entities. All these entities have some attributes or properties that give
them their identity.
An entity set is a collection of similar types of entities. An entity set may contain entities with attributes sharing similar values. For example, a Students set may contain all the students of a
school; likewise a Teachers set may contain all the teachers of a school from all faculties.
Entity sets need not be disjoint.
Attributes
Entities are represented by means of their properties, called attributes. All attributes have
values. For example, a student entity may have name, class, and age as attributes.
There exists a domain or range of values that can be assigned to attributes. For example, a
student's name cannot be a numeric value. It has to be alphabetic. A student's age cannot be
negative, etc.
Types of Attributes
Simple attribute − Simple attributes are atomic values, which cannot be divided
further. For example, a student's phone number is an atomic value of 10 digits.
Composite attribute − Composite attributes are made of more than one simple
attribute. For example, a student's complete name may have first_name and last_name.
Derived attribute − Derived attributes are the attributes that do not exist in the
physical database, but their values are derived from other attributes present in the
database. For example, average_salary in a department should not be saved directly in
the database; instead it can be derived. As another example, age can be derived from date_of_birth.
Multi-value attribute − Multi-value attributes may contain more than one value. For example, a person can have more than one phone number, email_address, etc.
A key is an attribute or a set of attributes that uniquely identifies an entity among an entity set. For example, the roll_number of a student makes him/her identifiable among students.
Super Key − A set of attributes (one or more) that collectively identifies an entity in
an entity set.
Candidate Key − A minimal super key is called a candidate key. An entity set may
have more than one candidate key.
Primary Key − A primary key is one of the candidate keys chosen by the database
designer to uniquely identify the entity set.
Relationship
The association among entities is called a relationship. For example, an employee works_at a
department, a student enrolls in a course. Here, Works_at and Enrolls are called relationships.
Relationship Set
A set of relationships of similar type is called a relationship set. Like entities, a relationship
too can have attributes. These attributes are called descriptive attributes.
Degree of Relationship
The number of participating entities in a relationship defines the degree of the relationship.
Binary = degree 2
Ternary = degree 3
n-ary = degree n
Mapping Cardinalities
Cardinality defines the number of entities in one entity set, which can be associated with the
number of entities of other set via relationship set.
One-to-one − One entity from entity set A can be associated with at most one entity
of entity set B and vice versa.
One-to-many − One entity from entity set A can be associated with more than one entity of entity set B; however, an entity from entity set B can be associated with at most one entity of A.
Many-to-one − More than one entity from entity set A can be associated with at most one entity of entity set B; however, an entity from entity set B can be associated with more than one entity from entity set A.
Many-to-many − One entity from A can be associated with more than one entity
from B and vice versa.
CURSOR
A cursor is a work area given by Oracle to execute SQL statements. There are essentially two types of cursors:
1. Implicit Cursor
2. Explicit Cursor
Implicit cursors are created by Oracle itself, e.g. for any SELECT statement.
Explicit cursors are created by the user.
%ISOPEN:- Checks whether the cursor is open or not. If the cursor is open it returns TRUE, else it returns FALSE.
%FOUND:- Returns TRUE if a DML statement affected one or more rows or a SELECT statement returned one or more rows. Otherwise, it returns FALSE.
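A short hedged sketch of these attributes in action (using the sag_test_1 table from the examples below):

declare
cursor c1 is select eno from sag_test_1;
v_eno number;
begin
if not c1%isopen then        -- %ISOPEN: the cursor is not open yet
open c1;
end if;
fetch c1 into v_eno;
if c1%found then             -- %FOUND: the fetch returned a row
dbms_output.put_line('Fetched ENO = '||v_eno);
end if;
close c1;
end;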
declare
CURSOR c1
IS
SELECT eno,dno FROM sag_test_1;
v_eno NUMBER;
v_dno NUMBER;
begin
OPEN c1;
LOOP
FETCH c1 INTO v_eno,v_dno;
EXIT WHEN c1%NOTFOUND;
dbms_output.put_line('EMP_NO:='||v_eno||'DEPT_NO:='||v_dno);
END LOOP;
OPEN c1;
LOOP
FETCH c1 INTO v_eno,v_dno;
EXIT WHEN c1%NOTFOUND;
dbms_output.put_line('EMP_NO:='||v_eno||'DEPT_NO:='||v_dno);
END LOOP;
CLOSE c1;
CLOSE c1;
end;
declare
CURSOR c1
IS
SELECT eno,dno FROM sag_test_1;
CURSOR c2 IS
SELECT ename,sal
FROM sag_test_1;
v_eno NUMBER;
v_dno NUMBER;
v_ename VARCHAR2(10);
v_sal NUMBER;
begin
OPEN c1;
LOOP
FETCH c1 INTO v_eno,v_dno;
EXIT WHEN c1%NOTFOUND;
dbms_output.put_line('EMP_NO:='||v_eno||'DEPT_NO:='||v_dno);
END LOOP;
OPEN c2;
LOOP
FETCH c2 INTO v_ename,v_sal;
EXIT WHEN c2%NOTFOUND;
dbms_output.put_line('Ename:='||v_ename||'Salary:='||v_sal);
END LOOP;
CLOSE c2;
CLOSE c1;
end;
declare
CURSOR c1
IS
SELECT eno,dno FROM sag_test_1;
CURSOR c1
IS
SELECT ename,sal FROM sag_test_1;
v_eno NUMBER;
v_dno NUMBER;
v_ename VARCHAR2(10);
sal NUMBER;
begin
OPEN c1;
LOOP
FETCH c1 INTO v_eno,v_dno;
EXIT WHEN c1%NOTFOUND;
dbms_output.put_line('EMP_NO:='||v_eno||'DEPT_NO:='||v_dno);
END LOOP;
OPEN c1;
LOOP
FETCH c1 INTO v_ename,sal;
EXIT WHEN c1%NOTFOUND;
dbms_output.put_line('EMP_Name:='||v_ename||'Salary:='||sal);
END LOOP;
CLOSE c1;
CLOSE c1;
end;
Can we open the same cursor but with different input parameters?
-> No, it is not possible.
We will get an error like:
DECLARE
-- assumed declaration (the original was lost in extraction): a single cursor with one numeric parameter
CURSOR C1(P_DNO NUMBER) IS
SELECT EMPNO, SAL FROM EMP WHERE DEPTNO = P_DNO;
V_EMPNO EMP.EMPNO%TYPE;
V_SAL EMP.SAL%TYPE;
BEGIN
OPEN C1(10);
LOOP
FETCH C1 INTO V_EMPNO,V_SAL;
DBMS_OUTPUT.PUT_LINE('EMP WHOSE DEPTNO IS 10: EMP NO = ' || V_EMPNO || ' SAL = ' || V_SAL);
EXIT WHEN (C1%NOTFOUND=TRUE);
END LOOP;
CLOSE C1;
OPEN C1('MANAGER');
LOOP
FETCH C1 INTO V_EMPNO,V_SAL;
DBMS_OUTPUT.PUT_LINE('EMP WHOSE JOB IS MANAGER: EMP NO = ' || V_EMPNO || ' SAL = ' || V_SAL);
EXIT WHEN (C1%NOTFOUND=TRUE);
END LOOP;
CLOSE C1;
END;
DECLARE
-- assumed declarations (the originals were lost in extraction): two cursors with different parameter types
CURSOR C1(P_DNO NUMBER) IS
SELECT EMPNO, SAL FROM EMP WHERE DEPTNO = P_DNO;
CURSOR C2(P_JOB VARCHAR2) IS
SELECT EMPNO, SAL FROM EMP WHERE JOB = P_JOB;
V_EMPNO EMP.EMPNO%TYPE;
V_SAL EMP.SAL%TYPE;
BEGIN
OPEN C1(10);
LOOP
FETCH C1 INTO V_EMPNO,V_SAL;
DBMS_OUTPUT.PUT_LINE('EMP WHOSE DEPTNO IS 10: EMP NO = ' || V_EMPNO || ' SAL = ' || V_SAL);
EXIT WHEN (C1%NOTFOUND=TRUE);
END LOOP;
CLOSE C1;
OPEN C2('MANAGER');
LOOP
FETCH C2 INTO V_EMPNO,V_SAL;
DBMS_OUTPUT.PUT_LINE('EMP WHOSE JOB IS MANAGER: EMP NO = ' || V_EMPNO || ' SAL = ' || V_SAL);
EXIT WHEN (C2%NOTFOUND=TRUE);
END LOOP;
CLOSE C2;
END;
Ref Cursor
A REF Cursor is a datatype that holds a cursor value in the same way that a
VARCHAR2 variable will hold a string value.
A REF Cursor can be opened on the server and passed to the client as a unit rather than fetching one row at a time. One can use a ref cursor as the target of an assignment, and it can be passed as a parameter to other program units. Ref cursors are opened with an OPEN FOR statement. In most other ways they behave similarly to normal cursors.
Example
create or replace procedure test_proc   -- header reconstructed; parameter names assumed
(
p_dno in varchar2,
p_output out sys_refcursor
)
as
begin
open p_output for
select empno, ename, sal, deptno
from emp
where deptno = p_dno;
end;
declare
t sys_refcursor;
v_empno emp.empno%type;
v_ename emp.ename%type;
v_sal emp.sal%type;
v_deptno emp.deptno%type;
begin
test_proc('10',t);
LOOP
FETCH t
INTO v_empno,v_ename,v_sal,v_deptno;
EXIT WHEN t%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(v_empno|| ' | ' || v_ename || ' | ' || v_sal || ' | ' ||
v_deptno);
END LOOP;
CLOSE t;
end;
1) Strong Ref Cursor:
--> When a return type is included, it is called a strong (or static) ref cursor.
--> A strong ref cursor supports different SELECT statements, but all must return the same structure; the tables need not be the same.
2) Weak Ref Cursor:
--> This ref cursor allows any type of SELECT statement irrespective of the data structure, i.e. any table.
Ref Cursor:
Syntax:
type <typename> is ref cursor [return <returntype>];
syntax for the open statement:
open <cursor_variable> for <select_statement>;

declare
------------------ strong cursor (declaration section reconstructed; original lost) --------------------------
type refcur is ref cursor return emp%rowtype;
ec refcur;
v_ec emp%rowtype;
begin
open ec for select * from emp where deptno = 10;
loop
fetch ec into v_ec;
exit when ec%notfound;
print(v_ec.empno);
print(v_ec.ename);
end loop;
close ec;
print('-------------------------------------------------------------------------');
open ec for select * from emp;
loop
fetch ec into v_ec;
exit when ec%notfound;
print(v_ec.empno);
print(v_ec.ename);
end loop;
close ec;
end;
declare
------------------weak cursor --------------------------
type refcur is ref cursor;
xc refcur;
v_Ec emp%rowtype;
v_dc dept%rowtype;
begin
open xc for select * from emp;
loop
fetch xc into v_ec;
exit when xc%notfound;
print(v_ec.ename);
print(v_Ec.empno);
end loop;
close xc;
print('--------------------------------------');
open xc for select * from dept;
loop
fetch xc into v_dc;
exit when xc%notfound;
print(v_dc.deptno);
print(v_dc.dname);
print(v_dc.loc);
end loop;
close xc;
end;
EXCEPTION HANDLING
CREATE OR REPLACE FUNCTION SAG_S5_FUN(F_ENO NUMBER)
RETURN NUMBER
AS
V_COUNT NUMBER;
V_NO_DATA EXCEPTION;
PRAGMA EXCEPTION_INIT(V_NO_DATA, -20009);
BEGIN
SELECT COUNT(*) INTO V_COUNT FROM SAG_TEST_EMP WHERE
EMP_NO = F_ENO;
IF (V_COUNT != 1) THEN
RAISE V_NO_DATA;
ELSE
DBMS_OUTPUT.PUT_LINE(V_COUNT);
RETURN V_COUNT;
END IF;
EXCEPTION
WHEN V_NO_DATA THEN
RAISE_APPLICATION_ERROR(-20009, 'No such emp exist');
RETURN V_COUNT;
END;
declare
child_rec exception;
pragma exception_init(child_rec,-02292);
begin
-- a DELETE of a parent row that has child rows raises ORA-02292 here (statement lost in extraction)
exception
when child_rec then
dbms_output.put_line('Child found');
end;
O/P:-
Child found
Statement processed.
0.01 seconds
declare
child_rec exception;
pragma exception_init(child_rec,-02292);
begin
-- same DELETE as above
exception
when child_rec then
raise_application_error(-20004,'Child found');   -- handler reconstructed from the output below
end;
o/p:-
ORA-20004: Child found
declare
child_rec exception;
pragma exception_init(child_rec,-20004);
begin
-- same DELETE as above; the pragma now maps child_rec to -20004, so ORA-02292 is no longer caught
exception
when child_rec then
dbms_output.put_line('Child found');
end;
create table emp_6   -- CREATE line reconstructed from the narrative below
(
eno number,
ename varchar2(50),
sal number,
dno number
);
Now we want to insert data from the emp_5 table into the emp_6 table in the fastest way, that is, by using BULK COLLECT & FORALL.
Step 1:- In this step we will not handle any exception. Then check what will be the
output.
declare
type my_rec is record(eno number,ename varchar2(50),sal number,dno varchar2(50));
type my_tab is table of my_rec index by binary_integer;
t my_tab;
v_sql varchar2(100);
begin
select * --eno,ename,sal,dno
bulk collect
into t
from emp_5
order by 1;
forall I in 1..t.count
insert into emp_6
values
(t(I).eno,t(I).ename,t(I).sal,t(I).dno);   -- insert reconstructed to match the later steps
end;
o/p:-
ORA-01722: invalid number
select * from emp_6 order by 1;
No data found
declare
type my_rec is record(eno number,ename varchar2(50),sal number,dno varchar2(50));
type my_tab is table of my_rec index by binary_integer;
t my_tab;
v_sql varchar2(100);
begin
select * --eno,ename,sal,dno
bulk collect
into t
from emp_5
order by 1;
forall I in 1..t.count
insert into emp_6
values
(t(I).eno,t(I).ename,t(I).sal,t(I).dno);
exception
when others then
for x in 1..sql%bulk_exceptions.count
loop
dbms_output.put_line(sql%bulk_exceptions(x).error_index||'-'||sqlerrm(-sql%bulk_exceptions(x).error_code));
end loop;
end;
O/p:-
Statement processed.
ENO   ENAME   SAL     DNO
1     a       15000   10
2     a2      25000   20
3     a3      21000   10
declare
type my_rec is record(eno number,ename varchar2(50),sal number,dno varchar2(50));
type my_tab is table of my_rec index by binary_integer;
t my_tab;
v_sql varchar2(100);
begin
select * --eno,ename,sal,dno
bulk collect
into t
from emp_5
order by 1;
forall I in 1..t.count save exceptions
insert into emp_6
values
(t(I).eno,t(I).ename,t(I).sal,t(I).dno);
exception
when others then
for x in 1..sql%bulk_exceptions.count
loop
dbms_output.put_line(sql%bulk_exceptions(x).error_index||'-'||sqlerrm(-sql%bulk_exceptions(x).error_code));
end loop;
end;
O/p:-
Statement processed.
ENO   ENAME   SAL     DNO
1     a       15000   10
2     a2      25000   20
3     a3      21000   10
5     a5      33000   20
7     a7      35000   20
PROCEDURE
Syntax for Procedure
CREATE [OR REPLACE] PROCEDURE procedure_name[ (parameter [,parameter]) ]
IS
[declaration_section]
BEGIN
executable_section
[EXCEPTION
exception_section]
END [procedure_name];
For e.g.
create or replace procedure test_proc   -- CREATE line and body reconstructed from the calling block below
(
p_dno number,
p_output out sys_refcursor
)
as
begin
open p_output for
select empno, ename, job, sal
from emp
where deptno = p_dno;
end;
declare
t sys_refcursor;
empno emp.empno%type;
ename emp.ename%type;
job emp.job%type;
sal emp.sal%type;
begin
test_proc(10,t);
loop
fetch t into empno,ename,job,sal;
exit when t%notfound;
dbms_output.put_line('Employee Number = ' || empno || ' Employee Name = ' || ename
|| ' Job = ' ||job|| ' Salary = ' || sal);
end loop;
close t;
end;
FUNCTION
For e.g.:-
By Using Object & Table Type:-
**********************************************************************
create or replace type my_obj is object
(
empno number,
ename varchar2(50),
sal number
);
**********************************************************************
create or replace type my_tab is table of my_obj;
**********************************************************************
create or replace function my_func(f_dno IN number)
return my_tab
as
t my_tab:=my_tab();
n integer:=0;
begin
dbms_output.put_line('Hii');
for i in (select empno, ename, sal from emp where deptno = f_dno)   -- loop header reconstructed; original lines lost
loop
t.extend;
n := n + 1;
t(n):=my_obj(i.empno,i.ename,i.sal);
dbms_output.put_line('Empno:= '|| t(n).empno||' Ename:= '|| t(n).ename|| ' Salary:' ||
t(n).sal);
end loop;
return t;
end;
**********************************************************************
**********************************************************************
By Using SYS_REFCURSOR:-
create or replace function my_func2(f_dno IN number)   -- header reconstructed from the calling block below
return sys_refcursor
as
f_output sys_refcursor;
begin
open f_output for
select * from emp
where deptno = f_dno;
return f_output;
end;
declare
t sys_refcursor;
x emp%rowtype;
begin
t:= my_func2(10);
loop
fetch t into x;
exit when t%notfound;
dbms_output.put_line(' Empno:= '|| x.empno ||' Emp Name:= ' || x.ename || ' Employee
Salary:= ' || x.sal);
end loop;
close t;
end;
PACKAGE
Package is schema object that groups logically related PL/SQL objects like TYPES,
PROCEDURE, FUNCTION, CURSOR, etc.
Package usually have two parts,
Package Specification
Package Body
Specification is like interface to an Application whereas Body is contains all definition
of objects.
Advantage of Package
Modularity:-
Modularity lets you break an application into smaller modules.
It reduces a complex problem to a set of simple problems.
Easy Application Design:-
When designing an application we need only the interface information in the package specification. You can compile a specification without its body; vice versa is not possible.
Information Hiding:-
With a package you can define which objects should be PUBLIC or PRIVATE. For e.g., if a package contains 4 subprograms, 3 of which are PUBLIC and 1 PRIVATE, the package hides the implementation of the PRIVATE subprogram, so that only the package body is affected if the implementation changes.
Added Functionality:-
A package can have public variables, cursors, etc. that are accessible to all subprograms executing in the environment. They also allow you to maintain data across transactions without storing it in the database.
Better Performance:-
When you call a package subprogram for the first time, the whole package is loaded into memory. Later calls to subprograms in the package therefore require no disk I/O.
Restriction on PACKAGE:-
You cannot reference remote packaged variables directly or indirectly. For example,
you cannot call the following procedure remotely because it references a packaged
variable in a parameter initialization clause:
For e.g.
CREATE OR REPLACE PACKAGE BODY emp_mgmt AS
-- opening lines reconstructed from the Oracle documentation example
tot_emps NUMBER;
tot_depts NUMBER;
FUNCTION hire (last_name VARCHAR2, job_id VARCHAR2,
manager_id NUMBER, salary NUMBER,
commission_pct NUMBER, department_id NUMBER)
RETURN NUMBER IS new_empno NUMBER;
BEGIN
SELECT employees_seq.NEXTVAL
INTO new_empno
FROM DUAL;
INSERT INTO employees
VALUES (new_empno, 'First', 'Last','[email protected]',
'(415)555-0100','18-JUN-02','IT_PROG',90000000,00,
100,110);
tot_emps := tot_emps + 1;
RETURN(new_empno);
END;
FUNCTION create_dept(department_id NUMBER, location_id NUMBER)
RETURN NUMBER IS
new_deptno NUMBER;
BEGIN
SELECT departments_seq.NEXTVAL
INTO new_deptno
FROM dual;
INSERT INTO departments
VALUES (new_deptno, 'department name', 100, 1700);
tot_depts := tot_depts + 1;
RETURN(new_deptno);
END;
PROCEDURE remove_emp (employee_id NUMBER) IS
BEGIN
DELETE FROM employees
WHERE employees.employee_id = remove_emp.employee_id;
tot_emps := tot_emps - 1;
END;
PROCEDURE remove_dept(department_id NUMBER) IS
BEGIN
DELETE FROM departments
WHERE departments.department_id = remove_dept.department_id;
tot_depts := tot_depts - 1;
SELECT COUNT(*) INTO tot_emps FROM employees;
END;
PROCEDURE increase_sal(employee_id NUMBER, salary_incr NUMBER) IS
curr_sal NUMBER;
BEGIN
SELECT salary INTO curr_sal FROM employees
WHERE employees.employee_id = increase_sal.employee_id;
IF curr_sal IS NULL
THEN RAISE no_sal;
ELSE
UPDATE employees
SET salary = salary + salary_incr
WHERE employee_id = employee_id;
END IF;
END;
PROCEDURE increase_comm(employee_id NUMBER, comm_incr NUMBER) IS
curr_comm NUMBER;
BEGIN
SELECT commission_pct
INTO curr_comm
FROM employees
WHERE employees.employee_id = increase_comm.employee_id;
IF curr_comm IS NULL
THEN RAISE no_comm;
ELSE
UPDATE employees
SET commission_pct = commission_pct + comm_incr;
END IF;
END;
END emp_mgmt;
/
end;
-- tail of a packaged function (header lost in extraction): it returns the salary for f_empno
begin
select sal into v_sal from emp where empno=f_empno;
return v_sal;
end;
end;
TRIGGER
A trigger is an event-based stored program.
Triggers are not called directly; they run when the triggering statement is issued.
A trigger monitors changes in the state of the database.
Database triggers differ from PL/SQL functions & procedures because you cannot call them directly.
Database triggers are fired when the triggering event occurs in the database. This makes them a powerful tool to manage the database.
You can do the following with triggers:
Control the behavior of DDL statements.
Control the behavior of DML statements.
Enforce referential integrity, complex business rules & security policies.
* Types of Trigger:-
1. DDL Trigger
2. DML Trigger
3. Compound Trigger
4. Instead of Trigger
5. System or Database Event Trigger
1. DDL Trigger:-
These triggers fire when you CREATE, ALTER & DROP objects in the database.
They are useful to control or monitor DDL statements.
2. DML Trigger:-
These triggers fire when you INSERT, UPDATE & DELETE data in a table.
You can fire them once per statement or once for each changed row, using the statement-level or row-level trigger type.
You can use these triggers to control DML statements.
3. Compound Trigger:-
These triggers act as both "ROW LEVEL" & "STATEMENT LEVEL" triggers when you INSERT, UPDATE & DELETE data in a table.
They let you capture information at 4 timing points:
Before firing statement.
Before each row changing from firing statement.
After each row changing from firing statement.
After firing statement.
4. Instead of Trigger:-
These triggers enable you to intercept DML statements issued against a view.
They allow you to make an otherwise non-updatable view updatable.
* Limitation of Trigger:-
A trigger body can't be larger than 32,760 bytes, because the trigger body is stored in a LONG data type column. This means we should keep our trigger body as small as possible. We can solve this problem by keeping the coding logic in other schema objects such as procedures, functions & packages. Another advantage of keeping the code in such objects is that we can WRAP it, which is not possible for a trigger.
DML TRIGGER
create or replace trigger sag_test_trig
before insert or update or delete
on emp
for each row
DECLARE
47 of 129
v_user varchar2(50);
BEGIN
select user into v_user from dual;
case
when inserting then
insert into emp_log
values
(:new.empno,v_user,'INSERT',sysdate);
when updating then   -- UPDATE & DELETE branches reconstructed from the log output below
insert into emp_log
values
(:new.empno,v_user,'UPDATE',sysdate);
when deleting then
insert into emp_log
values
(:old.empno,v_user,'DELETE',sysdate);
end case;
end;
EMPNO USER_NAME OPERATION LOG_TIME
44 APEX_PUBLIC_USER DELETE 04/19/2016
44 APEX_PUBLIC_USER INSERT 04/19/2016
7839 APEX_PUBLIC_USER UPDATE 04/19/2016
CREATE OR REPLACE TRIGGER sag_test_emp_bef_trg   -- trigger header reconstructed; name assumed
BEFORE INSERT OR UPDATE OR DELETE ON sag_test_emp
DECLARE
V_COUNT NUMBER;
V_USER VARCHAR2 (100);
INVALID_DML EXCEPTION;
PRAGMA EXCEPTION_INIT (INVALID_DML,-20113);   -- user-defined error numbers must lie in -20000..-20999
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
-- (the SELECT populating V_USER and V_COUNT with the office-hours check was lost in extraction)
IF(V_COUNT=0)
THEN
RAISE_APPLICATION_ERROR(-20113,'You cannot perform DML (INSERT, UPDATE, DELETE) operations out of office time');
ELSE
CASE WHEN INSERTING THEN
INSERT INTO PROJECT_DML
VALUES
(
UPPER(TRIM(V_USER)),
SYSDATE,
UPPER(TRIM('SAG_TEST_EMP')),
UPPER(TRIM('BEFORE INSERT HAS CROSS CHECKED'))
);
WHEN UPDATING THEN
INSERT INTO PROJECT_DML
VALUES
(
UPPER(TRIM(V_USER)),
SYSDATE,
UPPER(TRIM('SAG_TEST_EMP')),
UPPER(TRIM('BEFORE UPDATE HAS CROSS CHECKED'))
);
WHEN DELETING THEN
INSERT INTO PROJECT_DML
VALUES
(
UPPER(TRIM(V_USER)),
SYSDATE,
UPPER(TRIM('SAG_TEST_EMP')),
UPPER(TRIM('BEFORE DELETE HAS CROSS CHECKED'))
);
END CASE;
END IF;
COMMIT;
END;
CREATE OR REPLACE TRIGGER sag_test_emp_aft_trg   -- trigger header reconstructed; name assumed
AFTER INSERT OR UPDATE OR DELETE ON sag_test_emp
DECLARE
v_user VARCHAR2(100);
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
SELECT USER INTO v_user FROM dual;
CASE
WHEN inserting THEN
INSERT INTO project_dml
VALUES
(
UPPER(TRIM(v_user)),
SYSDATE,
UPPER(TRIM('sag_test_emp')),
UPPER(TRIM('AFTER INsert has been cross checked'))
);
WHEN updating THEN
INSERT INTO project_dml
VALUES
(
UPPER(TRIM(v_user)),
SYSDATE,
UPPER(TRIM('sag_test_emp')),
UPPER(TRIM('AFTER Update has been cross checked'))
);
WHEN deleting THEN
INSERT INTO project_dml
VALUES
(
UPPER(TRIM(v_user)),
SYSDATE,
UPPER(TRIM('sag_test_emp')),
UPPER(TRIM('AFTER Delete has been cross checked'))
);
END CASE;
COMMIT;
END;
For e.g.:-
INSERT INTO sag_test_emp   -- INSERT line reconstructed; original lost
VALUES
(
12,'l',30000,20,TRUNC(SYSDATE)
);
COMMIT;
UPDATE sag_test_emp
SET ename='Janny'
WHERE emp_no=12;
COMMIT;
SELECT * FROM project_dml ORDER BY 2 DESC;
DELETE sag_test_emp
WHERE emp_no=12;
COMMIT;
*DDL Trigger:-
Oracle provides DDL triggers to audit all schema changes and can report the exact
change, when it was made, and by which user. There are several ways to audit within
Oracle and the following auditing tools are provided:
DDL triggers: Using the Data Definition Language (DDL) triggers, the Oracle DBA
can automatically track all changes to the database, including changes to tables,
indexes, and constraints. The data from this trigger is especially useful for change
control for the Oracle DBA.
-- tail of a DDL audit trigger example (the CREATE TRIGGER header and audit INSERT were lost in extraction)
);
END;
/
* Compound Trigger:-
CREATE OR REPLACE TRIGGER <trigger-name>
FOR <dml-event> ON <table-name>
COMPOUND TRIGGER
-- Global declaration.
g_global_variable VARCHAR2(10);
BEFORE STATEMENT IS
BEGIN
NULL; -- Do something here.
END BEFORE STATEMENT;
AFTER STATEMENT IS
BEGIN
NULL; -- Do something here.
END AFTER STATEMENT;
END <trigger-name>;
/
example no :- 01
CREATE OR REPLACE TRIGGER sag_test_5_trigg_221113
FOR INSERT ON sag_test_2
COMPOUND TRIGGER
BEFORE STATEMENT IS
BEGIN
INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'BEFORE STATEMENT INSERT','SAG_TEST_2',current_date);
END BEFORE STATEMENT;
AFTER STATEMENT IS
BEGIN
INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'AFTER STATEMENT INSERT','SAG_TEST_2',current_date);
END AFTER STATEMENT;
END;
example no:-02
CREATE OR REPLACE TRIGGER sag_test_6_trigg_221113
FOR INSERT OR UPDATE OR DELETE ON sag_test_2
COMPOUND TRIGGER
BEFORE STATEMENT IS
BEGIN
sag_test_6_pro_befstate_221113; -- Calling respective Procedure
END BEFORE STATEMENT;
AFTER STATEMENT IS
BEGIN
sag_test_6_pro_aftstate_221113; -- Calling respective Procedure
END AFTER STATEMENT;
END;
CREATE OR REPLACE PROCEDURE sag_test_6_pro_befstate_221113   -- header reconstructed from the trigger above
AS
BEGIN
CASE
WHEN inserting THEN
INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'BEFORE STATEMENT INSERT','SAG_TEST_2',current_date);
END CASE;
END;
CREATE OR REPLACE PROCEDURE sag_test_6_pro_befechrw_221113   -- header reconstructed; name assumed
AS
BEGIN
CASE
WHEN inserting THEN
INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'BEFORE EACH ROW INSERT','SAG_TEST_2',current_date);
WHEN deleting THEN
INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'BEFORE EACH ROW DELETE','SAG_TEST_2',current_date);
END CASE;
END;
CREATE OR REPLACE PROCEDURE sag_test_6_pro_aftechrw_221113
AS
BEGIN
CASE
WHEN inserting THEN
INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'AFTER EACH ROW INSERT','SAG_TEST_1',current_date);
END CASE;
END;
CREATE OR REPLACE PROCEDURE sag_test_6_pro_aftstate_221113   -- header reconstructed from the trigger above
AS
BEGIN
CASE
WHEN inserting THEN
INSERT INTO sag_test_1_logs
VALUES
(sag_test_1_seq.nextval,'AFTER STATEMENT INSERT','SAG_TEST_2',current_date);
END CASE;
END;
* Instead of Trigger
-- (the CREATE TABLE BaseTable statement was lost in extraction)
--Create a view that contains all columns from the base table.
CREATE VIEW InsteadView
AS SELECT ID, Color, Material, ComputedCol
FROM BaseTable;
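A hedged sketch of an INSTEAD OF trigger on such a view in Oracle syntax (table and column names assumed from the fragment above):

CREATE OR REPLACE TRIGGER instead_view_trg
INSTEAD OF INSERT ON InsteadView
FOR EACH ROW
BEGIN
-- redirect the insert against the view to the base table, skipping the computed column
INSERT INTO BaseTable (ID, Color, Material)
VALUES (:new.ID, :new.Color, :new.Material);
END;
/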
COLLECTION
A collection is an ordered group of elements of the same type. The following types of collections exist in Oracle:
Bounded:- which has a limit on the number of elements.
Unbounded:- which has no fixed limit.
1) Associative Array:-
An associative array is an array which can be defined only within a PL/SQL program.
Neither the array structure nor the data is stored in the database.
It holds elements of a similar data type.
Each cell of the array is identified by a subscript, index, or cell number.
The index can be a number or a string.
Example No: - 01
declare
type get_ascii is table of number index by binary_integer;   -- type declaration reconstructed; original lost
ascii_var get_ascii;
begin
for i in 1..30
loop
ascii_var(i) := ascii(i);
end loop;
for i in 1..30
loop
dbms_output.put_line(i || ' = ' || ascii_var(i));   -- printing loop reconstructed from the output below
end loop;
end;
1 = 49
2 = 50
3 = 51
4 = 52
5 = 53
6 = 54
7 = 55
.
.
.
.
So on
declare
type my_tab is table of number;
t my_tab;
v_count number:=0;
begin
-- body reconstructed from the output below: bulk collect empnos, then print each with a running counter
select empno bulk collect into t from emp;
for i in t.first..t.last
loop
dbms_output.put_line(t(i));
v_count := v_count + 1;
dbms_output.put_line(v_count);
end loop;
end;
o/p:-
7369
1
7499
2
7521
3
7566
4
7654
5
7698
6
7782
7
7788
8
7839
9
7844
10
7876
11
7900
12
7902
13
7934
14
Example No: - 02
DECLARE
TYPE salary IS TABLE OF NUMBER INDEX BY VARCHAR2(20);
salary_list salary;
name VARCHAR2(20);
BEGIN
-- adding elements to the table
salary_list('Rajnish') := 62000;
salary_list('Minakshi') := 75000;
salary_list('Martin') := 100000;
salary_list('James') := 78000;
DECLARE
CURSOR c_customers is
select name from customers;
-- (the collection declaration and fetch loop that print the customers were lost in extraction)
When the above code is executed at SQL prompt, it produces the following result:
Customer(1): Ramesh
Customer(2): Khilan
Customer(3): kaushik
Customer(4): Chaitali
Customer(5): Hardik
Customer(6): Komal
2) Nested Table:-
A nested table is a persistent form of collection which can be created in the database and in PL/SQL as well.
It is an unbounded form of collection in which the index is maintained by Oracle.
Oracle automatically marks the minimum index as 1 and increments from there.
When nested tables are declared in PL/SQL they behave as a ONE-DIMENSIONAL ARRAY.
A nested table column in a table resembles a table within a table, but Oracle uses out-of-line storage to hold the nested table data.
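The CREATE statements for this example were lost; a plausible reconstruction based on the INSERT below (the column name addres is taken from the later UPDATE):

CREATE OR REPLACE TYPE nest_tab_1 IS TABLE OF VARCHAR2(100);
/
CREATE TABLE sag_test_1
(
eno NUMBER,
ename VARCHAR2(50),
addres nest_tab_1
)
NESTED TABLE addres STORE AS addres_store_tab;

INSERT INTO sag_test_1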
VALUES
(1,'Sam', nest_tab_1('Pune','Maharashtra'));
COMMIT;
o/p:-
Update operation on Nested Table:
UPDATE sag_test_1
SET addres =nest_tab_1('Mumbai','Maharashtra')
WHERE eno=1;
commit;
DELETE sag_test_1
WHERE eno=1;
INSERT INTO sag_test_1
VALUES
(1,'Sam',nest_tab_1('Build No:-53','Room No:- 103','Complex Name:- River Wood Park','Road:- Kalyan Shill Road',
'Landmark:- Opp. Desai Naka','Post_Box:- Padale','Pincode-421204'));
COMMIT;
3) VARRAY:-
A VARRAY is a modified form of the NESTED TABLE.
A VARRAY (variable-size array) is a bounded & persistent form of collection.
The VARRAY declaration defines the maximum number of elements the VARRAY can accommodate.
The minimum bound is 1 & the maximum is the size of the VARRAY.
Like a nested table, a VARRAY can be created in the database & in PL/SQL.
VARRAYs are stored in line with their parent record as a column value in the parent table.
For e.g.
CREATE OR REPLACE TYPE varray_test_1 IS VARRAY(5) OF NUMBER;
COMMIT;
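A short hedged usage sketch for this type (table and column names assumed):

CREATE TABLE sag_varray_demo
(
eno NUMBER,
phone_nos varray_test_1      -- at most 5 numbers per row
);

INSERT INTO sag_varray_demo
VALUES (1, varray_test_1(9812345670, 9812345671));
COMMIT;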
PARTITIONING TABLE
As the number of rows in a table increases, manageability & performance decrease. To overcome this problem Oracle introduced partitioned tables. In a partitioned table, the huge data of a single table is divided into multiple partitions. With the help of partitioned tables we can achieve the following goals:
Performance improves:- Oracle has to search only the respective partition instead of searching the entire table.
Ease of management:- loading & deleting data is easier for a partition than for an entire table.
Easier backup & recovery:- with a partitioned table we get more options for backup & recovery than with one large table.
Types of Partition:-
Oracle has the following types of partitions.
Single Level
1. Range Partition
2. List Partition
3. Hash Partition
Composite Partition
Oracle supports the following composite partitions:
1. Range-Hash Partition
2. Range-List Partition
For a visual, see the following diagram, taken from the link below:
http://docs.oracle.com/cd/B28359_01/server.111/b32024/partition.htm#CACFECJC
1. Single Level Partition:-
Range Partition:-
for e.g.:-
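The CREATE TABLE statement for this example was lost; a plausible sketch, assuming sag_emp_1 (used by the inserts below) is range-partitioned on the sal column:

CREATE TABLE sag_emp_1
(
eno NUMBER,
ename VARCHAR2(100),
sal NUMBER,
dno NUMBER
)
PARTITION BY RANGE (sal)
(
PARTITION p1 VALUES LESS THAN (10000),
PARTITION p2 VALUES LESS THAN (20000),
PARTITION p3 VALUES LESS THAN (30000),
PARTITION p4 VALUES LESS THAN (MAXVALUE)
);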
When you perform a DML operation on such a partitioned table, it affects the respective partition rather than the entire table.
For e.g.:- 01
INSERT INTO sag_emp_1
VALUES
(1,'A',9000,10);   -- insert reconstructed; the original statement was lost
COMMIT;
For e.g.:- 02
INSERT INTO sag_emp_1
VALUES
(1,'B',26000,10);
commit;
for e.g.:- 03
2 List Partition:-
In this partitioning technique you specify a list of values of the partition key as the description of each partition.
for e.g.:-
CREATE TABLE sag_emp_2
(
eno NUMBER,
ename VARCHAR2(100),
sal NUMBER,
dno NUMBER,
designation VARCHAR2(100)
)
PARTITION BY LIST (designation)
(
PARTITION IT_field VALUES
('Trainee_Engineer','Oracle_Developer','Java_Developer','Dot_Net_Developer',
'Software_Developer','IT_Project_Manager'),
PARTITION Electronics_field VALUES ('Electrition','Technition','QA','Superwiser'),
PARTITION Teaching_field VALUES
('Lectural','Class_teacher','HOD','Wise_Principle','Principle','Trusty')
);
COMMIT;
Hash Partition
HASH partitioning maps the data among the partitions based on a hashing algorithm.
The hash algorithm distributes the data among the partitions, giving each partition approximately the same size.
HASH partitioning is used for even distribution of data among a predefined number of partitions.
With RANGE & LIST you need to specify which value should go in which partition, whereas in HASH partitioning this is handled by the database itself.
To partition a table using a HASH function, append the CREATE TABLE statement with a PARTITION BY HASH (expr) clause, where expr is a column name.
After this clause we write PARTITIONS num, where num is the number of partitions into which the table is divided.
The following statement creates a table that uses hashing on the store_id column and
is divided into 4 partitions:
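The statement itself is missing from the notes; a hedged sketch in Oracle syntax (table and column names taken from the sentence above):

CREATE TABLE employees_h
(
id NUMBER,
fname VARCHAR2(30),
lname VARCHAR2(30),
store_id NUMBER
)
PARTITION BY HASH (store_id)
PARTITIONS 4;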
ALTER TABLE PARTITION Option:-
We can use the ALTER TABLE statement with a partitioned table for repartitioning: adding, dropping, merging & splitting partitions.
You can add a new partition p3 to this table for storing values less than 2002 as
follows:
ALTER TABLE t1
ADD PARTITION
(
PARTITION p3 VALUES LESS THAN (2002)
);
DROP PARTITION can be used to drop one or more RANGE or LIST partitions.
This statement cannot be used with HASH or KEY partitions; instead, use
COALESCE PARTITION (see below). Any data that was stored in the dropped
partitions named in the partition_names list is discarded. For example, given the
table t1 defined previously, you can drop the partitions named p0 and p1 as shown
here:
ALTER TABLE t1
DROP PARTITION p0, p1;
It is also possible to delete the rows from a selected partition using the TRUNCATE PARTITION option.
To DELETE the rows of partition p0 we can use the following command:
ALTER TABLE T1
TRUNCATE PARTITION p0;
The statement just shown has the same effect as the following DELETE statement:
For example, this statement deletes all rows from partitions p1 and p3:
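The statements referred to above were lost; hedged reconstructions (assuming t1 is partitioned on a year column, with p0 holding the lowest range, and using the multi-partition syntax of this passage):

DELETE FROM t1 WHERE year < 1991;           -- roughly equivalent to TRUNCATE PARTITION p0
ALTER TABLE t1 TRUNCATE PARTITION p1, p3;   -- deletes all rows from partitions p1 and p3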
You can also use the ALL keyword in place of the list of partition names; in this case,
the statement acts on all partitions in the table.
You can verify that the rows were dropped by checking the
INFORMATION_SCHEMA.PARTITIONS table, using a query such as this one:
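The query was lost; in MySQL syntax (from which this passage is drawn) it would look like:

SELECT PARTITION_NAME, TABLE_ROWS
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_NAME = 't1';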
You can reduce the number of partitions used by t2 from 6 to 4 using the following
statement:
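The statement itself was lost; a hedged reconstruction (COALESCE PARTITION removes the given number of hash partitions):

ALTER TABLE t2 COALESCE PARTITION 2;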
The data contained in the last number partitions will be merged into the remaining
partitions. In this case, partitions 4 and 5 will be merged into the first 4 partitions
(the partitions numbered 0, 1, 2, and 3).
PRAGMA
PRAGMA is a compiler directive keyword.
It is used to provide instructions to the compiler.
It is defined in the DECLARE section of a PL/SQL block.
There are 5 types of pragmas:
1. PRAGMA AUTONOMOUS_TRANSACTION
2. PRAGMA EXCEPTION_INIT
3. PRAGMA RESTRICT_REFERENCES
4. PRAGMA SERIALLY_REUSABLE
5. PRAGMA INLINE
1. PRAGMA AUTONOMOUS_TRANSACTION:-
Prior to Oracle 8.1, each Oracle session could have at most one active transaction at a time. In other words, changes were all or nothing. Oracle 8i addressed this issue & came up with a solution called the "AUTONOMOUS TRANSACTION".
For instance, if we perform a COMMIT or ROLLBACK within a block, it should not affect the transaction outside the block. In such a scenario PRAGMA AUTONOMOUS_TRANSACTION is used.
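A minimal hedged sketch (the log table app_log is assumed):

create or replace procedure log_message(p_msg varchar2)
as
pragma autonomous_transaction;
begin
insert into app_log (msg, log_time) values (p_msg, sysdate);
commit;   -- commits only this autonomous transaction, not the caller's work
end;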
2. PRAGMA EXCEPTION_INIT:-
This type of PRAGMA is used to bind user defined exception with particular error
number.
3. PRAGMA RESTRICT_REFERENCES:-
Defines the purity level of a packaged program. This is not required starting with Oracle8i. Prior to Oracle8i, if you were to invoke a function within a package specification from a SQL statement, you would have to provide a RESTRICT_REFERENCES directive to the PL/SQL engine for that function. This pragma confirms to the Oracle database that the function has the specified side-effects or ensures that it lacks any such side-effects.
Usage is as follows:
PRAGMA RESTRICT_REFERENCES (function name, WNDS [, WNPS] [, RNDS],
[, RNPS])
WNDS: Writes No Database State. States that the function will not perform any
DMLs.
WNPS: Writes No Package State. States that the function will not modify any Package
variables.
RNDS: Reads No Database State. Analogous to Write. This pragma affirms that the
function will not read any database tables.
RNPS: Reads No Package State. Analogous to Write. This pragma affirms that the
function will not read any package variables.
In some situations, only functions that guarantee those restrictions can be used.
The following is a simple example:
Let’s define a package made of a single function that updates a db table and returns a
number:
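The code for this example was lost; a hedged reconstruction (the deals table is assumed, as in the INLINE section later):

create or replace package pack is
function a return number;
end;
/
create or replace package body pack is
function a return number is
begin
update deals set nominal = nominal;   -- any DML makes the function impure
return 1;
end;
end;
/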
If we try to use the function pack.a in a query statement we'll get an error:
ORA-14551: cannot perform a DML operation inside a query
ORA-06512: at "MAXR.PACK", line 4
PL/SQL functions can be used inside a query statement only if they modify neither the database nor package variables.
This error can be discovered only at runtime, when the SELECT statement is executed.
How can we check for these errors at compile time? We can use PRAGMA RESTRICT_REFERENCES!
If we know that the function will be used in SQL we can define it as follows:
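A hedged sketch of that declaration in the package specification:

create or replace package pack is
function a return number;
pragma restrict_references(a, WNDS);
end;
/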
Declaring that the function A will not modify the database state (WNDS stands for
WRITE NO DATABASE STATE).
Once we have made this declaration, if a programmer, not knowing that the function
has to be used in a query statement, tries to write code for A that violates the
PRAGMA:
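With the pragma in place, compiling a body whose function A performs DML fails at compile time with an error like:

PLS-00452: Subprogram 'A' violates its associated pragma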
4. PRAGMA SERIALLY_REUSABLE:-
It tells the compiler that the package's variables are needed only for a single use. After this single use Oracle can free the associated memory. It's really useful for saving memory when a package uses a large temporary area just once in the session.
Let’s define a package with a single numeric variable “var” not initialized:
SQL> create or replace package pack is
  2    var number;
  3  end;
  4  /
If we assign a value to var, this will preserve that value for the whole session:
SQL> begin
  2    pack.var := 1;
  3  end;
  4  /

SQL> exec dbms_output.put_line('Var='||pack.var);
Var=1
If we use the PRAGMA SERIALLY_REUSABLE, var will preserve the value just
inside the program that initializes it, but is null in the following calls:
84 of 129
10 4 end;
11 5 /
12 Var=1
13
14 SQL> exec dbms_output.put_line('Var='||pack.var);
15 Var=
PRAGMA SERIALLY_REUSABLE is a way to change the default behavior of package variables, which is as useful as it is heavy on memory.
5. PRAGMA INLINE:-
Oracle 11g added a new feature that the optimizer can use to get better performance; it's called subprogram inlining.
The optimizer can (autonomously or on demand) choose to replace a subprogram call with a local copy of the subprogram.
declare
  total number;
begin
  total := calculate_nominal + calculate_interests;
end;
85 of 129
16 from deals;
17
18 return s;
19 end;
declare
  total number;
  v_calculate_nominal number;
  v_calculate_interests number;
begin
  select sum(nominal)
  into v_calculate_nominal
  from deals;

  select sum(interest)
  into v_calculate_interests
  from deals;

  total := v_calculate_nominal + v_calculate_interests;
end;
PRAGMA INLINE is the tool we have to drive this new feature.
If we don't want such an optimization we can do:
declare
  total number;
begin
  PRAGMA INLINE(calculate_nominal,'NO');
  PRAGMA INLINE(calculate_interests,'NO');
  total := calculate_nominal + calculate_interests;
end;
declare
  total number;
begin
  PRAGMA INLINE(calculate_nominal,'YES');
  total := calculate_nominal + calculate_interests;
end;
INDEX
o An INDEX is an Oracle object which is used to speed up access to a table.
o We should use an INDEX when rows are retrieved frequently but selectively (< 10 % of the total rows of the table) & when the column is used frequently in the WHERE clause.
o Basically there are two types of index,
Implicit Index
Explicit Index
o In Explicit Index further we have following types,
B-Tree Index
Bit Map Index
Function Base Index
Create an Index
Syntax
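The syntax block is missing here; the standard form, reconstructed from the component descriptions below, is:

CREATE [UNIQUE] INDEX index_name
ON table_name (column1, column2, ... column_n)
[COMPUTE STATISTICS];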
UNIQUE
It indicates that the combination of values in the indexed columns must be
unique.
index_name
The name to assign to the index.
table_name
The name of the table in which to create the index.
88 of 129
column1, column2, ... column_n
The columns to use in the index.
COMPUTE STATISTICS
It tells Oracle to collect statistics during the creation of the index. The statistics
are then used by the optimizer to choose a "plan of execution" when SQL
statements are executed.
Example
For example:
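The statement itself is missing; it would be:

CREATE INDEX supplier_idx
ON supplier (supplier_name);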
In this example, we've created an index on the supplier table called supplier_idx. It
consists of only one field - the supplier_name field.
We could also create an index with more than one field as in the example below:
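For instance (the second column name is assumed):

CREATE INDEX supplier_idx
ON supplier (supplier_name, city);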
We could also choose to collect statistics upon creation of the index as follows:
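For instance:

CREATE INDEX supplier_idx
ON supplier (supplier_name, city)
COMPUTE STATISTICS;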
In Oracle, you are not restricted to creating indexes on only columns. You can create
function-based indexes.
Syntax
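Again the syntax block is missing; the standard form is:

CREATE [UNIQUE] INDEX index_name
ON table_name (function1, function2, ... function_n)
[COMPUTE STATISTICS];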
UNIQUE
It indicates that the combination of values in the indexed columns must be
unique.
index_name
The name to assign to the index.
table_name
The name of the table in which to create the index.
function1, function2, ... function_n
The functions to use in the index.
COMPUTE STATISTICS
It tells Oracle to collect statistics during the creation of the index. The statistics
are then used by the optimizer to choose a "plan of execution" when SQL
statements are executed.
Example
For example:
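The statement itself is missing; it would be:

CREATE INDEX supplier_idx
ON supplier (UPPER(supplier_name));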
In this example, we've created an index based on the uppercase evaluation of the
supplier_name field.
However, to be sure that the Oracle optimizer uses this index when executing your
SQL statements, be sure that UPPER(supplier_name) does not evaluate to a NULL
value. To ensure this, add UPPER(supplier_name) IS NOT NULL to your WHERE
clause as follows:
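The query is missing; it would look like:

SELECT supplier_id, supplier_name, UPPER(supplier_name)
FROM supplier
WHERE UPPER(supplier_name) IS NOT NULL
ORDER BY UPPER(supplier_name);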
Rename an Index
Syntax
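The syntax block is missing; the standard form is:

ALTER INDEX index_name
RENAME TO new_index_name;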
Example
For example:
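The statement itself is missing; for instance (the new name is illustrative):

ALTER INDEX supplier_idx
RENAME TO supplier_index_name;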
If you forgot to collect statistics on the index when you first created it or you want to
update the statistics, you can always use the ALTER INDEX command to collect
statistics at a later date.
Syntax
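The syntax block is missing; the standard form is:

ALTER INDEX index_name
REBUILD COMPUTE STATISTICS;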
Example
For example:
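The statement itself is missing; it would be:

ALTER INDEX supplier_idx
REBUILD COMPUTE STATISTICS;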
In this example, we're collecting statistics for the index called supplier_idx.
Drop an Index
Syntax
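The syntax block is missing; the standard form is:

DROP INDEX index_name;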
Example
For example:
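The statement itself is missing; it would be:

DROP INDEX supplier_idx;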
1. B-Tree Index:-
By default Oracle creates a B-Tree index. In a B-Tree, you walk down the branches until you reach the node you want.
For e.g., if your tree starts at 50 & you are searching for 28, first you check whether 28 > 50. Since this is false, you move to the left side of the tree (50). Suppose you reach the main node 25; you then check whether 28 > 25. Since the answer is YES, you check the right side, & so on.
Oracle implements the B-Tree in a slightly different manner. An Oracle b-tree starts with two nodes:
1. Header
2. Leaf
The header contains pointers to the leaf nodes & the values stored in the leaf nodes. If the header block fills, a new header block is established and the former header block becomes a branch node. This is called a three-level B-Tree.
We can also create a multi-column index, also called a "concatenated index" or "complex index".
create index sales_keys on sales (book_key, store_key, order_number);   -- statement reconstructed from the discussion below

Index created.
select
order_number,
quantity
from
sales
where
book_key = 'B103';
Note that the lead column of the index is the book_key, so the database can use the
index in the query above. I can also use the sales_keys index in the queries below.
select
order_number,
quantity
from
sales
where
book_key = 'B103'
and
store_key = 'S105'
and
order_number = 'O168';
However, the database cannot use that index in the following query because the
WHERE clause does not contain the index lead column.
select
order_number,
quantity
from
sales
where
store_key = 'S105'
and
order_number = 'O168';
Also, note that in the query below, the database can answer the query from the index
and so will not access the table at all.
select
order_number
from
sales
where
store_key = 'S105'
and
book_key = 'B108';
As you can see, b-tree indexes are very powerful. You must remember that a multicolumn index cannot skip over columns, so the lead index column must be in the WHERE clause filters. Oracle has used b-tree indexes for many years, and they are appropriate for most of your indexing needs. However, the Oracle database provides specialized indexes with additional capabilities: the bit-mapped index and the function-based index.
2. Bit-Map Index:-
A bitmap index is most useful in a data warehouse environment because it is generally faster when you are only selecting data.
Bitmap indexes are smaller than B-Tree indexes, as they store only rowids & a series of bits.
For e.g.:-
The bitmaps stored may be the following (the actual storage depends on the algorithm
used internally, which is more complex than this example):
As you can tell from the preceding example, finding all of the females by searching for
the gender bit set to a ‘1’ in the example would be easy. You can similarly find all of
those who are married or even quickly find a combination of gender and marital status.
You should use b-tree indexes when columns are unique or near-unique; you should at
least consider bitmap indexes in all other cases. Although you generally would not use
a b-tree index when retrieving 40 percent of the rows in a table, using a bitmap index
generally makes this task faster than doing a full table scan.
You can use bitmap indexes even when retrieving large percentages (20–80 percent) of
a table.
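A hedged sketch of creating such an index (table and column names assumed from the discussion above):

CREATE BITMAP INDEX emp_gender_idx
ON employee (gender);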
3. Function Based Index:-
First we build a test table and populate it with enough data so that use of an index
would be advantageous.
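The CREATE TABLE statement was lost; a plausible reconstruction matching the insert loop below:

CREATE TABLE user_data (
id NUMBER,
first_name VARCHAR2(40),
last_name VARCHAR2(40),
gender VARCHAR2(1),
dob DATE
);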
BEGIN
FOR cur_rec IN 1 .. 2000 LOOP
IF MOD(cur_rec, 2) = 0 THEN
INSERT INTO user_data
VALUES (cur_rec, 'John' || cur_rec, 'Doe', 'M', SYSDATE);
ELSE
INSERT INTO user_data
VALUES (cur_rec, 'Jayne' || cur_rec, 'Doe', 'F', SYSDATE);
END IF;
COMMIT;
END LOOP;
END;
/
At this point the table is not indexed so we would expect a full table scan for any
query.
SET AUTOTRACE ON
SELECT *
FROM user_data
WHERE UPPER(first_name) = 'JOHN2';
Execution Plan
----------------------------------------------------------
Plan hash value: 2489064024
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 20 | 540 | 5 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| USER_DATA | 20 | 540 | 5 (0)| 00:00:01 |
-------------------------------------------------------------------------------
2.2 Build Regular Index
If we now create a regular index on the FIRST_NAME column we see that the index is
not used.
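The index creation is missing; it would be something like:

CREATE INDEX first_name_idx ON user_data (first_name);
EXEC DBMS_STATS.gather_table_stats(USER, 'USER_DATA', cascade => TRUE);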
SET AUTOTRACE ON
SELECT *
FROM user_data
WHERE UPPER(first_name) = 'JOHN2';
Execution Plan
----------------------------------------------------------
Plan hash value: 2489064024
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 20 | 540 | 5 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| USER_DATA | 20 | 540 | 5 (0)| 00:00:01 |
-------------------------------------------------------------------------------
2.3 Build Function-Based Index
If we now replace the regular index with a function-based index on the FIRST_NAME
column we see that the index is used.
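The index creation is missing; it would be something like:

DROP INDEX first_name_idx;
CREATE INDEX first_name_idx ON user_data (UPPER(first_name));
EXEC DBMS_STATS.gather_table_stats(USER, 'USER_DATA', cascade => TRUE);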
SET AUTOTRACE ON
SELECT *
FROM user_data
WHERE UPPER(first_name) = 'JOHN2';
Execution Plan
----------------------------------------------------------
Plan hash value: 1309354431
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 36 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| USER_DATA | 1 | 36 | 2 (0)|
00:00:01 |
|* 2 | INDEX RANGE SCAN | FIRST_NAME_IDX | 1 | | 1 (0)|
00:00:01 |
----------------------------------------------------------------------------------------------
2.4 Concatenated Columns
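The index creation for this step is missing; it would be something like a composite function-based index led by gender:

DROP INDEX first_name_idx;
CREATE INDEX first_name_idx ON user_data (gender, UPPER(first_name));
EXEC DBMS_STATS.gather_table_stats(USER, 'USER_DATA', cascade => TRUE);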
SET AUTOTRACE ON
SELECT *
FROM user_data
WHERE gender = 'M'
AND UPPER(first_name) = 'JOHN2';
Execution Plan
----------------------------------------------------------
Plan hash value: 1309354431
----------------------------------------------------------------------------------------------
| Id | Operation                   | Name           | Rows | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                |    1 |    36 |     3   (0)| 00:00:01 |
|  1 |  TABLE ACCESS BY INDEX ROWID| USER_DATA      |    1 |    36 |     3   (0)| 00:00:01 |
|* 2 |   INDEX RANGE SCAN          | FIRST_NAME_IDX |    1 |       |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------
HIERARCHICAL QUERY
LEVEL is a pseudocolumn that can only be used in queries containing a CONNECT BY clause.
In a hierarchical query we can either go from top to bottom (top-down approach)
or from bottom to top (bottom-up approach).
TOP-DOWN Approach:-
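The example output for this approach is missing from the notes; a minimal sketch against the standard emp table, walking down from the employee with no manager:
SELECT LPAD(' ', 2 * (LEVEL - 1)) || ename AS employee, LEVEL
FROM emp
START WITH mgr IS NULL
CONNECT BY PRIOR empno = mgr;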
Bottom Up Approach:-
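Likewise, a sketch of the bottom-up form, starting from one employee (empno 7369 here is an arbitrary choice) and walking up to the top manager:
SELECT ename, LEVEL
FROM emp
START WITH empno = 7369
CONNECT BY PRIOR mgr = empno;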
GLOBAL TEMPORARY TABLE
Data stored in a GTT is private, such that data inserted by a session can be accessed
only by that session.
ON COMMIT DELETE ROWS indicates that the data is deleted at the end of the
transaction (that is, on COMMIT).
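The setup that produced the output below is not shown; a minimal sketch (table and column names are assumptions):
CREATE GLOBAL TEMPORARY TABLE my_temp_table (
  id NUMBER
) ON COMMIT DELETE ROWS;

INSERT INTO my_temp_table VALUES (1);
SELECT COUNT(*) FROM my_temp_table;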
COUNT(*)
----------
1
SQL>
COMMIT;
COUNT(*)
----------
0
SQL>
In contrast, the ON COMMIT PRESERVE ROWS clause indicates that rows should
persist beyond the end of the transaction. They will only be removed at the end of the
session.
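Again a sketch of the missing setup (names are assumptions); the first count below is taken after COMMIT in the same session, and the second count (0) is presumably taken after reconnecting in a new session:
CREATE GLOBAL TEMPORARY TABLE my_temp_table (
  id NUMBER
) ON COMMIT PRESERVE ROWS;

INSERT INTO my_temp_table VALUES (1);
COMMIT;
SELECT COUNT(*) FROM my_temp_table;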
COUNT(*)
----------
1
SQL>
COUNT(*)
----------
0
We can also create indexes on temporary tables. The contents of an index follow the
same session-private scoping as the table data.
Export/Import can be used with a GTT to transfer the table definition, but no data
rows are processed.
EXTERNAL TABLE
An external table is complementary to the existing SQL*Loader functionality. It enables you
to access data in external sources as if it were in a table in the database. Prior to Oracle 10g
we could perform only read operations with external tables, but from Oracle 10g onwards
we can also perform write (unload) operations to external tables.
External tables are created by using the CREATE TABLE ... ORGANIZATION EXTERNAL
statement.
When you create an external table, you specify the following attributes:-
TYPE
o ORACLE_LOADER:- For loading only.
o ORACLE_DATAPUMP:- For both loading and unloading.
Default Directory
Access Parameters
Location
Execute the following SQL statements to set up a default directory (which contains the
data source) and to grant access to it:
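The statements themselves are missing from the notes; a plausible sketch (the directory path and the grantee are assumptions):
CREATE OR REPLACE DIRECTORY ext_tab_dir AS '/usr/apps/datafiles';
GRANT READ, WRITE ON DIRECTORY ext_tab_dir TO scott;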
Create a traditional table named emp:
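The DDL is missing; a sketch consistent with the column list in the INSERT statement below (datatypes are assumptions):
CREATE TABLE emp (
  emp_no         NUMBER(4),
  first_name     VARCHAR2(20),
  middle_initial VARCHAR2(1),
  last_name      VARCHAR2(20),
  hire_date      DATE,
  dob            DATE
);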
Load the data from the external table emp_load into the table emp:
SQL> INSERT INTO emp (emp_no,
  2                   first_name,
  3                   middle_initial,
  4                   last_name,
  5                   hire_date,
  6                   dob)
  7  (SELECT employee_number,
  8          employee_first_name,
  9          substr(employee_middle_name, 1, 1),
 10          employee_last_name,
 11          employee_hire_date,
 12          to_date(employee_dob,'month, dd, yyyy')
 13  FROM emp_load);

2 rows created.
Perform the following select operation to verify that the information in the .dat file
was loaded into the emp table:
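The verification query is not shown; presumably simply:
SELECT * FROM emp;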
Note:
Data can only be unloaded using the ORACLE_DATAPUMP
access driver.
Loading Data
When data is loaded, the data stream is read from the files specified by the
LOCATION and DEFAULT DIRECTORY clauses. The INSERT statement
generates a flow of data from the external data source to the Oracle SQL engine, where
data is processed. As data from the external source is parsed by the access driver and
provided to the external table interface, it is converted from its external representation
to its Oracle internal datatype.
Unloading Data Using the ORACLE_DATAPUMP Access Driver
To unload data, you use the ORACLE_DATAPUMP access driver. The data stream
that is unloaded is in a proprietary format and contains all the column data for every
row being unloaded.
An unload operation also creates a metadata stream that describes the contents of the
data stream. The information in the metadata stream is required for loading the data
stream. Therefore, the metadata stream is written to the datafile and placed before the
data stream.
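As a concrete illustration of an unload, a sketch (the table and dump file names are assumptions):
CREATE TABLE emp_unload
ORGANIZATION EXTERNAL
  (TYPE ORACLE_DATAPUMP
   DEFAULT DIRECTORY ext_tab_dir
   LOCATION ('emp.dmp'))
AS SELECT * FROM emp;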
Dealing with Column Objects
When the external table is accessed through a SQL statement, the fields of the external
table can be used just like any other field in a normal table. In particular, the fields can
be used as arguments for any SQL built-in function, PL/SQL function, or Java
function. This enables you to manipulate the data from the external source.
Although external tables cannot contain a column object, you can use constructor
functions to build a column object from attributes in the external table. For example,
assume a table in the database is defined as follows:
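The definition itself is missing from the notes; the Oracle documentation example this passage follows uses an object type and a roster table, roughly:
CREATE TYPE student_type AS OBJECT (
  first VARCHAR2(15),
  last  VARCHAR2(20)
);
/

CREATE TABLE roster (
  student student_type,
  grade   CHAR(2)
);
And assume the external table roster_data exposes the name parts as plain columns: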
CREATE TABLE roster_data (
  first_name VARCHAR2(15),
  last_name  VARCHAR2(20),
  grade      CHAR(2))
ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER DEFAULT
DIRECTORY ext_tab_dir
ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
LOCATION ('info.dat'));
To load table roster from roster_data, you would specify something similar to the
following:
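A sketch of that INSERT, using the student_type constructor to build the column object:
INSERT INTO roster (student, grade)
  SELECT student_type(first_name, last_name), grade FROM roster_data;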
When data is unloaded into an external table, data conversion occurs if the datatype of
a column in the source table does not match the datatype of the column in the external
table. If a conversion error occurs, then the datafile may not contain all the rows that
were processed up to that point and the datafile will not be readable. To avoid
problems with conversion errors causing the operation to fail, the datatype of the
column in the external table should match the datatype of the column in the database.
This is not always possible, because external tables do not support all datatypes. In
these cases, the unsupported datatypes in the source table must be converted into a
datatype that the external table can support. For example, if a source table has a LONG
column, the corresponding column in the external table must be a CLOB and the
SELECT subquery that is used to populate the external table must use the TO_LOB
operator to load the column. For example:
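The example itself is missing; a sketch along the lines of the Oracle documentation (the table and file names long_tab, long_tab_xt, and long_tab.dmp are assumptions):
CREATE TABLE long_tab_xt (id NUMBER, long_col CLOB)
ORGANIZATION EXTERNAL
  (TYPE ORACLE_DATAPUMP
   DEFAULT DIRECTORY ext_tab_dir
   LOCATION ('long_tab.dmp'))
AS SELECT id, TO_LOB(long_col) long_col FROM long_tab;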
The degree of parallelism of an unload operation is limited by the
number of files specified. If there are more files than the degree of parallelism
specified, then the extra files will not be used.
In addition to unloading data, the ORACLE_DATAPUMP access driver can also load
data. Parallel processes can read multiple dump files or even chunks of the same dump
file concurrently. Thus, data can be loaded in parallel even if there is only one dump
file, as long as that file is large enough to contain multiple file offsets. This is because
when the ORACLE_DATAPUMP access driver unloads data, it periodically
remembers the offset into the dump file of the start of a new data chunk and writes that
information into the file when the unload completes. For nonparallel loads, file offsets
are ignored because only one process at a time can access a file. For parallel loads, file
offsets are distributed among parallel processes for multiple concurrent processing on a
file or within a set of files.
For performance reasons, you can decide to process only the referenced
columns of an external table, thus minimizing the amount of data conversion and data
handling required to execute a query. In this case, a row that is rejected because a
column in the row causes a datatype conversion error will not get rejected in a different
query if the query does not reference that column. You can change this column-
processing behavior with the ALTER TABLE command.
An external table cannot load data into a LONG column.
When identifiers (for example, column or table names) are specified in the
external table access parameters, certain values are considered to be reserved words by
the access parameter parser. If a reserved word is used as an identifier, it must be
enclosed in double quotation marks.
GRANT & REVOKE
You can GRANT and REVOKE privileges on various database objects in Oracle.
We'll first look at how to grant and revoke privileges on tables and then how to
grant and revoke privileges on functions and procedures in Oracle.
Grant Privileges on Table
You can grant users various privileges to tables. These privileges can be any
combination of SELECT, INSERT, UPDATE, DELETE, REFERENCES, ALTER,
INDEX, or ALL.
Syntax
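The syntax block is not reproduced in the notes; in general form:
GRANT privileges ON object TO user;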
Privilege    Description
REFERENCES   Ability to create a constraint that refers to the table.
Example
For example, if you wanted to grant SELECT, INSERT, UPDATE, and DELETE
privileges on a table called suppliers to a user name smithj, you would run the
following GRANT statement:
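Presumably:
GRANT SELECT, INSERT, UPDATE, DELETE ON suppliers TO smithj;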
You can also use the ALL keyword to indicate that you wish ALL permissions to
be granted for a user named smithj. For example:
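Presumably:
GRANT ALL ON suppliers TO smithj;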
If you wanted to grant only SELECT access on your table to all users, you could
grant the privileges to the public keyword. For example:
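Presumably:
GRANT SELECT ON suppliers TO public;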
Once you have granted privileges, you may need to revoke some or all of these
privileges. To do this, you can run a revoke command. You can revoke any
combination of SELECT, INSERT, UPDATE, DELETE, REFERENCES, ALTER,
INDEX, or ALL.
Syntax
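In general form:
REVOKE privileges ON object FROM user;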
Privilege    Description
INSERT       Ability to perform INSERT statements on the table.
REFERENCES   Ability to create a constraint that refers to the table.
Example
If you wanted to revoke ALL privileges on a table for a user named anderson, you
could use the ALL keyword as follows:
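Presumably something like the following (the table name is an assumption, since the original statement is not shown):
REVOKE ALL ON suppliers FROM anderson;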
If you had granted ALL privileges to public (all users) on the suppliers table and
you wanted to revoke these privileges, you could run the following REVOKE
statement:
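Presumably:
REVOKE ALL ON suppliers FROM public;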
When dealing with functions and procedures, you can grant users the ability to
EXECUTE these functions and procedures.
Syntax
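In general form:
GRANT EXECUTE ON object TO user;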
Example
For example, if you had a function called Find_Value and you wanted to grant
EXECUTE access to the user named smithj, you would run the following GRANT
statement:
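Presumably:
GRANT EXECUTE ON Find_Value TO smithj;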
If you wanted to grant ALL users the ability to EXECUTE this function, you would
run the following GRANT statement:
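Presumably:
GRANT EXECUTE ON Find_Value TO public;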
Once you have granted EXECUTE privileges on a function or procedure, you may
need to REVOKE these privileges from a user. To do this, you can execute a
REVOKE command.
Syntax
The syntax for revoking privileges on a function or procedure in Oracle is:
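In general form:
REVOKE EXECUTE ON object FROM user;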
Example
If you had granted EXECUTE privileges to public (all users) on the function called
Find_Value and you wanted to revoke these EXECUTE privileges, you could run
the following REVOKE statement:
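Presumably:
REVOKE EXECUTE ON Find_Value FROM public;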
BULK COLLECT & FORALL
FORALL:-
FORALL runs INSERT, UPDATE & DELETE statements that use a collection to change
multiple rows of data very quickly.
PL/SQL statements are run by the PL/SQL statement executor; SQL statements are run
by the SQL statement executor. When the PL/SQL runtime engine encounters a SQL
statement, it stops and passes the SQL statement to the SQL engine. The SQL engine
executes the SQL statement and returns information back to the PL/SQL engine. This
transfer of control is called "context switching". Each context switch incurs overhead
that slows down the overall performance of your program.
PROCEDURE increase_salary (
   department_id_in IN employees.department_id%TYPE,
   increase_pct_in IN NUMBER)
IS
BEGIN
   FOR employee_rec
      IN (SELECT employee_id
            FROM employees
           WHERE department_id =
                    increase_salary.department_id_in)
LOOP
UPDATE employees emp
SET emp.salary = emp.salary +
emp.salary * increase_salary.increase_pct_in
WHERE emp.employee_id = employee_rec.employee_id;
END LOOP;
END increase_salary;
Suppose there are 100 employees in department 15. When I execute this block,
BEGIN
increase_salary (15, .10);
END;
the PL/SQL engine will “switch” over to the SQL engine 100 times, once for each row
being updated.
Take another look at the increase_salary procedure. The SELECT statement identifies
all the employees in a department. The UPDATE statement executes for each of those
employees, applying the same percentage increase to all. In such a simple scenario, a
cursor FOR loop is not needed at all. I can simplify this procedure to nothing more
than the code in Listing 2.
Code Listing 2: Simplified increase_salary procedure without FOR loop
PROCEDURE increase_salary (
department_id_in IN employees.department_id%TYPE,
increase_pct_in IN NUMBER)
IS
BEGIN
UPDATE employees emp
SET emp.salary =
emp.salary
+ emp.salary * increase_salary.increase_pct_in
WHERE emp.department_id =
increase_salary.department_id_in;
END increase_salary;
The bulk processing features of PL/SQL are designed specifically to reduce the
number of context switches required to communicate from the PL/SQL engine to the
SQL engine.
Use the BULK COLLECT clause to fetch multiple rows into one or more collections
with a single context switch.
Use the FORALL statement when you need to execute the same DML statement
repeatedly for different bind variable values. The UPDATE statement in the
increase_salary procedure fits this scenario; the only thing that changes with each new
execution of the statement is the employee ID.
            l_employee_ids (indx);
      END IF;
   END LOOP;

   FORALL indx IN 1 .. l_eligible_ids.COUNT
      UPDATE employees emp
         SET emp.salary =
               emp.salary
               + emp.salary * increase_salary.increase_pct_in
       WHERE emp.employee_id = l_eligible_ids (indx);
END increase_salary;
DECLARE
   -- record and collection types for the (employee_id, salary) pairs
   TYPE employee_info_rt IS RECORD (
      employee_id employees.employee_id%TYPE,
      salary      employees.salary%TYPE);
   TYPE employee_info_t IS TABLE OF employee_info_rt;
   l_employees employee_info_t;
BEGIN
SELECT employee_id, salary
BULK COLLECT INTO l_employees
FROM employees
WHERE department_id = 10;
END;
If you are fetching lots of rows, the collection that is being filled could consume too
much session memory and raise an error. To help you avoid such errors, Oracle
Database offers a LIMIT clause for BULK COLLECT. Suppose that, for example,
there could be tens of thousands of employees in a single department and my session
does not have enough memory available to store 20,000 employee IDs in a collection.
Instead I use the approach in Listing 6.
Code Listing 6: Fetching up to the number of rows specified
DECLARE
   c_limit PLS_INTEGER := 100;
   -- in the original listing this code lives inside a procedure with a
   -- department_id_in parameter; a local variable stands in for it here
   department_id_in employees.department_id%TYPE := 50;
   TYPE employee_ids_t IS TABLE OF employees.employee_id%TYPE
      INDEX BY PLS_INTEGER;
   CURSOR employees_cur
   IS
      SELECT employee_id
        FROM employees
       WHERE department_id = department_id_in;
   l_employee_ids employee_ids_t;
BEGIN
   OPEN employees_cur;
   LOOP
      FETCH employees_cur
         BULK COLLECT INTO l_employee_ids
         LIMIT c_limit;
      EXIT WHEN l_employee_ids.COUNT = 0;
      -- process this batch of employee Ids here
   END LOOP;
   CLOSE employees_cur;
END;
About FORALL
Whenever you execute a DML statement inside of a loop, you should convert that code
to use FORALL. The performance improvement will amaze you and please your users.
The FORALL statement is not a loop; it is a declarative statement to the PL/SQL
engine: “Generate all the DML statements that would have been executed one row at a
time, and send them all across to the SQL engine with one context switch.”
SQL%BULK_EXCEPTIONS (indx).ERROR_INDEX
|| ': '
|| SQL%BULK_EXCEPTIONS (indx).ERROR_CODE);
END LOOP;
ELSE
RAISE;
END IF;
END increase_salary;
DYNAMIC SQL
Dynamic SQL is a programming method that lets you build and run SQL
statements at run time.
It is useful for ad-hoc queries, or when you do not know the complete SQL
statement until run time.
PL/SQL has two ways to write Dynamic SQL
o Native Dynamic SQL ( EXECUTE IMMEDIATE )
o DBMS_SQL Package
EXECUTE IMMEDIATE is the replacement for the DBMS_SQL package in most cases.
It PARSEs & immediately EXECUTEs the SQL statement.
EXECUTE IMMEDIATE will not COMMIT a DML transaction; an explicit
COMMIT should be done.
Multi-row queries are not supported for returning values; the alternative is to use a
temporary table or a ref cursor to store the records.
Do not use a semicolon at the end when executing a SQL statement. Do use a
semicolon at the end when executing a PL/SQL block.
1. To run a DDL or session-control statement:
begin
execute immediate 'set role all';
end;
2. To pass values to a dynamic statement (USING clause).
declare
l_depnam varchar2(20) := 'testing';
l_loc varchar2(10) := 'Dubai';
begin
execute immediate 'insert into dept values (:1, :2, :3)'
using 50, l_depnam, l_loc;
commit;
end;
3. To retrieve values from a dynamic statement (INTO clause).
declare
l_cnt varchar2(20);
begin
execute immediate 'select count(1) from emp'
into l_cnt;
dbms_output.put_line(l_cnt);
end;
4. To call a routine dynamically: The bind variables used for parameters of the routine
have to be specified along with the parameter type. IN type is the default, others have
to be specified explicitly.
declare
l_routin varchar2(100) := 'gen2161.get_rowcnt';
l_tblnam varchar2(20) := 'emp';
l_cnt number;
l_status varchar2(200);
begin
execute immediate 'begin ' || l_routin || '(:2, :3, :4); end;'
using in l_tblnam, out l_cnt, in out l_status;
end;
5. To return value into a PL/SQL record type: The same option can be used for
%rowtype variables also.
declare
type empdtlrec is record (empno number(4),
ename varchar2(20),
deptno number(2));
empdtl empdtlrec;
begin
execute immediate 'select empno, ename, deptno ' ||
'from emp where empno = 7934'
into empdtl;
end;
6. To pass and retrieve values: The INTO clause should precede the USING clause.
declare
l_dept pls_integer := 20;
l_nam varchar2(20);
l_loc varchar2(20);
begin
execute immediate 'select dname, loc from dept where deptno = :1'
into l_nam, l_loc
using l_dept ;
end;
7. Multi-row query option. Use an INSERT statement to populate a temp table for this
option, then use the temporary table to carry out further processing. Alternatively, you
may use REF cursors to bypass this drawback.
declare
l_sal pls_integer := 2000;
begin
execute immediate 'insert into temp(empno, ename) ' ||
' select empno, ename from emp ' ||
' where sal > :1'
using l_sal;
commit;
end;
FLASH BACK QUERY
https://docs.oracle.com/cd/B13789_01/appdev.101/b10795/adfns_fl.htm
FLASHBACK provides a way to view PAST states of database objects.
We can use FLASHBACK for the following,
o Perform queries that return past data.
o Perform queries that return metadata showing the detailed history of
changes to the database.
o Recover a table or individual rows to a previous point in time.
FLASHBACK uses the Automatic Undo Management (AUM) system to obtain
metadata and historical data for transactions.
The flashback features rely on undo data.
Besides this, Oracle uses the same undo data for the following,
o To roll back active transactions.
o To recover terminated transactions using the database recovery process.
o To provide READ consistency for SQL queries.
Flashback Transaction Query:-
o It retrieves metadata or historical data for a given transaction or for all
transactions within a given time interval.
o You can also obtain the SQL code to UNDO the changes to a particular row
affected by a transaction.
o You can use Flashback Transaction Query together with Flashback Version
Query, which provides the transaction_id.
o To perform a Flashback Transaction Query, you select from the
FLASHBACK_TRANSACTION_QUERY view.
DBMS_FLASHBACK Package:-
o Sets the clock back to a time in the past to examine data as of that time.
Example
This example uses a Flashback Query to examine the state of a table at a previous time.
Suppose, for instance, that a DBA discovers at 12:30 PM that data for employee JOHN
had been deleted from the employee table, and the DBA knows that at 9:30AM the
data for JOHN was correctly stored in the database. The DBA can use a Flashback
Query to examine the contents of the table at 9:30, to find out what data had been lost.
If appropriate, the DBA can then re-insert the lost data in the database.
The following query retrieves the state of the employee record for JOHN at 9:30AM,
April 4, 2003:
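The query itself is missing from the notes; the Oracle documentation example it is based on looks like this:
SELECT * FROM employee AS OF TIMESTAMP
   TO_TIMESTAMP('2003-04-04 09:30:00', 'YYYY-MM-DD HH24:MI:SS')
 WHERE name = 'JOHN';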
This statement then restores John's information to the employee table:
INSERT INTO employee
   (SELECT * FROM employee AS OF TIMESTAMP
      TO_TIMESTAMP('2003-04-04 09:30:00', 'YYYY-MM-DD HH24:MI:SS')
    WHERE name = 'JOHN'
    MINUS SELECT * FROM employee);
You can use a cursor to store the results of queries into the past. To do this, open the
cursor before calling DBMS_FLASHBACK.DISABLE. After storing the results and
then calling DISABLE, you can do the following:
SQL> SELECT ora_rowscn, name, salary FROM employee WHERE empno = 7788;
ORA_ROWSCN NAME SALARY
---------- ---- ------
    202553 Fudd   3000
The latest COMMIT operation for the row took place at approximately SCN 202553.
(You can use function SCN_TO_TIMESTAMP to convert an SCN, like
ORA_ROWSCN, to the corresponding TIMESTAMP value.)
ORA_ROWSCN is in fact a conservative upper bound of the latest commit time: the actual
commit SCN can be somewhat earlier. ORA_ROWSCN is more precise (closer to the actual
commit SCN) for a row-dependent table (created using CREATE TABLE with the
ROWDEPENDENCIES clause).
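The conditional UPDATE that produced the following output is not shown; a sketch based on the Oracle documentation example (the salary change is an assumption):
UPDATE employee SET salary = salary + 100
 WHERE empno = 7788 AND ora_rowscn = 202553;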
0 rows updated.
The conditional update fails in this case, because the ORA_ROWSCN is no longer
202553. This means that some user or another application changed the row and
performed a COMMIT more recently than the recorded ORA_ROWSCN.
Your application queries again to obtain the new row data and ORA_ROWSCN.
Suppose that the ORA_ROWSCN is now 415639. The application tries the conditional
update again, using the new ORA_ROWSCN. This time, the update succeeds, and it is
committed. Here is an interactive equivalent:
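Presumably something like:
SQL> UPDATE employee SET salary = salary + 100
  2  WHERE empno = 7788 AND ora_rowscn = 415639;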
1 row updated.
SQL> COMMIT;
Commit complete.
SQL> SELECT ora_rowscn, name, salary FROM employee WHERE empno = 7788;
Besides using ORA_ROWSCN in an UPDATE statement WHERE clause, you can use
it in a DELETE statement WHERE clause or the AS OF clause of a Flashback Query.
You use a Flashback Version Query to retrieve the different versions of specific rows
that existed during a given time interval. A new row version is created whenever a
COMMIT statement is executed.
You specify a Flashback Version Query using the VERSIONS BETWEEN clauses of
the SELECT statement. Here is the syntax:
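A sketch of the general form, with the column list elided:
SELECT columns FROM table_name
  VERSIONS BETWEEN { SCN | TIMESTAMP } start AND end
 [WHERE condition];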
where start and end are expressions representing the start and end, respectively, of the
time interval to be queried. The interval is closed at both ends: the upper and lower
limits specified (start and end) are both included in the time interval.
The Flashback Version Query returns a table with a row for each version of the row
that existed at any time during the time interval you specify. Each row in the table
includes pseudo columns of metadata about the row version, described in Table 15-1.
This information can reveal when and how a particular change (perhaps erroneous)
occurred to your database.
Table 15-1 Flashback Version Query Row Data Pseudocolumns

Pseudocolumn Name    Description
VERSIONS_STARTSCN    Starting SCN at which the row version was created.
VERSIONS_STARTTIME   Starting TIMESTAMP at which the row version was created.
VERSIONS_ENDSCN      SCN at which the row version expired.
VERSIONS_ENDTIME     TIMESTAMP at which the row version expired.
VERSIONS_XID         Identifier of the transaction that created the row version.
VERSIONS_OPERATION   Operation performed by the transaction: I (insert), D (delete), or U (update).
A given row version is valid starting at its time VERSIONS_START* up to, but not
including, its time VERSIONS_END*. That is, it is valid for any time t such that
VERSIONS_START* <= t < VERSIONS_END*. For example, the following output
indicates that the salary was 10243 from September 9, 2002, included, to November
25, 2003, not included.
Here is a typical Flashback Version Query:
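The query itself is missing; the documentation example looks like:
SELECT versions_startscn, versions_starttime,
       versions_endscn, versions_endtime,
       versions_xid, versions_operation,
       name, salary
  FROM employee
  VERSIONS BETWEEN TIMESTAMP
      TO_TIMESTAMP('2003-07-18 14:00:00', 'YYYY-MM-DD HH24:MI:SS')
  AND TO_TIMESTAMP('2003-07-18 17:00:00', 'YYYY-MM-DD HH24:MI:SS')
 WHERE name = 'JOE';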
A Flashback Transaction Query on the FLASHBACK_TRANSACTION_QUERY view
returns the transaction ID, the operation, the operation start and end SCNs, the user
responsible for the operation, and the SQL code to undo the operation:
In this example, a DBA carries out the following series of actions in SQL*Plus:
connect hr/hr
CREATE TABLE emp
(empno number primary key, empname varchar2(16), salary number);
CREATE TABLE dept (deptno number, deptname varchar2(32));
INSERT INTO emp VALUES (111, 'Mike', 555);
INSERT INTO dept VALUES (10, 'Accounting');
COMMIT;
At this point, emp and dept have one row each. In terms of row versions, each table has
one version of one row. Next, suppose that an erroneous transaction deletes employee
id 111 from table emp:
DELETE FROM emp WHERE empno = 111;
COMMIT;
Subsequently, a new transaction reinserts employee id 111 with a new employee name
into the emp table.
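The statements for this step are not shown; plausibly something like (the new name and salary are assumptions):
INSERT INTO emp VALUES (111, 'Tom', 777);
COMMIT;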
At this point, the DBA detects the application error and needs to diagnose the problem.
The DBA issues the following query to retrieve versions of the rows in the emp table
that correspond to empno 111. The query uses Flashback Version Query
pseudocolumns.
connect dba_name/password
SELECT versions_xid XID, versions_startscn START_SCN,
versions_endscn END_SCN, versions_operation OPERATION,
empname, salary FROM hr.emp
VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE
where empno = 111;
The results table reads chronologically, from bottom to top. The third row corresponds
to the version of the row in emp that was originally inserted in the table when the table
was created. The second row corresponds to the row in emp that was deleted by the
erroneous transaction. The first row corresponds to the version of the row in emp that
was reinserted with a new employee name.
SELECT xid, start_scn, commit_scn,
       operation op, logon_user,
       undo_sql
  FROM flashback_transaction_query
 WHERE xid = HEXTORAW('000200030000002D');
4 rows selected
The rightmost column (undo_sql) contains the SQL code that will undo the
corresponding change operation. The DBA can execute this code to undo the changes
made by that transaction. The LOGON_USER column shows the user responsible
for the transaction.
A DBA might also be interested in knowing all changes made in a certain time
window. In our scenario, the DBA performs the following query to view the details of
all transactions that executed since the erroneous transaction identified earlier
(including the erroneous transaction itself):
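The query itself is missing; judging from the output below, something like (the SCN cutoff is an assumption):
SELECT xid, start_scn, commit_scn, operation op,
       table_name, table_owner
  FROM flashback_transaction_query
 WHERE commit_scn >= 195244;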
0004000700000058 195245 195246 INSERT EMP HR
000200030000002D 195243 195244 DELETE EMP HR
000200030000002D 195243 195244 INSERT DEPT HR
000200030000002D 195243 195244 UPDATE EMP HR
6 rows selected
2.8.2 Flashback Tips - General
For example, assume that the SCN values 1000 and 1005 are mapped to the
times 8:41 and 8:46 AM respectively. A query for a time between 8:41:00 and
8:45:59 AM is mapped to SCN 1000; a Flashback Query for 8:46 AM is mapped
to SCN 1005.
Due to this time-to-SCN mapping, if you specify a time that is slightly after a
DDL operation (such as a table creation) the database might actually use an SCN
that is just before the DDL operation. This can result in error ORA-1466.
You cannot retrieve past data from a V$ view in the data dictionary. Performing
a query on such a view always returns the current data. You can, however,
perform queries on past data in other views of the data dictionary, such as
USER_TABLES.
SQL LOADER
SQL*Loader loads data from external files into database tables.
You can use SQL*Loader to do the following:-
o LOAD data from multiple data files in the same session.
o LOAD data into multiple tables in the same session.
o Selectively load data.
o Manipulate the data before loading it.
o Generate unique sequence key values for specific columns.
SQL*Loader takes its input from a .ctl (control) file.
The control file references one or more data files.
The output of SQL*Loader can be a LOG file, a BAD file, and a DISCARD file.
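A minimal sketch of a control file and the matching command line (the file, table, and column names are assumptions):
LOAD DATA
INFILE 'emp.dat'
INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, sal)

sqlldr scott/tiger CONTROL=emp.ctl LOG=emp.log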
SQL*Loader Parameter:-
In situations where you always use the same parameters for which the values seldom
change, it can be more efficient to specify parameters using the following methods,
rather than on the command line:
Parameters can be grouped together in a parameter file. You could then specify
the name of the parameter file on the command line using the PARFILE
parameter.
Certain parameters can also be specified within the SQL*Loader control file by
using the OPTIONS clause.
PARFILE (default: none) specifies the name of a file that contains commonly used
command-line parameters. For example, the command line could read:
sqlldr PARFILE=example.par
The parameter file example.par could then contain:
USERID=scott/tiger
CONTROL=example.ctl
ERRORS=9999
LOG=example.log
OPTIONS Clause
The following command-line parameters can be specified using the OPTIONS clause.
These parameters are described in greater detail in Chapter 7.
BINDSIZE = n
COLUMNARRAYROWS = n
DIRECT = {TRUE | FALSE}
ERRORS = n
LOAD = n
MULTITHREADING = {TRUE | FALSE}
PARALLEL = {TRUE | FALSE}
READSIZE = n
RESUMABLE = {TRUE | FALSE}
RESUMABLE_NAME = 'text string'
RESUMABLE_TIMEOUT = n
ROWS = n
SILENT = {HEADER | FEEDBACK | ERRORS | DISCARDS | PARTITIONS | ALL}
SKIP = n
SKIP_INDEX_MAINTENANCE = {TRUE | FALSE}
SKIP_UNUSABLE_INDEXES = {TRUE | FALSE}
STREAMSIZE = n
The following is an example use of the OPTIONS clause that you could use in a
SQL*Loader control file:
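An example along the lines of the Oracle documentation:
OPTIONS (BINDSIZE=100000, SILENT=(ERRORS, FEEDBACK))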
The control file is a text file written in a language that SQL*Loader understands. The
control file tells SQL*Loader where to find the data, how to parse and interpret the
data, where to insert the data, and more.
Although not precisely defined, a control file can be said to have three sections.
The first section contains session-wide information, for example global options and
INFILE clauses naming the input data.
The second section consists of one or more INTO TABLE blocks. Each of these
blocks contains information about the table into which the data is to be loaded.
The third section is optional and, if present, contains the input data itself.
SQL*Loader reads data from one or more files (or operating system equivalents
of files) specified in the control file. From SQL*Loader's perspective, the data in the
datafile is organized as records. A particular datafile can be in fixed record format,
variable record format, or stream record format. The record format can be specified in
the control file with the INFILE parameter. If no record format is specified, the default
is stream record format.
LOB data can be lengthy enough that it makes sense to load it from a LOBFILE.
In LOBFILEs, LOB data instances are still considered to be in fields (predetermined
size, delimited, length-value), but these fields are not organized into records (the
concept of a record does not exist within LOBFILEs). Therefore, the processing
overhead of dealing with records is avoided. This type of organization of data is ideal
for LOB loading.
For example, you might use LOBFILEs to load employee names, employee IDs, and
employee resumes. You could read the employee names and IDs from the main
datafiles and you could read the resumes, which can be quite lengthy, from LOBFILEs.
You might also use LOBFILEs to facilitate the loading of XML data. You can use
XML columns to hold data that models structured and semistructured data. Such data
can be quite lengthy.
Secondary datafiles (SDFs) are specified using the SDF parameter. The SDF parameter
can be followed by either the file specification string, or a FILLER field that is mapped
to a data field containing one or more file specification strings.
A LOB is a large object type. This release of SQL*Loader supports loading of four
LOB types:
o BLOB: a LOB containing unstructured binary data.
o CLOB: a LOB containing character data.
o NCLOB: a LOB containing characters from a national character set.
o BFILE: a BLOB stored outside of the database tablespaces in a server-side
operating system file.
View
If you think through scenarios like those above, you will see where a view is useful.
If you have a complex SQL query and you want to use it as a simple query at the
application level, you can use a view: put the complex query in the VIEW, and then
use that view as a simple query at the application level.
A view is nothing more than a stored query. It runs no slower and no faster than the
same query run directly against the base tables.
CREATE [OR REPLACE] VIEW view_name
AS
SELECT ...;
Force VIEW
A view can be created even if the defining query of the view cannot be executed, as
long as the CREATE VIEW command has no syntax errors. We call such a view a
view with errors. For example, if a view refers to a non-existent table or an invalid
column of an existing table, or if the owner of the view does not have the required
privileges, then the view can still be created and entered into the data dictionary. You
can only create a view with errors by using the FORCE option of the CREATE VIEW
command:
CREATE FORCE VIEW view_name
AS
SELECT ...;
When a view is created with errors, Oracle returns a message and leaves the status of
the view as INVALID. If conditions later change so that the query of an invalid view
can be executed, then the view can be recompiled and become valid. Oracle
dynamically recompiles the invalid view if you attempt to use it.
MATERIALIZED VIEW
Note:
The keyword SNAPSHOT is supported in place of MATERIALIZED VIEW for
backward compatibility.
When DML changes are made to master table data, Oracle Database stores rows
describing those changes in the materialized view log and then uses the materialized
view log to refresh materialized views based on the master table. This process is called
incremental or fast refresh. Without a materialized view log, Oracle Database must
re-execute the materialized view query to refresh the materialized view. This process is
called a complete refresh. Usually, a fast refresh takes less time than a complete
refresh.
A materialized view log is located in the master database in the same schema as the
master table. A master table can have only one materialized view log defined on it.
FAST:-
o FAST indicates the incremental refresh method, which performs the refresh
according to the changes that have occurred to the master table.
COMPLETE:-
o Oracle performs a COMPLETE refresh (re-executing the defining query)
even if a FAST refresh is possible.
FORCE:-
o The default refresh method: Oracle performs a FAST refresh if possible,
otherwise a COMPLETE refresh.
When a materialized view uses FAST refresh, Oracle must examine all of the changes
to the master table or master materialized view since the last refresh.
The DBMS_MVIEW package contains three APIs for performing refresh operations:
DBMS_MVIEW.REFRESH
DBMS_MVIEW.REFRESH_ALL_MVIEWS
DBMS_MVIEW.REFRESH_DEPENDENT
create materialized view emp_6_mv
build deferred
refresh complete
on demand
as
select * from emp;  -- defining query assumed; the original was not captured
begin
dbms_mview.refresh('EMP_6_MV');
end;
drop materialized view emp_6_mv;
If you want to drop materialized view log then use below command,
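Presumably (the master table name is an assumption):
DROP MATERIALIZED VIEW LOG ON emp;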
Analytical Function
FIRST_VALUE:-
It returns the first value in an ordered set of values from the analytic window.
For e.g.
select empno, ename,
       first_value(sal) over (order by sal desc) as highest_sal
from emp;
7839 KING 42000
7902 FORD 42000
7788 SCOTT 42000
7566 JONES 42000
7698 BLAKE 42000
7499 ALLEN 42000
7844 TURNER 42000
7521 WARD 42000
7654 MARTIN 42000
7876 ADAMS 42000
7900 JAMES 42000
7369 SMITH 42000
select distinct first_value(sal) over (order by sal desc) as highest_sal
from emp;

HIGHEST_SAL
-----------
      42000
select distinct deptno,
       first_value(sal) over (partition by deptno order by sal desc) as highest_sal
from emp;

DEPTNO HIGHEST_SAL
------ -----------
    10       42000
    20        3000
    30        2850
     -        3000
LAST_VALUE:-
It returns the last value in an ordered set of values from the analytic window.
For e.g.
select distinct LAST_VALUE(sal) over(order by sal desc RANGE BETWEEN
UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as lowest_sal
from emp;
LOWEST_SAL
----------
       800
select distinct deptno,
       last_value(sal) over (partition by deptno order by sal desc
           range between unbounded preceding and unbounded following) as lowest
from emp;

DEPTNO LOWEST
------ ------
    10  32000
    20    800
    30    950
     -   3000
Nth value:-
NTH_VALUE returns the Nth value in an ordered set of values from the analytic window.
select empno, ename, sal, deptno
from emp;
EMPNO ENAME SAL DEPTNO
7934 MILLER 42000 10
7782 CLARK 32000 10
7839 KING 3000 -
7902 FORD 3000 20
7788 SCOTT 3000 20
7566 JONES 2975 20
7698 BLAKE 2850 30
7499 ALLEN 1600 30
7844 TURNER 1500 30
7521 WARD 1250 30
7654 MARTIN 1250 30
7876 ADAMS 1100 20
7900 JAMES 950 30
7369 SMITH 800 20
select distinct nth_value(sal, 2) over (order by sal desc
           range between unbounded preceding and unbounded following) as rk
from emp;

RK
-----
32000
RANK():-
RANK() gives you a ranking within your ordered partition.
Rows with the same ordering value receive the same RANK, and that many
positions are skipped before the next rank is assigned.
Dense_rank():-
DENSE_RANK() gives you a ranking within your ordered partition.
Rows with the same ordering value receive the same rank, but no positions
are skipped.
For details, compare the two in the output of the query below.
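The original output screenshots are missing; a minimal query contrasting the two functions on the emp table:
select empno, ename, sal,
       rank() over (order by sal desc) as rnk,
       dense_rank() over (order by sal desc) as drnk
from emp;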
LEAD():-
It lets you query more than one row in a table at a time without having to join the
table to itself.
It returns the value from the NEXT row.
To return the value from a previous row, use LAG().
2.9 Example
Let's look at an example. If we had an orders table that contained the following data:
ORDER_DATE  PRODUCT_ID  QTY
2007/09/25  1000        20
2007/09/26  2000        15
2007/09/27  1000        8
2007/09/28  2000        12
2007/09/29  2000        2
2007/09/30  1000        4
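The LEAD query that produced the result columns below is missing; based on the surrounding text, something like:
SELECT product_id, order_date,
       LEAD(order_date, 1) OVER (ORDER BY order_date) AS next_order_date
FROM orders;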
PRODUCT_ID  ORDER_DATE  NEXT_ORDER_DATE
Now let's look at a more complex example where we use a query partition clause to
return the next order_date for each product_id.
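Something like:
SELECT product_id, order_date,
       LEAD(order_date, 1) OVER (PARTITION BY product_id
                                 ORDER BY order_date) AS next_order_date
FROM orders;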
PRODUCT_ID  ORDER_DATE  NEXT_ORDER_DATE
LAG() :-
Lets you query more than one row in a table at a time without having to join the
table to itself.
It returns values from a previous row in the table.
To return a value from the next row, try using the LEAD function.
1.1 Example
Let's look at an example. If we had an orders table that contained the following data:
ORDER_DATE  PRODUCT_ID  QTY
2007/09/25  1000        20
2007/09/26  2000        15
2007/09/27  1000        8
2007/09/28  2000        12
2007/09/29  2000        2
2007/09/30  1000        4
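The LAG query itself is missing; based on the explanation that follows, something like:
SELECT order_date, product_id, qty,
       LAG(order_date, 1) OVER (ORDER BY order_date) AS prev_order_date
FROM orders;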
ORDER_DATE  PRODUCT_ID  QTY
In this example, the LAG function will sort in ascending order all of the order_date
values in the orders table and then return the previous order_date since we used an
offset of 1.
If we had used an offset of 2 instead, it would have returned the order_date from 2
orders earlier. If we had used an offset of 3, it would have returned the order_date from
3 orders earlier....and so on.
2.9.2 Using Partitions
Now let's look at a more complex example where we use a query partition clause to
return the previous order_date for each product_id.
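Something like:
SELECT order_date, product_id, qty,
       LAG(order_date, 1) OVER (PARTITION BY product_id
                                ORDER BY order_date) AS prev_order_date
FROM orders;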
LISTAGG():-
LISTAGG orders the data within each group and concatenates the values into a single
delimited string.
INPUT.

EMP_NO  ENAME  SAL     DNO
1       A      15000   10
2       B      17000   10
3       C      22000   20
4       D      24000   30
5       E      30000   30
6       F      35000   20
7       G      50000   10
8       H      122321  40
DESIRED OUTPUT.

DNO  LIST_OF_EMPLOYEE
10   1,2,7
20   3,6
30   4,5
40   8
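A query that produces this output (the table name emp_list is an assumption, since the input table is unnamed in the notes):
select dno,
       listagg(emp_no, ',') within group (order by emp_no) as list_of_employee
from emp_list
group by dno;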
Performance Tuning
Performance Tuning include following topics,
Performance Planning
Instance Tuning
SQL Tuning
SQL Tuning:-
Many application programmers consider SQL just a language for issuing queries to
get data.
When a SQL statement executes, the query optimizer determines the most efficient
execution plan for the query.
The execution plan plays the most important role, since it directly affects
execution time.
You can override the query optimizer's execution plan with a HINT inserted into the
SQL statement.
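For example, a sketch of an index hint (the index name emp_pk is an assumption):
select /*+ INDEX(emp emp_pk) */ empno, ename
from emp
where empno = 7788;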
Tuning involves identifying bottlenecks and fixing them. Removing one bottleneck may
not improve performance immediately, because another bottleneck may then be revealed.
Below are the steps to improve Oracle performance.
1. Perform the initial standard checks:
Get candid feedback from users. Determine the project scope and
performance goals for the future.
Get a full set of operating system, database, and application
statistics from the system when performance is both GOOD and BAD.
Perform a sanity check of the operating systems of all systems involved
in user performance.
2. Check for the Top Ten most common mistakes with Oracle Database:
Bad connection management
Bad use of cursors and the shared pool
Bad SQL
Use of nonstandard initialization parameters
Getting database I/O wrong
Online redo log setup problems
Serialization of data due to lack of free lists, free list groups,
transaction slots, or shortage of rollback segments
Long full table scans
High amounts of recursive SQL
Deployment and migration errors
3. Build a conceptual model of what is happening on the system, using the
statistics gathered.
4. Propose a series of remedial actions and apply them in the order that will
give the most benefit to the application.
5. Validate the changes and see whether the users' perception has improved.
Otherwise, look for other bottlenecks until your understanding of the
application becomes more accurate.
6. Repeat the last three steps until the performance goal is met.
Data collection & analysis are essential for identifying & correcting performance
problems.
Oracle Database provides various tools to monitor performance, diagnose
problems & tune the application.
In Oracle Database, the information gathering & monitoring process is automatic,
managed by Oracle background processes.
To enable statistics collection & the automatic performance features, set
STATISTICS_LEVEL to TYPICAL or ALL.
For easy use, Oracle Enterprise Manager Database Control is recommended.
Automatic Workload Repository ( AWR ):-
o It collects, processes & maintains performance statistics for problem
detection & self-tuning purposes.
o Data is gathered from both memory & the database.
o The gathered data can be displayed in both views & reports.
o The key AWR concepts are,
o Snapshot
o Baseline
o Adaptive Threshold
o Space Consumption
SNAPSHOT:-
o A SNAPSHOT is a set of historical data for a specific time period that is used
for performance comparison by ADDM.
o By default, Oracle Database automatically generates a snapshot every
hour & retains the statistics in the workload repository for 8 days.
o The data in snapshots is analyzed by ADDM.
Baseline:-
o A baseline contains data from a specific time period that is
preserved for comparison with other, similar workload periods when a
performance issue occurs.
o The following types of baselines are available in Oracle Database,
o Fixed Baseline
o Moving Window Baseline
o Baseline Templates
o Fixed Baseline:-
o For this type of baseline, we specify a fixed time period when
creating the baseline.
o We need to be very careful while choosing the time period for the
baseline, since it should represent the system operating at an optimal level.
o In the future, you can compare this baseline with other baselines
or snapshots captured during periods of poor performance.
o Moving Window Baseline:-
o A moving window baseline corresponds to all of the AWR data
that exists within the AWR retention period.
o Therefore, to increase the size of the moving window, you must
first increase the AWR retention period accordingly.
Baseline Template:-
o Baseline templates let you create baselines automatically for a
contiguous or a recurring future time period.
Adaptive Threshold:-
o Adaptive thresholds let you monitor system performance by computing
alert thresholds from statistics in the moving window baseline. There are
two types,
Percentage of Maximum:-
In this case, the threshold value is calculated as a percentage
multiple of the maximum value observed for the data in the
moving window baseline.
Significance Level:-
In this case, the threshold value is set as a percentile.
It flags as unusual any values observed above the
threshold value.
You can specify the following percentiles,
o High (.95):- Only 5 out of 100 are expected to
exceed this value.
o Very High (.99):- Only 1 out of 100 are
expected to exceed this value.
o Severe (.999):- Only 1 out of 1000 are
expected to exceed this value.
o Extreme (.9999):- Only 1 out of 10000 are
expected to exceed this value.
o A Percentage of Maximum threshold is useful when the system is at
peak workload and that is when you want to be alerted.
o A Significance Level threshold should be used when the system is
operating normally but might vary over a wide range when the
system performs poorly.
Space Consumption:-
Not having enough data can affect the validity & accuracy of the following components,
o Automatic Database Diagnostic Monitoring ( ADDM )
o SQL Tuning Adviser.
o Undo Adviser.
o Segment Adviser.
If possible, Oracle recommends that you set the AWR retention period large enough to
capture at least one complete workload cycle.
Under exceptional circumstances, you can turn off automatic snapshot collection
by setting the snapshot interval to zero (0). Under this condition, the automatic
collection of workload and statistical data will stop. In addition, you cannot
manually create snapshots.
This section describes how to manage the Automatic Workload Repository and
contains the following topics:
Managing Snapshots
Managing Baselines
Transporting Automatic Workload Repository Data
Using Automatic Workload Repository Views
Generating Automatic Workload Repository Reports
Generating Active Session History Reports
The primary interface for managing the AWR is Oracle Enterprise Manager. Whenever
possible, you should manage snapshots and baselines using Oracle Enterprise Manager,
as described in Oracle Database 2 Day + Performance Tuning Guide. If Oracle
Enterprise Manager is unavailable, you can manage the AWR snapshots and baselines
using the DBMS_WORKLOAD_REPOSITORY package, as described in this section.
This section contains the following topics:
Creating Snapshots
Dropping Snapshots
Modifying Snapshot Settings
See Also:
Oracle Database PL/SQL Packages and Types Reference for detailed information on
the DBMS_WORKLOAD_REPOSITORY package
5.3.1.1 Creating Snapshots
You can manually create snapshots with the CREATE_SNAPSHOT procedure if you
want to capture statistics at times different than those of the automatically generated
snapshots. For example:
BEGIN
DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();
END;
/
In this example, a snapshot for the instance is created immediately with the flush level
specified to the default flush level of TYPICAL. You can view this snapshot in the
DBA_HIST_SNAPSHOT view.
5.3.1.2 Dropping Snapshots
You can drop a range of snapshots using the DROP_SNAPSHOT_RANGE
procedure. To view a list of the snapshot Ids along with database Ids, check the
DBA_HIST_SNAPSHOT view. For example, you can drop the following range of
snapshots:
BEGIN
DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE (low_snap_id => 22,
high_snap_id => 32, dbid => 3310949047);
END;
/
In the example, the range of snapshot Ids to drop is specified from 22 to 32. The
optional database identifier is 3310949047. If you do not specify a value for dbid, the
local database identifier is used as the default value.
Active Session History data (ASH) that belongs to the time period specified by the
snapshot range is also purged when the DROP_SNAPSHOT_RANGE procedure is
called.
5.3.1.3 Modifying Snapshot Settings
You can adjust the interval, retention, and captured Top SQL of snapshot generation
for a specified database Id, but note that this can affect the precision of the Oracle
diagnostic tools.
The INTERVAL setting affects how often in minutes that snapshots are automatically
generated. The RETENTION setting affects how long in minutes that snapshots are
stored in the workload repository. The TOPNSQL setting affects the number of Top
SQL to flush for each SQL criteria (Elapsed Time, CPU Time, Parse Calls, Shareable
Memory, and Version Count). The value for this setting will not be affected by the
statistics/flush level and will override the system default behavior for the AWR SQL
collection. It is possible to set the value for this setting to MAXIMUM to capture the
complete set of SQL in the cursor cache, though by doing so (or by setting the value to
a very high number) may lead to possible space and performance issues since there
will more data to collect and store. To adjust the settings, use the
MODIFY_SNAPSHOT_SETTINGS procedure. For example:
BEGIN
DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS( retention => 43200,
interval => 30, topnsql => 100, dbid => 3310949047);
END;
/
In this example, the retention period is specified as 43200 minutes (30 days), the
interval between each snapshot is specified as 30 minutes, and the number of Top SQL
to flush for each SQL criteria as 100. If NULL is specified, the existing value is
preserved. The optional database identifier is 3310949047. If you do not specify a
value for dbid, the local database identifier is used as the default value. You can check
the current settings for your database instance with the DBA_HIST_WR_CONTROL
view.
5.3.2 Managing Baselines
This section describes how to manage baselines. For more information about baselines,
see "Baselines".
The primary interface for managing baselines is Oracle Enterprise Manager.
Whenever possible, you should manage baselines using Oracle Enterprise Manager, as
described in Oracle Database 2 Day + Performance Tuning Guide. If Oracle
Enterprise Manager is unavailable, you can manage baselines using the
DBMS_WORKLOAD_REPOSITORY package, as described in the following
sections:
Creating a Baseline
Dropping a Baseline
5.3.2.1 Creating a Baseline
This section describes how to create a baseline using an existing range of snapshots.
To create a baseline:
1. Review the existing snapshots in the DBA_HIST_SNAPSHOT view to
determine the range of snapshots that you want to use.
2. Use the CREATE_BASELINE procedure to create a baseline using the desired
range of snapshots:
BEGIN
DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE (start_snap_id => 270,
end_snap_id => 280, baseline_name => 'peak baseline',
dbid => 3310949047, expiration => 30);
END;
/
In this example, 270 is the start snapshot sequence number and 280 is the end snapshot
sequence. The name of baseline is peak baseline. The optional database identifier is
3310949047. If you do not specify a value for dbid, the local database identifier is
used as the default value. The optional expiration parameter is set to 30, so the
baseline will expire and be dropped automatically after 30 days. If you do not specify a
value for expiration, the baseline will never expire.
The system automatically assigns a unique baseline Id to the new baseline when the
baseline is created. The baseline Id and database identifier are displayed in the
DBA_HIST_BASELINE view.
5.3.2.2 Dropping a Baseline
This section describes how to drop an existing baseline. Periodically, you may want to
drop a baseline that is no longer used to conserve disk space. The snapshots associated
with a baseline are retained indefinitely until you explicitly drop the baseline or the
baseline has expired.
To drop a baseline:
1. Review the existing baselines in the DBA_HIST_BASELINE view to determine
the baseline that you want to drop.
2. Use the DROP_BASELINE procedure to drop the desired baseline:
BEGIN
DBMS_WORKLOAD_REPOSITORY.DROP_BASELINE (baseline_name => 'peak baseline',
cascade => FALSE, dbid => 3310949047);
END;
/
In the example, the name of baseline is peak baseline. The cascade parameter is set to
FALSE, which specifies that only the baseline is dropped. Setting this parameter to
TRUE specifies that the drop operation will also remove the snapshots associated with
the baseline. The optional dbid parameter specifies the database identifier, which in
this example is 3310949047. If you do not specify a value for dbid, the local database
identifier is used as the default value.
5.3.3 Transporting Automatic Workload Repository Data
Oracle Database enables you to transport AWR data between systems. This is useful in
cases where you want to use a separate system to perform analysis of the AWR data.
To transport AWR data, you need to first extract the AWR snapshot data from the
database on the source system, then load the data into the database on the target
system, as described in the following sections:
Extracting AWR Data
Loading AWR Data
5.3.3.1 Extracting AWR Data
The awrextr.sql script extracts the AWR data for a range of snapshots from the
database into a Data Pump export file. Once created, this dump file can be transported
to another system where the extracted data can be loaded. To run the awrextr.sql
script, you need to be connected to the database as the SYS user.
To extract AWR data:
1. At the SQL prompt, enter:
@$ORACLE_HOME/rdbms/admin/awrextr.sql
A list of the databases in the AWR schema is displayed.
2. Specify the database from which the AWR data will be extracted:
Enter value for db_id: 1377863381
In this example, the database with the database identifier of 1377863381 is selected.
3. Specify the number of days for which you want to list snapshot Ids.
Enter value for num_days: 2
A list of existing snapshots for the specified time range is displayed. In this example,
snapshots captured in the last 2 days are displayed.
4. Define the range of snapshots for which AWR data will be extracted by
specifying a beginning and ending snapshot Id:
Enter value for begin_snap: 30
Enter value for end_snap: 40
In this example, the snapshot with a snapshot Id of 30 is selected as the beginning
snapshot, and the snapshot with a snapshot Id of 40 is selected as the ending snapshot.
5. A list of directory objects is displayed.
Specify the directory object pointing to the directory where the export dump file will
be stored:
Enter value for directory_name: DATA_PUMP_DIR
In this example, the directory object DATA_PUMP_DIR is selected.
6. Specify the prefix for the name of the export dump file (the .dmp suffix will be
automatically appended):
Enter value for file_name: awrdata_30_40
In this example, an export dump file named awrdata_30_40 will be created in the
directory corresponding to the directory object you specified:
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
C:\ORACLE\PRODUCT\11.1.0.5\DB_1\RDBMS\LOG\AWRDATA_30_40.DMP
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at 08:58:20
Depending on the amount of AWR data that needs to be extracted, the AWR extract
operation may take a while to complete. Once the dump file is created, you can use
Data Pump to transport the file to another system.
See Also:
Oracle Database Utilities for information about using Data Pump
5.3.3.2 Loading AWR Data
Once the export dump file is transported to the target system, you can load the
extracted AWR data using the awrload.sql script. The awrload.sql script will first
create a staging schema where the snapshot data is transferred from the Data Pump file
into the database. The data is then transferred from the staging schema into the
appropriate AWR tables. To run the awrload.sql script, you need to be connected to
the database as the SYS user.
To load AWR data:
1. At the SQL prompt, enter:
@$ORACLE_HOME/rdbms/admin/awrload.sql
A list of directory objects is displayed.
2. Specify the directory object pointing to the directory where the export dump file
is located:
Enter value for directory_name: DATA_PUMP_DIR
In this example, the directory object DATA_PUMP_DIR is selected.
3. Specify the prefix for the name of the export dump file (the .dmp suffix will be
automatically appended):
Enter value for file_name: awrdata_30_40
In this example, the export dump file named awrdata_30_40 is selected.
4. Specify the name of the staging schema where the AWR data will be loaded:
Enter value for schema_name: AWR_STAGE
In this example, a staging schema named AWR_STAGE will be created where the
AWR data will be loaded.
5. Specify the default tablespace for the staging schema:
Enter value for default_tablespace: SYSAUX
In this example, the SYSAUX tablespace is selected.
6. Specify the temporary tablespace for the staging schema:
Enter value for temporary_tablespace: TEMP
In this example, the TEMP tablespace is selected.
7. A staging schema named AWR_STAGE is created and loaded. After the AWR data
is loaded into the AWR_STAGE schema, the data is transferred into the AWR tables
in the SYS schema:
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Completed 113 CONSTRAINT objects in 11 seconds
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Completed 1 REF_CONSTRAINT objects in 1 seconds
Job "SYS"."SYS_IMPORT_FULL_03" successfully completed at 09:29:30
... Dropping AWR_STAGE user
End of AWR Load
Depending on the amount of AWR data that needs to be loaded, the AWR load
operation may take a while to complete. After the AWR data is loaded, the staging
schema will be dropped automatically.
5.3.4 Using Automatic Workload Repository Views
Typically, you would view the AWR data through Oracle Enterprise Manager or AWR
reports. However, you can also view the statistics with the following views:
V$ACTIVE_SESSION_HISTORY
This view displays active database session activity, sampled once every second. See
"Active Session History (ASH)".
V$ metric views provide metric data to track the performance of the system
The metric views are organized into various groups, such as event, event class, system,
session, service, file, and tablespace metrics. These groups are identified in the
V$METRICGROUP view.
DBA_HIST views
The DBA_HIST views contain historical data stored in the database. This group of
views includes:
o DBA_HIST_ACTIVE_SESS_HISTORY displays the history of the
contents of the in-memory active session history for recent system activity.
o DBA_HIST_BASELINE displays information about the baselines
captured on the system
o DBA_HIST_DATABASE_INSTANCE displays information about the
database environment
o DBA_HIST_SNAPSHOT displays information on snapshots in the
system
o DBA_HIST_SQL_PLAN displays the SQL execution plans
o DBA_HIST_WR_CONTROL displays the settings for controlling AWR
See Also:
Oracle Database Reference for information on dynamic and static data dictionary
views
5.3.5 Generating Automatic Workload Repository Reports
An AWR report shows data captured between two snapshots (or two points in time).
The AWR reports are divided into multiple sections. The HTML report includes links
that can be used to navigate quickly between sections. The content of the report
contains the workload profile of the system for the selected range of snapshots.
The primary interface for generating AWR reports is Oracle Enterprise Manager.
Whenever possible, you should generate AWR reports using Oracle Enterprise
Manager, as described in Oracle Database 2 Day + Performance Tuning Guide. If
Oracle Enterprise Manager is unavailable, you can generate AWR reports by running
SQL scripts:
The awrrpt.sql SQL script generates an HTML or text report that displays
statistics for a range of snapshot Ids.
The awrrpti.sql SQL script generates an HTML or text report that displays
statistics for a range of snapshot Ids on a specified database and instance.
The awrsqrpt.sql SQL script generates an HTML or text report that displays
statistics of a particular SQL statement for a range of snapshot Ids. Run this report to
inspect or debug the performance of a SQL statement.
The awrsqrpi.sql SQL script generates an HTML or text report that displays
statistics of a particular SQL statement for a range of snapshot Ids on a specified
database and instance. Run this report to inspect or debug the performance of a SQL
statement on a specific database and instance.
The awrddrpt.sql SQL script generates an HTML or text report that compares
detailed performance attributes and configuration settings between two selected time
periods.
The awrddrpi.sql SQL script generates an HTML or text report that compares
detailed performance attributes and configuration settings between two selected time
periods on a specific database and instance.
Note:
To run these scripts, you must be granted the DBA role.
If you run a report on a database that does not have any workload activity during the
specified range of snapshots, calculated percentages for some report statistics can be
less than 0 or greater than 100. This result simply means that there is no meaningful
value for the statistic.
5.3.5.1 Running the awrrpt.sql Report
To generate an HTML or text report for a range of snapshot Ids, run the awrrpt.sql
script at the SQL prompt:
@$ORACLE_HOME/rdbms/admin/awrrpt.sql
First, you need to specify whether you want an HTML or a text report.
Enter value for report_type: text
Specify the number of days for which you want to list snapshot Ids.
Enter value for num_days: 2
After the list displays, you are prompted for the beginning and ending snapshot Id for
the workload repository report.
Enter value for begin_snap: 150
Enter value for end_snap: 160
Next, accept the default report name or enter a report name. The default name is
accepted in the following example:
Enter value for report_name:
Using the report name awrrpt_1_150_160
The workload repository report is generated.
5.3.5.2 Running the awrrpti.sql Report
To specify a database and instance before entering a range of snapshot Ids, run the
awrrpti.sql script at the SQL prompt to generate an HTML or text report:
@$ORACLE_HOME/rdbms/admin/awrrpti.sql
First, specify whether you want an HTML or a text report. After that, a list of the
database identifiers and instance numbers displays, similar to the following:
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DB Id Inst Num DB Name Instance Host
----------- -------- ------------ ------------ ------------
3309173529 1 MAIN main dlsun1690
3309173529 1 TINT251 tint251 stint251
Enter the values for the database identifier (dbid) and instance number (inst_num) at
the prompts.
Enter value for dbid: 3309173529
Using 3309173529 for database Id
Enter value for inst_num: 1
Next you are prompted for the number of days and snapshot Ids, similar to the
awrrpt.sql script, before the text report is generated. See "Running the awrrpt.sql
Report".
5.3.5.3 Running the awrsqrpt.sql Report
To generate an HTML or text report for a particular SQL statement, run the
awrsqrpt.sql script at the SQL prompt:
@$ORACLE_HOME/rdbms/admin/awrsqrpt.sql
First, you need to specify whether you want an HTML or a text report.
Enter value for report_type: text
Specify the number of days for which you want to list snapshot Ids.
Enter value for num_days: 1
After the list displays, you are prompted for the beginning and ending snapshot Id for
the workload repository report.
Enter value for begin_snap: 146
Enter value for end_snap: 147
Specify the SQL Id of a particular SQL statement to display statistics.
Enter value for sql_id: 2b064ybzkwf1y
Next, accept the default report name or enter a report name. The default name is
accepted in the following example:
Enter value for report_name:
Using the report name awrsqlrpt_1_146_147.txt
The workload repository report is generated.
5.3.5.4 Running the awrsqrpi.sql Report
To specify a database and instance before entering a particular SQL statement Id, run
the awrsqrpi.sql script at the SQL prompt to generate an HTML or text report:
@$ORACLE_HOME/rdbms/admin/awrsqrpi.sql
First, you need to specify whether you want an HTML or a text report.
Enter value for report_type: text
Next, a list of the database identifiers and instance numbers displays, similar to the
following:
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DB Id Inst Num DB Name Instance Host
----------- -------- ------------ ------------ ------------
3309173529 1 MAIN main dlsun1690
3309173529 1 TINT251 tint251 stint251
Enter the values for the database identifier (dbid) and instance number (inst_num) at
the prompts.
Enter value for dbid: 3309173529
Using 3309173529 for database Id
Enter value for inst_num: 1
Using 1 for instance number
Next you are prompted for the number of days, snapshot Ids, SQL Id and report name,
similar to the awrsqrpt.sql script, before the text report is generated. See "Running the
awrsqrpt.sql Report".
5.3.5.5 Running the awrddrpt.sql Report
To compare detailed performance attributes and configuration settings between two
time periods, run the awrddrpt.sql script at the SQL prompt to generate an HTML or
text report:
@$ORACLE_HOME/rdbms/admin/awrddrpt.sql
First, you need to specify whether you want an HTML or a text report.
Enter value for report_type: text
Specify the number of days for which you want to list snapshot Ids for the first time
period.
Enter value for num_days: 2
After the list displays, you are prompted for the beginning and ending snapshot Id for
the first time period.
Enter value for begin_snap: 102
Enter value for end_snap: 103
Next, specify the number of days for which you want to list snapshot Ids for the second
time period.
Enter value for num_days2: 1
After the list displays, you are prompted for the beginning and ending snapshot Id for
the second time period.
Enter value for begin_snap2: 126
Enter value for end_snap2: 127
Next, accept the default report name or enter a report name. The default name is
accepted in the following example:
Enter value for report_name:
Using the report name awrdiff_1_102_1_126.txt
The workload repository report is generated.
5.3.5.6 Running the awrddrpi.sql Report
To specify a database and instance before selecting time periods to compare, run the
awrddrpi.sql script at the SQL prompt to generate an HTML or text report:
@$ORACLE_HOME/rdbms/admin/awrddrpi.sql
First, you need to specify whether you want an HTML or a text report.
Enter value for report_type: text
Next, a list of the database identifiers and instance numbers displays, similar to the
following:
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DB Id Inst Num DB Name Instance Host
----------- -------- ------------ ------------ ------------
3309173529 1 MAIN main dlsun1690
3309173529 1 TINT251 tint251 stint251
Enter the values for the database identifier (dbid) and instance number (inst_num) for
the first time period at the prompts.
Enter value for dbid: 3309173529
Using 3309173529 for Database Id for the first pair of snapshots
Enter value for inst_num: 1
Using 1 for Instance Number for the first pair of snapshots
Specify the number of days for which you want to list snapshot Ids for the first time
period.
Enter value for num_days: 2
After the list displays, you are prompted for the beginning and ending snapshot Id for
the first time period.
Enter value for begin_snap: 102
Enter value for end_snap: 103
Next, enter the values for the database identifier (dbid) and instance number
(inst_num) for the second time period at the prompts.
Enter value for dbid2: 3309173529
Using 3309173529 for Database Id for the second pair of snapshots
Enter value for inst_num2: 1
Using 1 for Instance Number for the second pair of snapshots
Specify the number of days for which you want to list snapshot Ids for the second time
period.
Enter value for num_days2: 1
After the list displays, you are prompted for the beginning and ending snapshot Id for
the second time period.
Enter value for begin_snap2: 126
Enter value for end_snap2: 127
Next, accept the default report name or enter a report name. The default name is
accepted in the following example:
Enter value for report_name:
Using the report name awrdiff_1_102_1_126.txt
The workload repository report is generated.
5.3.6 Generating Active Session History Reports
Use Active Session History (ASH) reports to perform analysis of:
Transient performance problems that typically last for a few minutes
Scoped or targeted performance analysis by various dimensions or their
combinations, such as time, session, module, action, or SQL_ID (see the sketch below)
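For example, a quick targeted-analysis query against V$ACTIVE_SESSION_HISTORY;
the 10-minute window and the SQL_ID dimension here are illustrative choices:
select sql_id, count(*) as samples
from   v$active_session_history
where  sample_time > sysdate - 10/1440
and    session_type = 'FOREGROUND'
group  by sql_id
order  by samples desc;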
You can view ASH reports using Enterprise Manager or by running the following SQL
scripts:
The ashrpt.sql SQL script generates an HTML or text report that displays ASH
information for a specified duration.
The ashrpti.sql SQL script generates an HTML or text report that displays ASH
information for a specified duration for a specified database and instance.
The reports are divided into multiple sections. The HTML report includes links that
can be used to navigate quickly between sections. The content of the report contains
ASH information used to identify blocker and waiter identities and their associated
transaction identifiers and SQL for a specified duration. For more information on ASH,
see "Active Session History (ASH)".
The primary interface for generating ASH reports is Oracle Enterprise Manager.
Whenever possible, you should generate ASH reports using Oracle Enterprise
Manager, as described in Oracle Database 2 Day + Performance Tuning Guide. If
Oracle Enterprise Manager is unavailable, you can generate ASH reports by running
SQL scripts, as described in the following sections:
Running the ashrpt.sql Report
Running the ashrpti.sql Report
5.3.6.1 Running the ashrpt.sql Report
To generate a text report of ASH information, run the ashrpt.sql script at the SQL
prompt:
@$ORACLE_HOME/rdbms/admin/ashrpt.sql
First, you need to specify whether you want an HTML or a text report.
Enter value for report_type: text
Specify the time frame to collect ASH information by first specifying the begin time in
minutes prior to the system date.
Enter value for begin_time: -10
Next, enter the duration in minutes for which you want to capture ASH information,
measured from the begin time. The default duration of system date minus begin time
is accepted in the following example:
Enter value for duration:
The report in this example will gather ASH information beginning from 10 minutes
before the current time and ending at the current time. Next, accept the default report
name or enter a report name. The default name is accepted in the following example:
Enter value for report_name:
Using the report name ashrpt_1_0310_0131.txt
The session history report is generated.
5.3.6.2 Running the ashrpti.sql Report
If you want to specify a database and instance before setting the time frame to collect
ASH information, run the ashrpti.sql report at the SQL prompt to generate a text
report:
@$ORACLE_HOME/rdbms/admin/ashrpti.sql
First, specify whether you want an HTML or a text report. After that, a list of the
database Ids and instance numbers displays, similar to the following:
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DB Id Inst Num DB Name Instance Host
----------- -------- ------------ ------------ ------------
3309173529 1 MAIN main dlsun1690
3309173529 1 TINT251 tint251 stint251
Enter the values for the database identifier (dbid) and instance number (inst_num) at
the prompts.
Enter value for dbid: 3309173529
Using 3309173529 for database id
Enter value for inst_num: 1
Next you are prompted for the begin time and duration to capture ASH information,
similar to the ashrpt.sql script, before the report is generated. See "Running the
ashrpt.sql Report".
What is AWR?
AWR (Automatic Workload Repository) is an Oracle utility with which a DBA or
other privileged user can create database snapshots.
AWR data, including ASH data, is stored in the SYSAUX tablespace (the AWR
repository).
A DB snapshot is an image of the database state; by default one is taken every
hour and retained for 7 or 8 days, depending on the release.
The DBMS_WORKLOAD_REPOSITORY package is used to CREATE, MODIFY, or
DROP snapshots, baselines, etc. (see the sketch after this list).
A DB baseline is a pair of snapshots captured over a reference time period; its
statistics represent optimal DB performance and are used as a comparison point
when configuring DB settings during performance tuning.
The MMON background process gathers the statistics from the SGA and then
transfers the snapshot data to the AWR.
Oracle features that consume AWR data include:
o UNDO ADVISOR
o SEGMENT ADVISOR
o SQL TUNING ADVISOR
o ADDM
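A minimal sketch of DBMS_WORKLOAD_REPOSITORY usage; the retention/interval
values and snapshot Ids below are examples, not recommendations:
-- take a manual snapshot
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- keep 30 days of snapshots (retention is in minutes), taken every 30 minutes
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    retention => 43200,
    interval  => 30);
END;
/

-- create a baseline from two existing snapshots
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(
    start_snap_id => 150,
    end_snap_id   => 160,
    baseline_name => 'peak_load');
END;
/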
How to read an AWR Report?
Elapsed Time:-
o It is the amount of time a SQL statement spends in execution.
o Note that for a SELECT statement this also includes the time taken to fetch the
query results.
o Oracle cannot know the actual end-to-end response time for a particular SQL
statement, because it cannot measure network latency outside the instance;
hence Oracle reports "Elapsed Time" instead.
o The formula for Elapsed Time is as follows:
elapsed_time = cpu_time + user_io_wait_time
             + application_wait_time + concurrency_wait_time
             + cluster_wait_time + plsql_exec_time + java_exec_time
This query will show the SQL execution elapsed time duration (in hours) for long-
running SQL statements:
select query_runs.*,
       round((end_time - start_time) * 24, 2) as duration_hrs
from  (select u.username,
              ash.program,
              ash.sql_id,
              ash.sql_plan_hash_value as plan_hash_value,
              ash.session_id as sess#,
              ash.session_serial# as sess_ser,
              cast(min(ash.sample_time) as date) as start_time,
              cast(max(ash.sample_time) as date) as end_time
       from   dba_hist_active_sess_history ash, dba_users u
       where  u.user_id = ash.user_id
       and    ash.sql_id = lower(trim('&sql_id'))
       group by u.username,
              ash.program,
              ash.sql_id,
              ash.sql_plan_hash_value,
              ash.session_id,
              ash.session_serial#) query_runs
order by sql_id, start_time;
While STATSPACK and AWR reports can easily show the top SQL that ran with the
longest execution time, you can run a dictionary query to see the SQL with the longest
run times:
select sql_id,
       child_number,
       sql_text,
       elapsed_time
from  (select sql_id,
              child_number,
              sql_text,
              elapsed_time,
              cpu_time,
              disk_reads,
              rank() over (order by elapsed_time desc) as sql_rank
       from   v$sql)
where sql_rank <= 10;
In sum, it is important to note that the SQL elapsed time metric is not the same as the
actual response time for a SQL statement.
sys_time_model.sql
column "Statistic Name" format A40
column "Time (s)" format 999,999
column "Percent of Total DB Time" format 999,999

select e.stat_name "Statistic Name"
     , (e.value - b.value)/1000000 "Time (s)"
     , decode(e.stat_name, 'DB time'
            , to_number(null)
            , 100*(e.value - b.value)
       )
       /
       ( select nvl((e1.value - b1.value), -1)
         from   dba_hist_sys_time_model e1
              , dba_hist_sys_time_model b1
         where  b1.snap_id = b.snap_id
         and    e1.snap_id = e.snap_id
         and    b1.dbid = b.dbid
         and    e1.dbid = e.dbid
         and    b1.instance_number = b.instance_number
         and    e1.instance_number = e.instance_number
         and    b1.stat_name = 'DB time'
         and    b1.stat_id = e1.stat_id
       ) "Percent of Total DB Time"
from   dba_hist_sys_time_model e
     , dba_hist_sys_time_model b
where  b.snap_id = &pBgnSnap
and    e.snap_id = &pEndSnap
-- the remaining join predicates were truncated in the source; these are assumed
and    b.dbid = e.dbid
and    b.instance_number = e.instance_number
and    e.stat_name = b.stat_name
and    e.stat_id = b.stat_id
order  by 2 desc;
DB Time:-
o DB Time is the amount of time spent performing DB user-level calls.
o It does not include time spent by instance background processes such as
PMON.
o The goal when tuning should be to reduce CPU time and wait time so that
more transactions can be processed. This is done by tuning the SQL.
o DB Time = CPU Time + I/O Time + Non-Idle Wait Time
o DB Time is the total time spent by user processes either Actively Working
or Actively Waiting in a DB call.
From this formula we can conclude that database requests are composed of CPU
(service time, performing some work) and wait time (session is waiting for resources).
select to_char(begin_time,'dd.mm.yyyy hh24:mi:ss') begin_time,
       to_char(end_time,'dd.mm.yyyy hh24:mi:ss') end_time,
       intsize_csec interval_size,
       group_id,
       metric_name,
       value
from   v$sysmetric
where  metric_name = 'Database Time Per Sec';
select
maxval,
minval,
average,
standard_deviation
from
v$sysmetric_summary
where
metric_name = 'Database Time Per Sec';
select count(*) DB_TIME
from   v$active_session_history
where  session_type = 'FOREGROUND'
and    sample_time between to_date('30032016 10:00:00','ddmmyyyy hh24:mi:ss')
                       and to_date('30032016 10:30:00','ddmmyyyy hh24:mi:ss');
(Each ASH sample represents roughly one second of active session time, so the
count approximates DB time in seconds for the interval.)
You can see the current value of DB time for the entire system by querying the
V$SYS_TIME_MODEL view, or for a given session by using the
V$SESS_TIME_MODEL view, as seen here:
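A sketch of such a query; the column alias is chosen only to match the output below:
select value as "DB time"
from   v$sys_time_model
where  stat_name = 'DB time';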
DB time
----------
109797
Query Optimizer:-
It is built-in software that determines the most efficient way to execute a SQL
statement. It covers the following topics:
Optimizer Operation
Components of the Query Optimizer
Bind Variable Peeking
Optimizer Operation:-
The database can execute a SQL query in many ways, such as a Full Table Scan,
Index Scan, Nested Loop, or Hash Join. The optimizer considers many factors
related to the objects and the conditions in the query when determining the
execution plan. This is an important step, since it affects execution time.
When a user submits a query for execution, the optimizer performs the following
steps:
Query Transformation
Estimation
Plan Generation
It then estimates the COST of each candidate plan based on statistics about the
data, compares the plans, and chooses the plan with the lowest cost. The chosen
plan can be inspected as shown below.
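A minimal sketch of inspecting the plan the optimizer chose; the employees table
and predicate are placeholders:
EXPLAIN PLAN FOR
SELECT * FROM employees WHERE department_id = 10;  -- placeholder query

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);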
Query Transformation:-
The optimizer first rewrites the submitted query into a semantically equivalent
form that may be cheaper to execute, for example by merging views or unnesting
subqueries.
DBMS_PROFILER
Table 73-1 Columns in Table PLSQL_PROFILER_RUNS

Column           Datatype        Definition
---------------  --------------  -----------------------------------------
runid            NUMBER          Primary key; unique run identifier from
                                 plsql_profiler_runnumber
related_run      NUMBER          Runid of related run (for client/server
                                 correlation)
run_owner        VARCHAR2(32)    User who started run
run_date         DATE            Start time of run
run_comment      VARCHAR2(2047)  User-provided comment for this run
run_total_time   NUMBER          Elapsed time for this run in nanoseconds
run_system_info  VARCHAR2(2047)  Currently unused
run_comment1     VARCHAR2(2047)  Additional comment
spare1           VARCHAR2(256)   Unused
Table 73-3 Columns in Table PLSQL_PROFILER_DATA

Column       Datatype  Definition
-----------  --------  --------------------------------------------------
runid        NUMBER    Primary key; unique (generated) run identifier
unit_number  NUMBER    Primary key; internally generated library unit
                       number
line#        NUMBER    Primary key; not null; line number in unit
total_occur  NUMBER    Number of times line was executed
total_time   NUMBER    Total time spent executing line in nanoseconds
min_time     NUMBER    Minimum execution time for this line in nanoseconds
max_time     NUMBER    Maximum execution time for this line in nanoseconds
spare1       NUMBER    Unused
spare2       NUMBER    Unused
spare3       NUMBER    Unused
spare4       NUMBER    Unused
Using dbms_profiler
dbms_profiler.start_profiler
dbms_profiler.flush_data
dbms_profiler.stop_profiler
The basic idea behind profiling with dbms_profiler is for the developer to understand
where their code is spending the most time, so they can detect and optimize it. The
profiling utility allows Oracle to collect data in memory structures and then dumps it
into tables as application code is executed. dbms_profiler is to PL/SQL what tkprof
and Explain Plan are to SQL.
Once you have run the profiler, Oracle will place the results inside the dbms_profiler
tables.
The dbms_profiler procedures are not a part of the base installation of Oracle. Two
tables need to be installed along with the Oracle supplied PL/SQL package. In the
$ORACLE_HOME/rdbms/admin directory, two files exist that create the environment
needed for the profiler to execute.
· proftab.sql - Creates three tables and a sequence and must be executed before the
profload.sql file.
· profload.sql - Creates the package header and package body for
DBMS_PROFILER. This script must be executed as the SYS user.
The profiler does not begin capturing performance information until the call to
start_profiler is executed.
The flush command enables the developer to dump statistics during program execution
without stopping the profiling utility. The only other time Oracle saves data to the
underlying tables is when the profiling session is stopped, as shown below:
SQL> exec dbms_profiler.flush_data();
Stopping a profiler execution using the Oracle dbms_profiler package is done after an
adequate period of time of gathering performance benchmarks - determined by the
developer. Once the developer stops the profiler, all the remaining (unflushed) data is
loaded into the profiler tables.
Oracle dbms_profiler package also provides procedures that suspend and resume
profiling (pause_profiler(), resume_profiler()).
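Putting it together, a minimal profiling session might look like this; my_proc is a
placeholder for the code being profiled:
DECLARE
  l_status BINARY_INTEGER;
BEGIN
  l_status := dbms_profiler.start_profiler(run_comment => 'tuning my_proc');
  my_proc;  -- placeholder: the PL/SQL being profiled
  l_status := dbms_profiler.stop_profiler;  -- also flushes remaining data
END;
/

-- inspect the runs captured so far
SELECT runid, run_comment, run_total_time
FROM   plsql_profiler_runs
ORDER  BY runid;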
Sample rows from plsql_profiler_data (columns: runid, unit_number, line#,
total_occur, total_time, min_time, max_time):
1 1 140 0 0 0 0
1 1 141 0 0 0 0
1 1 143 0 0 0 0
1 1 146 1 2905397 2905397 2905397
1 1 152 2 1622552 574374 1048177
1 1 153 0 0 0 0
1 1 157 1 204495 204495 204495
1 1 160 0 0 0 0
The performance information for a line in a unit needs to be tied back to the line source
in user_source. Once that join is made, the developer will have all of the information
that they need to optimize, enhance, and tune their application code, as well as the
SQL.
To extract high-level data, including the length of a particular run, a script along
the lines of profiler_runs.sql below can be executed (its SELECT list was truncated
in the source and is reconstructed here as a sketch, assuming the standard
proftab.sql tables):
select a.runid,
       b.unit_number,
       b.unit_name as object_name,
       b.unit_type as type,
       round(b.total_time/1000000000, 2) as sec,
       round(100 * b.total_time / greatest(a.run_total_time, 1), 1) as pct
from   plsql_profiler_runs a, plsql_profiler_units b
where  a.runid = b.runid
order  by a.runid asc;
RUNID UNIT_NUMBER OBJECT_NAME TYPE SEC PCT
----- ----------- -------------------- --------------- --------- ------
1 1 <anonymous> .00 .0
1 2 <anonymous> 1.01 .0
1 3 BMC$PKKPKG PACKAGE BODY 6921.55 18.2
1 4 <anonymous> .02 .0
2 1 <anonymous> .00 .0
2 2 <anonymous> .01 .0
Note that anonymous PL/SQL blocks are also included in the profiler tables.
Anonymous blocks are less useful from a tuning perspective since they cannot be tied
back to a source object in user_source. Anonymous PL/SQL blocks are simply
runtime source objects and do not have a corresponding dictionary object (package,
procedure, function). For this reason, the anonymous blocks should be eliminated
from most reports.
From the data displayed above, the next step is to focus on the lines within the package
body, testproc, that are taking the longest. The script (profiler_top10_lines.sql) below
displays the line numbers and their performance benchmarks of the top 10 worst
performing lines of code.
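The script itself was truncated in the source; a sketch that produces output of this
shape (line number, execution count, and total/min/max times in milliseconds) from
the standard profiler tables:
select *
from  (select line#,
              total_occur,
              total_time/1000000 as total_ms,
              min_time/1000000   as min_ms,
              max_time/1000000   as max_ms
       from   plsql_profiler_data
       where  runid = &runid
       order  by total_time desc)
where rownum <= 10;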
2113 1 282.717 282.717 282.717
89 1 138.565 138.565 138.565
2002 1 112.863 112.863 112.863
1233 1 94.984 94.984 94.984
61 1 94.984 94.984 94.984
866 1 94.984 94.984 94.984
481 1 92.749 92.749 92.749
990 1 90.514 90.514 90.514
10 rows selected.
Taking it one step further, the query below (profiler_line_source.sql) extracts the
actual source code for the top 10 worst performing lines (the script was truncated
in the source; the join back to user_source is a reconstruction):
select a.line#,
       decode(a.total_occur, null, 0,
                             0, 0,
              a.total_time/a.total_occur/1000) as avg,
       s.text
from   plsql_profiler_data a,
       user_source s
where  a.runid = &runid
and    s.name  = '&unit_name'   -- unit being profiled (placeholder)
and    s.line  = a.line#
order  by 2 desc;
61 94.984 update_stats_table(33, reusable_var, null);
866 94.984 latest_executions := reusable_var - total_executions;
481 92.749 time_number := hours + round(minutes * 100/60/100,2);
990 90.514 update_stats_table(45, LOBS, null);
10 rows selected.
Notice from the output above that most of the information needed to diagnose and fix
PL/SQL performance issues is provided. For lines containing SQL statements, the
tuner can optimize the SQL perhaps by adding optimizer hints, eliminating full table
scans, etc. Consult Chapter 5 for more details on using the tkprof utility to diagnose
SQL issues.
Other useful scripts that are hidden within the Oracle directory structure
($ORACLE_HOME/PLSQL/DEMO) include a few gems that help report and analyze
profiler information.
· profsum.sql - A collection of useful SQL scripts that are executed against profiler
tables.
· profrep.sql - Creates views and a package (unwrapped) that populates the views
based on the three underlying profiler tables.
· Wrap only for production - Wrapping code is desired for production
environments but not for profiling. It is much easier to see the unencrypted form of the
text in our reports than it is to connect line numbers to source versions. Use
dbms_profiler before you wrap your code in a test environment, wrap it, and then put it
in production.
· Eliminate system packages most of the time - Knowing the performance data for
internal Oracle processing does not buy you much since you cannot change anything.
However, knowing the performance problem is within the system packages will save
you some time of trying to tune your own code when the problem is elsewhere.
· Lines of code that are frequently executed - For example, a loop that executes
5000 times is a great candidate for tuning. Guru Oracle tuners typically look for that
"low hanging fruit" in which one line or a group of lines of code are executed much
more than others. The benefits of tuning one line of code that is executed often far
outweigh tuning those lines that may cost more yet are executed infrequently in
comparison.
· Lines of code with a high value for average time executed - The minimum and
maximum values of execution time are interesting although not as useful as the
average execution time. Min and max only tell us how much the execution time varies
depending on database activity. Line by line, a PL/SQL developer should focus on
those lines that cost the most on an average execution basis. dbms_profiler does not
provide the average, but it does provide enough data to compute it (Total
Execution Time / # Times Executed); see the sketch after this list.
· Lines of code that contain SQL syntax - The main resource consumers are those
lines that execute SQL. Once the data is sorted by average execution time, the
statements that are the worst usually contain SQL. Optimize and tune the SQL through
utilities, such as Explain Plan, tkprof, and third party software.
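As noted above, the per-line average can be computed from the profiler data; a
minimal sketch (runid is a substitution variable, times converted from nanoseconds
to milliseconds):
select line#,
       total_occur,
       round(total_time / greatest(total_occur, 1) / 1000000, 3) as avg_ms
from   plsql_profiler_data
where  runid = &runid
order  by avg_ms desc;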
DBLINK
(The other database does not have to be an Oracle Database system; however, if you
intend to access non-Oracle systems you'll need to use Oracle Heterogeneous Services.)
Example Syntax:
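A minimal fixed-user link matching the usage note below; the password and service
name are placeholders:
CREATE DATABASE LINK test
CONNECT TO jim IDENTIFIED BY jim_password
USING 'local_db';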
Example Usage:
In the example above, user jim on the remote database defines a fixed-user database
link named test to the jim schema on the local database.
General Information

Related Data Dictionary Objects:
link$
all_db_links
dba_db_links
user_db_links
dbms_dblink
dbms_dblink_lib
gv_$dblink
gv_$session_connect_info
ku$_dblink_t
ku$_dblink_view
ku$_10_1_dblink_view
ora_kglr7_db_links
repcat$_repprop_dblink_how
wmp_api_dblink
wmp_db_links_v
conn / as sysdba
set linesize 121
col name format a30
col value format a30
SELECT *
FROM props$
WHERE name LIKE '%GLOBAL%';
Notes:
The single quotes around the service name are mandatory
The service name must be in the TNSNAMES.ORA file on the server
Create Database Link

Connected User Link:
CREATE [SHARED] [PUBLIC] DATABASE LINK <link_name>
USING '<service_name>';
-- create tnsnames entry for conn_link
conn_link =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = perrito2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = orabase)
    )
  )
conn uwclass/uwclass
desc user_db_links
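A connected-user link like the following would support the query below (a sketch;
the creation statement is missing from the source):
CREATE DATABASE LINK conn_user
USING 'conn_link';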
SELECT table_name, tablespace_name FROM
user_tables@conn_user;
Current User Link:
CREATE [PUBLIC] DATABASE LINK <link_name>
CONNECT TO CURRENT_USER
USING '<service_name>';

CREATE DATABASE LINK curr_user
CONNECT TO CURRENT_USER
USING 'conn_link';
desc user_db_links
desc gv$session_connect_info
set linesize 121
set pagesize 60
col authentication_type format a10
col osuser format a25
col network_service_banner format a50 word wrap
conn scott/tiger
conn sh/sh
conn uwclass/uwclass
• Prepared artifacts on Oracle Database 11gR2 Automatic SQL Tuning, SQL/PLSQL
Function Result Cache, and Oracle Automatic Parallelism
• Prepared standard process documentation for executing 3rd-party and performance
tuning projects
• Coded a PLSQL package for performing DDL activities such as disabling constraints,
materialized view refreshes, statistics gathering, and truncating tables from functional
user accounts used by scheduling/ETL tools