
1.6 PPT - Query Optimization

The document discusses query optimization. It covers:
1. Alternative ways of evaluating a query and the need to estimate costs to choose the most efficient plan; statistical information about relations is required for cost estimation.
2. Equivalence rules that transform relational expressions into logically equivalent ones with different evaluation plans, including rules for selections, projections, joins, and their interactions.
3. A dynamic programming approach that efficiently considers all possible query plans by reusing computed costs for subsets of relations; both join order and interesting sort orders are considered during optimization.


DEPARTMENT OF COMPUTER ENGINEERING
SEMESTER : 5th

SUBJECT : ADVANCED DATA MANAGEMENT TECHNOLOGY

CHAPTER : QUERY PROCESSING AND OPTIMIZATION

TOPIC : QUERY OPTIMIZATION

PRESENTED BY: PROF. PRIYANKA DESHMANE


1.6 Query Optimization

Introduction
Catalog Information for Cost Estimation
Estimation of Statistics
Dynamic Programming for Choosing Evaluation Plans
Introduction
Alternative ways of evaluating a given query
 Equivalent expressions
 Different algorithms for each operation (Chapter 13)
Cost difference between a good and a bad way of evaluating a
query can be enormous
 Example: performing r × s followed by a selection r.A = s.B is much slower than performing a join on the same condition
Need to estimate the cost of operations
 Depends critically on statistical information about relations which the
database must maintain
E.g. number of tuples, number of distinct values for join
attributes, etc.
 Need to estimate statistics for intermediate results to compute cost
of complex expressions
Introduction (Cont.)
Relations generated by two equivalent expressions have the
same set of attributes and contain the same set of tuples,
although their attributes may be ordered differently.
Introduction (Cont.)

Generation of query-evaluation plans for an expression involves several steps:
1. Generating logically equivalent expressions
   Use equivalence rules to transform an expression into an equivalent one.
2. Annotating resultant expressions to get alternative query plans
3. Choosing the cheapest plan based on estimated cost
The overall process is called cost-based optimization.
Overview of chapter
Statistical information for cost estimation
Equivalence rules
Cost-based optimization algorithm
Optimizing nested subqueries
Materialized views and view maintenance
Statistical Information for Cost Estimation
nr: number of tuples in a relation r.
br: number of blocks containing tuples of r.
sr: size of a tuple of r.
fr: blocking factor of r — i.e., the number of tuples of r that
fit into one block.
V(A, r): number of distinct values that appear in r for attribute A; same as the size of ∏A(r).
SC(A, r): selection cardinality of attribute A of relation r;
average number of records that satisfy equality on A.
If tuples of r are stored together physically in a file, then:
br = ⌈nr / fr⌉
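To make these estimates concrete, here is a minimal Python sketch using invented catalog values; the block size, tuple counts and V(A, r) below are assumptions for illustration, not values from the slides.

import math

# Hypothetical catalog statistics for a relation r (values invented for the example)
n_r = 10000        # n_r: number of tuples in r
s_r = 100          # s_r: size of a tuple of r, in bytes
block_size = 4000  # assumed size of a disk block, in bytes

f_r = block_size // s_r        # f_r: blocking factor, tuples of r per block
b_r = math.ceil(n_r / f_r)     # b_r = ceil(n_r / f_r) when tuples are stored together

V_A_r = 50                     # V(A, r): assumed number of distinct values of attribute A in r
SC_A_r = n_r / V_A_r           # SC(A, r): average number of tuples satisfying an equality on A

print(f_r, b_r, SC_A_r)        # 40 tuples per block, 250 blocks, 200 matching tuples on average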
Catalog Information about Indices

fi: average fan-out of internal nodes of index i, for tree-structured indices such as B+-trees.
HTi: number of levels in index i — i.e., the height of i.
 For a balanced tree index (such as a B+-tree) on attribute A of relation r, HTi = ⌈logfi(V(A, r))⌉ (a numeric example follows below).
 For a hash index, HTi is 1.
 LBi: number of lowest-level index blocks in i — i.e., the number of blocks at the leaf level of the index.
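For illustration, a rough Python estimate of HTi for a B+-tree index; the fan-out and V(A, r) used here are invented assumptions.

import math

f_i = 20        # assumed average fan-out of the internal nodes of a B+-tree index i
V_A_r = 10000   # assumed V(A, r): distinct values of the indexed attribute A

HT_i = math.ceil(math.log(V_A_r, f_i))   # height of a balanced tree index on A
print(HT_i)     # 4; for a hash index, HT_i would simply be taken as 1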
Measures of Query Cost
Recall that
 Typically disk access is the predominant cost, and is also
relatively easy to estimate.
 The number of block transfers from disk is used
as a measure of the actual cost of evaluation.
 It is assumed that all transfers of blocks have the same
cost.
Real-life optimizers do not make this assumption, and distinguish between sequential and random disk access
We do not include the cost of writing output to disk.
We refer to the cost estimate of algorithm A as EA
Objective :
Relational algebra is a widely used procedural query language. It takes instances of relations as input and yields instances of relations as output.
It uses various operations to perform this action. Relational algebra operations are applied recursively to relations.

Transformation of Relational Expressions

Two relational algebra expressions are said to be equivalent if on every legal database instance the two expressions generate the same set of tuples
 Note: order of tuples is irrelevant
In SQL, inputs and outputs are multisets of tuples
 Two expressions in the multiset version of the relational algebra are said to be equivalent if on every legal database instance the two expressions generate the same multiset of tuples
An equivalence rule says that expressions of two forms are equivalent
 Can replace an expression of the first form by the second, or vice versa

Equivalence Rules

1. Conjunctive selection operations can be deconstructed into a sequence of individual selections (a small sketch below illustrates rules 1 and 2):
   σθ1∧θ2(E) = σθ1(σθ2(E))
2. Selection operations are commutative:
   σθ1(σθ2(E)) = σθ2(σθ1(E))
3. Only the last in a sequence of projection operations is needed, the others can be omitted:
   ∏L1(∏L2(…(∏Ln(E))…)) = ∏L1(E)
4. Selections can be combined with Cartesian products and theta joins:
   (a) σθ(E1 × E2) = E1 ⋈θ E2
   (b) σθ1(E1 ⋈θ2 E2) = E1 ⋈θ1∧θ2 E2

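As a toy illustration of rules 1 and 2, the following Python sketch checks, on a small invented relation, that a conjunctive selection equals a cascade of selections applied in either order.

# Relation as a list of tuples (branch_name, balance) -- sample data invented for the example
account = [("Brooklyn", 1200), ("Queens", 800), ("Brooklyn", 500)]

def select(pred, rel):                      # plays the role of sigma_pred(rel)
    return [t for t in rel if pred(t)]

theta1 = lambda t: t[0] == "Brooklyn"       # branch_name = "Brooklyn"
theta2 = lambda t: t[1] > 1000              # balance > 1000

combined = select(lambda t: theta1(t) and theta2(t), account)   # sigma_(theta1 AND theta2)
cascaded = select(theta1, select(theta2, account))              # rule 1: cascade of selections
swapped  = select(theta2, select(theta1, account))              # rule 2: selections commute

assert combined == cascaded == swapped      # all three yield [("Brooklyn", 1200)]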
Pictorial Depiction of Equivalence Rules

Equivalence Rules (Cont.)

5. Theta-join operations (and natural joins) are commutative:
   E1 ⋈θ E2 = E2 ⋈θ E1
6. (a) Natural join operations are associative:
   (E1 ⋈ E2) ⋈ E3 = E1 ⋈ (E2 ⋈ E3)
   (b) Theta joins are associative in the following manner:
   (E1 ⋈θ1 E2) ⋈θ2∧θ3 E3 = E1 ⋈θ1∧θ3 (E2 ⋈θ2 E3)
   where θ2 involves attributes from only E2 and E3.
Equivalence Rules (Cont.)
7. The selection operation distributes over the theta-join operation under the following two conditions:
   (a) When all the attributes in θ0 involve only the attributes of one of the expressions (E1) being joined:
   σθ0(E1 ⋈θ E2) = (σθ0(E1)) ⋈θ E2
   (b) When θ1 involves only the attributes of E1 and θ2 involves only the attributes of E2:
   σθ1∧θ2(E1 ⋈θ E2) = (σθ1(E1)) ⋈θ (σθ2(E2))
Equivalence Rules (Cont.)

8. The projection operation distributes over the theta-join operation as follows:
   (a) If θ involves only attributes from L1 ∪ L2:
   ∏L1∪L2(E1 ⋈θ E2) = (∏L1(E1)) ⋈θ (∏L2(E2))
   (b) Consider a join E1 ⋈θ E2.
    Let L1 and L2 be sets of attributes from E1 and E2, respectively.
    Let L3 be attributes of E1 that are involved in join condition θ, but are not in L1 ∪ L2, and
    let L4 be attributes of E2 that are involved in join condition θ, but are not in L1 ∪ L2.
   ∏L1∪L2(E1 ⋈θ E2) = ∏L1∪L2((∏L1∪L3(E1)) ⋈θ (∏L2∪L4(E2)))
Equivalence Rules (Cont.)

9. The set operations union and intersection are commutative:
   E1 ∪ E2 = E2 ∪ E1
   E1 ∩ E2 = E2 ∩ E1
   (Set difference is not commutative.)
10. Set union and intersection are associative:
   (E1 ∪ E2) ∪ E3 = E1 ∪ (E2 ∪ E3)
   (E1 ∩ E2) ∩ E3 = E1 ∩ (E2 ∩ E3)
11. The selection operation distributes over ∪, ∩ and –:
   σθ(E1 – E2) = σθ(E1) – σθ(E2)
   and similarly for ∪ and ∩ in place of –.
   Also: σθ(E1 – E2) = σθ(E1) – E2
   and similarly for ∩ in place of –, but not for ∪.
12. The projection operation distributes over union:
   ∏L(E1 ∪ E2) = (∏L(E1)) ∪ (∏L(E2))
Transformation Example

Query: Find the names of all customers who have an account at some branch located in Brooklyn.
   ∏customer-name(σbranch-city = “Brooklyn”(branch ⋈ (account ⋈ depositor)))
Transformation using rule 7a:
   ∏customer-name((σbranch-city = “Brooklyn”(branch)) ⋈ (account ⋈ depositor))
Performing the selection as early as possible reduces the size of the relation to be joined.

YouTube Link: Cartesian Product

https://www.youtube.com/watch?v=_IUP114x2gM

Example with Multiple Transformations

Query: Find the names of all customers with an account at a Brooklyn branch whose account balance is over $1000.
   ∏customer-name(σbranch-city = “Brooklyn” ∧ balance > 1000(branch ⋈ (account ⋈ depositor)))
Transformation using join associativity (Rule 6a):
   ∏customer-name(σbranch-city = “Brooklyn” ∧ balance > 1000((branch ⋈ account) ⋈ depositor))
The second form provides an opportunity to apply the “perform selections early” rule, resulting in the subexpression
   σbranch-city = “Brooklyn”(branch) ⋈ σbalance > 1000(account)
Thus a sequence of transformations can be useful.
Multiple Transformations (Cont.)

Projection Operation Example

∏customer-name(((σbranch-city = “Brooklyn”(branch)) ⋈ account) ⋈ depositor)
When we compute
   (σbranch-city = “Brooklyn”(branch)) ⋈ account
we obtain a relation whose schema is:
   (branch-name, branch-city, assets, account-number, balance)
Push projections using equivalence rules 8a and 8b; eliminate unneeded attributes from intermediate results to get:
   ∏customer-name((∏account-number((σbranch-city = “Brooklyn”(branch)) ⋈ account)) ⋈ depositor)

Join Ordering Example

For all relations r1, r2, and r3,
   (r1 ⋈ r2) ⋈ r3 = r1 ⋈ (r2 ⋈ r3)
If r2 ⋈ r3 is quite large and r1 ⋈ r2 is small, we choose
   (r1 ⋈ r2) ⋈ r3
so that we compute and store a smaller temporary relation.
Join Ordering Example (Cont.)

Consider the expression
   ∏customer-name((σbranch-city = “Brooklyn”(branch)) ⋈ account ⋈ depositor)
Could compute account ⋈ depositor first, and join the result with σbranch-city = “Brooklyn”(branch), but account ⋈ depositor is likely to be a large relation.
Since it is more likely that only a small fraction of the bank's customers have accounts in branches located in Brooklyn, it is better to compute
   σbranch-city = “Brooklyn”(branch) ⋈ account
first.
1.7 Assignment

1. How are relational expressions transformed? (University, 05 Marks)
2. Explain equivalence rules in detail. (University, 10 Marks)
3. Consider the following database. Update the salary of employees having more than three years of experience.

Evaluation Plan

Objective :
An evaluation plan specifies how a query will actually be executed. It serves as the basis for comparing alternative ways of evaluating the same query and for choosing the one with the lowest estimated cost.


Evaluation Plan

An evaluation plan defines exactly what algorithm is used for each operation, and how the execution of the operations is coordinated.
Choice of Evaluation Plans
Must consider the interaction of evaluation techniques when choosing evaluation plans: choosing the cheapest algorithm for each operation independently may not yield the best overall plan. E.g.
 merge-join may be costlier than hash-join, but may provide a sorted output which reduces the cost of an outer-level aggregation.
 nested-loop join may provide an opportunity for pipelining
Practical query optimizers incorporate elements of the following two broad approaches:
1. Search all the plans and choose the best plan in a cost-based fashion.
2. Use heuristics to choose a plan.
Cost-Based Optimization
Consider finding the best join order for r1 ⋈ r2 ⋈ . . . ⋈ rn.
There are (2(n – 1))!/(n – 1)! different join orders for the above expression. With n = 7, the number is 665280; with n = 10, the number is greater than 17.6 billion!
No need to generate all the join orders. Using dynamic programming, the least-cost join order for any subset of {r1, r2, . . . , rn} is computed only once and stored for future use.
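The join-order count can be checked directly; a short Python computation of (2(n – 1))!/(n – 1)!, included here only as a sanity check:

from math import factorial

def num_join_orders(n):
    # number of distinct join orders of n relations: (2(n-1))! / (n-1)!
    return factorial(2 * (n - 1)) // factorial(n - 1)

print(num_join_orders(7))    # 665280
print(num_join_orders(10))   # 17643225600, i.e. roughly 17.6 billion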
Dynamic Programming in Optimization

To find the best join tree for a set of n relations:
 To find the best plan for a set S of n relations, consider all possible plans of the form S1 ⋈ (S – S1), where S1 is any non-empty subset of S.
 Recursively compute costs for joining subsets of S to find the cost of each plan. Choose the cheapest of the 2^n – 1 alternatives.
 When the plan for any subset is computed, store it and reuse it when it is required again, instead of recomputing it: this is dynamic programming.
Join Order Optimization Algorithm
procedure findbestplan(S)
    if (bestplan[S].cost ≠ ∞)   // bestplan[S] already computed
        return bestplan[S]
    // else bestplan[S] has not been computed earlier, compute it now
    for each non-empty subset S1 of S such that S1 ≠ S
        P1 = findbestplan(S1)
        P2 = findbestplan(S – S1)
        A = best algorithm for joining results of P1 and P2
        cost = P1.cost + P2.cost + cost of A
        if cost < bestplan[S].cost
            bestplan[S].cost = cost
            bestplan[S].plan = "execute P1.plan; execute P2.plan; join results of P1 and P2 using A"
    return bestplan[S]
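The following Python sketch mirrors the procedure above in an iterative, bottom-up form. The relation sizes, the cost model and the result-size estimate are deliberately crude stand-ins invented for illustration, not the cost estimation described earlier in the chapter.

from itertools import combinations

# Hypothetical base-relation cardinalities (made up for the example)
card = {"r1": 1000, "r2": 50, "r3": 2000, "r4": 10}

def join_cost(n1, n2):
    # Crude stand-in for "cost of A": a real optimizer would estimate the cost
    # of the best join algorithm for the two inputs.
    return n1 * n2

# bestplan maps a frozenset of relation names to (cost, plan string, estimated result size)
bestplan = {frozenset([r]): (0, r, card[r]) for r in card}

relations = list(card)
for k in range(2, len(relations) + 1):          # consider subsets in increasing size
    for subset in combinations(relations, k):
        S = frozenset(subset)
        best = None
        for i in range(1, k):                   # every split S = S1 join (S - S1)
            for part in combinations(subset, i):
                S1 = frozenset(part)
                c1, p1, n1 = bestplan[S1]
                c2, p2, n2 = bestplan[S - S1]
                cost = c1 + c2 + join_cost(n1, n2)
                if best is None or cost < best[0]:
                    # result-size estimate is also a crude stand-in (10% join selectivity)
                    best = (cost, f"({p1} JOIN {p2})", max(1, n1 * n2 // 10))
        bestplan[S] = best                      # store the plan for reuse, never recompute it

print(bestplan[frozenset(relations)])           # cheapest plan found for all four relations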
Left Deep Join Trees
In left-deep join trees, the right-hand-side input for each
join is a relation, not the result of an intermediate join.
Cost of Optimization
With dynamic programming, the time complexity of optimization with bushy trees is O(3^n).
 With n = 10, this number is about 59000 instead of more than 17 billion! (See the check below.)
Space complexity is O(2^n)
To find the best left-deep join tree for a set of n relations:
 Consider n alternatives with one relation as right-hand-side input and the other relations as left-hand-side input.
 Using (recursively computed and stored) least-cost join order for each alternative on the left-hand side, choose the cheapest of the n alternatives.
If only left-deep trees are considered, time complexity of finding the best join order is O(n·2^n)
 Space complexity remains at O(2^n)
Cost-based optimization is expensive, but worthwhile for queries on large datasets (typical queries have small n, generally < 10)
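These counts can be checked in a few lines of Python, using n = 10 as in the slide:

n = 10
print(3 ** n)        # 59049: work for dynamic programming over bushy trees, O(3^n)
print(n * 2 ** n)    # 10240: work if only left-deep trees are considered, O(n·2^n)
print(2 ** n)        # 1024: stored plans, one per subset of relations, O(2^n)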
Interesting Orders in Cost-Based Optimization

Consider the expression (r1 ⋈ r2 ⋈ r3) ⋈ r4 ⋈ r5
An interesting sort order is a particular sort order of tuples that could be useful for a later operation.
 Generating the result of r1 ⋈ r2 ⋈ r3 sorted on the attributes common with r4 or r5 may be useful, but generating it sorted on the attributes common to only r1 and r2 is not useful.
 Using merge-join to compute r1 ⋈ r2 ⋈ r3 may be costlier, but may provide an output sorted in an interesting order.
It is not sufficient to find the best join order for each subset of the set of n given relations; we must find the best join order for each subset, for each interesting sort order.
 Simple extension of earlier dynamic programming algorithms
 Usually, the number of interesting orders is quite small and doesn't affect time/space complexity significantly
Heuristic Optimization

Cost-based optimization is expensive, even with dynamic programming.
Systems may use heuristics to reduce the number of choices that must be made in a cost-based fashion.
Heuristic optimization transforms the query tree by using a set of rules that typically (but not in all cases) improve execution performance:
 Perform selection early (reduces the number of tuples)
 Perform projection early (reduces the number of attributes)
 Perform the most restrictive selection and join operations before other similar operations.
Some systems use only heuristics, others combine heuristics with partial cost-based optimization.
Steps in Typical Heuristic Optimization
1. Deconstruct conjunctive selections into a sequence of single
selection operations (Equiv. rule 1.).
2. Move selection operations down the query tree for the earliest possible execution (Equiv. rules 2, 7a, 7b, 11); a sketch of this step follows the list.
3. Execute first those selection and join operations that will
produce the smallest relations (Equiv. rule 6).
4. Replace Cartesian product operations that are followed by a
selection condition by join operations (Equiv. rule 4a).
5. Deconstruct and move as far down the tree as possible lists
of projection attributes, creating new projections where
needed (Equiv. rules 3, 8a, 8b, 12).
6. Identify those subtrees whose operations can be pipelined, and execute them using pipelining.
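A minimal Python sketch of step 2, pushing a selection below a join; the tuple encoding of query trees and the sample schemas are invented for illustration.

# A query tree node is one of:
#   ("rel", name, attrs)            -- base relation with its attribute set
#   ("select", pred_attrs, child)   -- selection, recording which attributes its predicate uses
#   ("join", left, right)           -- natural join

def attrs(node):
    # Set of attributes produced by a subtree.
    kind = node[0]
    if kind == "rel":
        return node[2]
    if kind == "select":
        return attrs(node[2])
    return attrs(node[1]) | attrs(node[2])

def push_selection(node):
    # Heuristic rewrite: if a selection sits on top of a join and its predicate
    # only uses attributes of one input, move it onto that input (rule 7a).
    if node[0] == "select" and node[2][0] == "join":
        pred_attrs, (_, left, right) = node[1], node[2]
        if pred_attrs <= attrs(left):
            return ("join", ("select", pred_attrs, left), right)
        if pred_attrs <= attrs(right):
            return ("join", left, ("select", pred_attrs, right))
    return node

branch = ("rel", "branch", {"branch_name", "branch_city", "assets"})
account = ("rel", "account", {"account_number", "branch_name", "balance"})
tree = ("select", {"branch_city"}, ("join", branch, account))

print(push_selection(tree))   # the selection now sits directly on the branch relation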
Structure of Query Optimizers
The System R/Starburst optimizer considers only left-deep join
orders. This reduces optimization complexity and generates
plans amenable to pipelined evaluation.
System R/Starburst also uses heuristics to push selections and
projections down the query tree.
Heuristic optimization used in some versions of Oracle:
 Repeatedly pick “best” relation to join next
Starting from each of n starting points. Pick best
among these.
For scans using secondary indices, some optimizers take into
account the probability that the page containing the tuple is
in the buffer.
Intricacies of SQL complicate query optimization
 E.g. nested subqueries
Structure of Query Optimizers (Cont.)

Some query optimizers integrate heuristic selection and the generation of alternative access plans.
 System R and Starburst use a hierarchical procedure based on
the nested-block concept of SQL: heuristic rewriting followed
by cost-based join-order optimization.
Even with the use of heuristics, cost-based query optimization
imposes a substantial overhead.
This expense is usually more than offset by savings at query-
execution time, particularly by reducing the number of slow
disk accesses.
Optimizing Nested Subqueries**
SQL conceptually treats nested subqueries in the where clause as
functions that take parameters and return a single value or set of
values
 Parameters are variables from outer level query that are used in the
nested subquery; such variables are called correlation variables
E.g.
select customer-name
from borrower
where exists (select *
from depositor
where depositor.customer-name =
borrower.customer-name)
Conceptually, nested subquery is executed once for each tuple in
the cross-product generated by the outer level from clause
 Such evaluation is called correlated evaluation
 Note: other conditions in where clause may be used to compute a join
(instead of a cross-product) before executing the nested subquery
Optimizing Nested Subqueries (Cont.)
Correlated evaluation may be quite inefficient since
 a large number of calls may be made to the nested query
 there may be unnecessary random I/O as a result
SQL optimizers attempt to transform nested subqueries to joins
where possible, enabling use of efficient join techniques
E.g.: earlier nested query can be rewritten as
select customer-name
from borrower, depositor
where depositor.customer-name = borrower.customer-name
 Note: above query doesn’t correctly deal with duplicates, can be
modified to do so as we will see
In general, it is not possible/straightforward to move the entire nested subquery's from clause into the outer-level query's from clause
 A temporary relation is created instead, and used in body of outer
level query
Optimizing Nested Subqueries (Cont.)
In general, SQL queries of the form below can be rewritten as shown.
Rewrite: select …
         from L1
         where P1 and exists (select *
                              from L2
                              where P2)
To:      create table t1 as
             select distinct V
             from L2
             where P2¹
         select …
         from L1, t1
         where P1 and P2²
 P2¹ contains predicates in P2 that do not involve any correlation variables
 P2² reintroduces predicates involving correlation variables, with relations renamed appropriately
 V contains all attributes used in predicates with correlation variables
Optimizing Nested Subqueries (Cont.)
In our example, the original nested query would be transformed to
create table t1 as
select distinct customer-name
from depositor
select customer-name
from borrower, t1
where t1.customer-name = borrower.customer-name
The process of replacing a nested query by a query with a join
(possibly with a temporary relation) is called decorrelation.
Decorrelation is more complicated when
 the nested subquery uses aggregation, or
 when the result of the nested subquery is used to test for
equality, or
 when the condition linking the nested subquery to the other
query is not exists,
 and so on.
Materialized Views**
A materialized view is a view whose contents are computed
and stored.
Consider the view
create view branch-total-loan(branch-name, total-loan) as
select branch-name, sum(amount)
from loan
group by branch-name
Materializing the above view would be very useful if the total loan
amount is required frequently
 Saves the effort of finding multiple tuples and adding up their
amounts
Materialized View Maintenance
The task of keeping a materialized view up-to-date with the
underlying data is known as materialized view maintenance
Materialized views can be maintained by recomputation on every
update
A better option is to use incremental view maintenance
 Changes to database relations are used to compute changes to
materialized view, which is then updated
View maintenance can be done by
 Manually defining triggers on insert, delete, and update of each
relation in the view definition
 Manually written code to update the view whenever database
relations are updated
 Supported directly by the database
Incremental View Maintenance
The changes (inserts and deletes) to a relation or expression are referred to as its differential
 The sets of tuples inserted into and deleted from r are denoted ir and dr
To simplify our description, we only consider inserts and deletes
 We replace updates to a tuple by deletion of the tuple followed by insertion of the updated tuple
We describe how to compute the change to the result of each
relational operation, given changes to its inputs
We then outline how to handle relational algebra expressions
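To make this concrete, a small Python sketch of incremental maintenance for a view v = r ⋈ s under set semantics; the relations, the join attribute and the differentials below are invented for the example.

# Materialized view v = r JOIN s, joining r.(a, b) with s.(a, c) on attribute a.
def join(r, s):
    return {(a, b, c) for (a, b) in r for (a2, c) in s if a == a2}

r = {(1, "x"), (2, "y")}
s = {(1, "p"), (3, "q")}
v = join(r, s)                       # initial materialization: {(1, 'x', 'p')}

# Differential of r: inserted and deleted tuples.
i_r = {(3, "z")}
d_r = {(1, "x")}

# Incremental maintenance: only the changed tuples of r are joined with s,
# instead of recomputing join(r, s) from scratch.
v = (v - join(d_r, s)) | join(i_r, s)
r = (r - d_r) | i_r

assert v == join(r, s)               # the view stays consistent with its definition
print(v)                             # {(3, 'z', 'q')}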
Query Optimization and Materialized Views
Rewriting queries to use materialized views:
 A materialized view v = r ⋈ s is available
 A user submits a query r ⋈ s ⋈ t
 We can rewrite the query as v ⋈ t
 Whether to do so depends on cost estimates for the two alternatives
Replacing a use of a materialized view by the view definition:
 A materialized view v = r ⋈ s is available, but without any index on it
 A user submits a query σA=10(v).
 Suppose also that s has an index on the common attribute B, and r has an index on attribute A.
 The best plan for this query may be to replace v by r ⋈ s, which can lead to the query plan σA=10(r) ⋈ s
The query optimizer should be extended to consider all the above alternatives and choose the best overall plan
Materialized View Selection
Materialized view selection: “What is the best set of views to
materialize?”.
 This decision must be made on the basis of the system
workload
Indices are just like materialized views; the problem of index selection is closely related to that of materialized view selection, although it is simpler.
Some database systems provide tools to help the database administrator with index and materialized view selection.
1.8 Assignment
University Questions :
1. Write a note on evaluation plans.
2. What is a materialized view? Explain query optimization with materialized views.

Logical Question :
1. Consider an employee database table. Retrieve the records of employees whose age is more than 30. Make an evaluation plan for this query.
1.8 Bibliography
1. Korth, Silberschatz, Sudarshan, "Database System Concepts", 6th Edition, McGraw-Hill.
2. Elmasri and Navathe, "Fundamentals of Database Systems", 6th Edition, Pearson Education.
3. Reema Thareja, "Data Warehousing", Oxford University Press, 2009.
4. Raghu Ramakrishnan and Johannes Gehrke, "Database Management Systems", 3rd Edition, McGraw-Hill.
