DBMS Unit 4 Final

The document discusses transaction states, including active, partially committed, failed, aborted, and committed, and outlines the ACID properties essential for maintaining database integrity. It also covers the importance of concurrency in transaction execution, the concept of schedules, and the significance of conflict serializability. Additionally, the document details query processing steps, optimization techniques, and measures of query cost, particularly focusing on selection operations and sorting methods.

Unit 4: Transactions

Transaction State
Active – the initial state; the transaction stays in this state while it is
executing.
Partially committed – after the final statement has been executed.
Failed – after the discovery that normal execution can no longer proceed.
Aborted – after the transaction has been rolled back and the database
restored to its state prior to the start of the transaction.
Two options after it has been aborted:
restart the transaction – can be done only if there is no internal logical error
kill the transaction
Committed – after successful completion.
Transaction Concept
A transaction is a unit of program execution that accesses and
possibly updates various data items.
E.g. transaction to transfer $50 from account A to account B:
1. read(A)
2. A := A – 50
3. write(A)
4. read(B)
5. B := B + 50
6. write(B)
Two main issues to deal with:
Failures of various kinds, such as hardware failures and
system crashes
Concurrent execution of multiple transactions
Example of Fund Transfer
Transaction to transfer $50 from account A to account B:
1. read(A)
2. A := A – 50
3. write(A)
4. read(B)
5. B := B + 50
6. write(B)
Atomicity requirement
if the transaction fails after step 3 and before step 6, money will be
“lost” leading to an inconsistent database state
 Failure could be due to software or hardware
the system should ensure that updates of a partially executed
transaction are not reflected in the database
Durability requirement — once the user has been notified that the transaction
has completed (i.e., the transfer of the $50 has taken place), the updates to
the database by the transaction must persist even if there are software or
hardware failures.
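To make the atomicity and durability requirements concrete, here is a minimal
Python sketch (not from the slides) using the standard sqlite3 module; the
account table and column names are assumptions for illustration:

import sqlite3

def transfer(conn, from_id, to_id, amount):
    """Transfer `amount` from one account to another atomically.
    Assumes a hypothetical table account(id, balance)."""
    try:
        # both updates run inside one transaction (sqlite3 opens it implicitly)
        conn.execute("UPDATE account SET balance = balance - ? WHERE id = ?",
                     (amount, from_id))
        conn.execute("UPDATE account SET balance = balance + ? WHERE id = ?",
                     (amount, to_id))
        conn.commit()    # durability: once commit returns, the updates persist
    except Exception:
        conn.rollback()  # atomicity: a failure undoes the partial update
        raise

With SQLite's journaling, a crash between the two UPDATEs leaves the database
unchanged after recovery, which is exactly the atomicity requirement above.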
Example of Data Access
buffer
Buffer Block A input(A)
X A
Buffer Block B Y B
output(B)
read(X)
write(Y)

x2
x1
y1

work area work area


of T1 of T2

memory disk
Example of Fund Transfer (Cont.)
Transaction to transfer $50 from account A to account B:
1. read(A)
2. A := A – 50
3. write(A)
4. read(B)
5. B := B + 50
6. write(B)
Consistency requirement in above example:
the sum of A and B is unchanged by the execution of the transaction
In general, consistency requirements include
 Explicitly specified integrity constraints such as primary keys and
foreign keys
 Implicit integrity constraints
– e.g. sum of balances of all accounts, minus sum of loan amounts
must equal value of cash-in-hand
A transaction must see a consistent database.
During transaction execution the database may be temporarily inconsistent.
When the transaction completes successfully the database must be
consistent
 Erroneous transaction logic can lead to inconsistency
Example of Fund Transfer (Cont.)
Isolation requirement — if between steps 3 and 6, another
transaction T2 is allowed to access the partially updated database, it
will see an inconsistent database (the sum A + B will be less than it
should be).
T1                          T2
1. read(A)
2. A := A – 50
3. write(A)
                            read(A), read(B), print(A+B)
4. read(B)
5. B := B + 50
6. write(B)
Isolation can be ensured trivially by running transactions serially
that is, one after the other.
However, executing multiple transactions concurrently has
significant benefits, as we will see later.
ACID Properties
A transaction is a unit of program execution that accesses and possibly
updates various data items. To preserve the integrity of data, the database
system must ensure:
Atomicity. Either all operations of the transaction are properly reflected
in the database or none are.
Consistency. Execution of a transaction in isolation preserves the
consistency of the database.
Isolation. Although multiple transactions may execute concurrently,
each transaction must be unaware of other concurrently executing
transactions. Intermediate transaction results must be hidden from
other concurrently executed transactions.
That is, for every pair of transactions Ti and Tj, it appears to Ti that
either Tj finished execution before Ti started, or Tj started execution
after Ti finished.
Durability. After a transaction completes successfully, the changes it
has made to the database persist, even if there are system failures.
Concurrent Executions
Multiple transactions are allowed to run concurrently in the system.
Advantages are:
increased processor and disk utilization, leading to better
transaction throughput
 E.g. one transaction can be using the CPU while another is
reading from or writing to the disk
reduced average response time for transactions: short
transactions need not wait behind long ones.
Concurrency control schemes – mechanisms to achieve isolation
that is, to control the interaction among the concurrent
transactions in order to prevent them from destroying the
consistency of the database.
Schedules
Schedule – a sequence of instructions that specifies the chronological
order in which instructions of concurrent transactions are executed
a schedule for a set of transactions must consist of all instructions
of those transactions
must preserve the order in which the instructions appear in each
individual transaction.
A transaction that successfully completes its execution will have a
commit instruction as the last statement
by default a transaction is assumed to execute a commit instruction as its
last step
A transaction that fails to successfully complete its execution will have
an abort instruction as the last statement
Schedule 1
Let T1 transfer $50 from A to B, and T2 transfer 10% of the
balance from A to B.
A serial schedule in which T1 is followed by T2 :
Schedule 2
A serial schedule where T2 is followed by T1
Schedule 3
Let T1 and T2 be the transactions defined previously. The
following schedule is not a serial schedule, but it is
equivalent to Schedule 1.

In Schedules 1, 2 and 3, the sum A + B is preserved.


Schedule 4
The following concurrent schedule does not preserve the
value of (A + B ).
Serializability
Basic Assumption – Each transaction preserves database
consistency.
Thus serial execution of a set of transactions preserves
database consistency.
A (possibly concurrent) schedule is serializable if it is
equivalent to a serial schedule. Different forms of schedule
equivalence give rise to the notions of:
1. conflict serializability
2. view serializability
Simplified view of transactions

We ignore operations other than read and write instructions.
We assume that transactions may perform arbitrary
computations on data in local buffers in between reads
and writes.
Our simplified schedules consist of only read and write
instructions.
Conflicting Instructions
Instructions li and lj of transactions Ti and Tj respectively, conflict
if and only if there exists some item Q accessed by both li and lj,
and at least one of these instructions wrote Q.
1. li = read(Q), lj = read(Q). li and lj don’t conflict.
2. li = read(Q), lj = write(Q). They conflict.
3. li = write(Q), lj = read(Q). They conflict.
4. li = write(Q), lj = write(Q). They conflict.
Intuitively, a conflict between li and lj forces a (logical) temporal
order between them.
If li and lj are consecutive in a schedule and they do not
conflict, their results would remain the same even if they had
been interchanged in the schedule.
Conflict Serializability
If a schedule S can be transformed into a schedule S´ by a series of
swaps of non-conflicting instructions, we say that S and S´ are
conflict equivalent.
We say that a schedule S is conflict serializable if it is conflict
equivalent to a serial schedule
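The standard test for conflict serializability builds a precedence graph
(one node per transaction, an edge Ti → Tj whenever an operation of Ti
conflicts with and precedes an operation of Tj) and checks for cycles: the
schedule is conflict serializable iff the graph is acyclic. The slides do
not show an implementation, so the following Python sketch, with an assumed
(txn, op, item) encoding of schedules, is purely illustrative:

from itertools import combinations

def conflict_serializable(schedule):
    """schedule: chronological list of (txn, op, item), op in {'R', 'W'}.
    Builds the precedence graph and returns True iff it is acyclic."""
    graph = {}
    for (t1, op1, x1), (t2, op2, x2) in combinations(schedule, 2):
        if t1 != t2 and x1 == x2 and 'W' in (op1, op2):
            graph.setdefault(t1, set()).add(t2)  # t1's conflicting op precedes t2's

    visiting, done = set(), set()
    def has_cycle(n):
        visiting.add(n)
        for m in graph.get(n, ()):
            if m in visiting or (m not in done and has_cycle(m)):
                return True          # back edge found: cycle
        visiting.discard(n)
        done.add(n)
        return False

    return not any(has_cycle(n) for n in list(graph) if n not in done)

# A Schedule-3-style interleaving of T1 and T2: serializable (only T1 -> T2 edges)
s = [('T1','R','A'), ('T1','W','A'), ('T2','R','A'), ('T2','W','A'),
     ('T1','R','B'), ('T1','W','B'), ('T2','R','B'), ('T2','W','B')]
print(conflict_serializable(s))  # True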
Basic Steps in Query Processing
1. Parsing and translation
2. Optimization
3. Evaluation
Basic Steps in Query Processing:
Parsing and translation
translate the query into its internal form. This is then
translated into relational algebra.
Parser checks syntax, verifies relations
Evaluation
The query-execution engine takes a query-evaluation plan,
executes that plan, and returns the answers to the query.
Basic Steps in Query Processing : Optimization
A relational algebra expression may have many equivalent
expressions
E.g., salary75000(salary(instructor)) is equivalent to
salary(salary75000(instructor))
Each relational algebra operation can be evaluated using one of
several different algorithms
Correspondingly, a relational-algebra expression can be
evaluated in many ways.
Annotated expression specifying detailed evaluation strategy is
called an evaluation-plan.
E.g., can use an index on salary to find instructors with salary <
75000,
or can perform complete relation scan and discard instructors
with salary ≥ 75000
Basic Steps: Optimization (Cont.)
Query Optimization: Amongst all equivalent evaluation plans
choose the one with lowest cost.
Cost is estimated using statistical information from the
database catalog
 e.g. number of tuples in each relation, size of tuples, etc.
In this chapter we study
How to measure query costs
Algorithms for evaluating relational algebra operations
How to combine algorithms for individual operations in
order to evaluate a complete expression
In Chapter 14
We study how to optimize queries, that is, how to find an
evaluation plan with lowest estimated cost
Measures of Query Cost
Cost is generally measured as total elapsed time for answering
query
Many factors contribute to time cost
 disk accesses, CPU, or even network communication
Typically disk access is the predominant cost, and is also
relatively easy to estimate. Measured by taking into account
Number of seeks * average-seek-cost
Number of blocks read * average-block-read-cost
Number of blocks written * average-block-write-cost
 Cost to write a block is greater than cost to read a block
– data is read back after being written to ensure that the
write was successful
Measures of Query Cost (Cont.)

For simplicity we just use the number of block transfers from disk
and the number of seeks as the cost measures
tT – time to transfer one block
tS – time for one seek
Cost for b block transfers plus S seeks
b * tT + S * tS
We ignore CPU costs for simplicity
Real systems do take CPU cost into account
We do not include the cost of writing output to disk in our cost formulae
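As a worked example of the b * tT + S * tS measure (the timing constants
below are assumed, illustrative values, not from the slides):

def query_cost_ms(b, S, tT=0.1, tS=4.0):
    """Estimated query cost in milliseconds: b block transfers plus S seeks,
    with assumed tT = 0.1 ms per transfer and tS = 4 ms per seek."""
    return b * tT + S * tS

# e.g. scanning 1000 blocks sequentially after a single seek:
print(query_cost_ms(b=1000, S=1))  # 104.0 ms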
Measures of Query Cost (Cont.)

Several algorithms can reduce disk IO by using extra buffer space.
Amount of real memory available to buffer depends on other
concurrent queries and OS processes, known only during
execution
 We often use worst case estimates, assuming only the
minimum amount of memory needed for the operation is
available
Required data may be buffer resident already, avoiding disk I/O
But hard to take into account for cost estimation
Selection Operation
File scan
Algorithm A1 (linear search). Scan each file block and test all
records to see whether they satisfy the selection condition.
Cost estimate = br block transfers + 1 seek
 br denotes number of blocks containing records from relation r
If selection is on a key attribute, can stop on finding record
 cost = (br /2) block transfers + 1 seek

Linear search can be applied regardless of
 selection condition or
 ordering of records in the file, or
 availability of indices

Note: binary search generally does not make sense since data is not
stored consecutively
except when there is an index available,
and binary search requires more seeks than index search
Selections Using Indices

Index scan – search algorithms that use an index
 selection condition must be on search-key of index.
A2 (primary index, equality on key). Retrieve a single record
that satisfies the corresponding equality condition
Cost = (hi + 1) * (tT + tS)
A3 (primary index, equality on nonkey) Retrieve multiple
records.
Records will be on consecutive blocks
 Let b = number of blocks containing matching records
Cost = hi * (tT + tS) + tS + tT * b
Selections Using Indices

A4 (secondary index, equality on nonkey).
Retrieve a single record if the search-key is a candidate key
 Cost = (hi + 1) * (tT + tS)
Retrieve multiple records if search-key is not a candidate key
 each of n matching records may be on a different block
 Cost = (hi + n) * (tT + tS)
– Can be very expensive!
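For intuition, plug illustrative numbers into the cost measure above
(assumed values, not from the slides): with index height hi = 3, tT = 0.1 ms
and tS = 4 ms, A2 costs (3 + 1) * (0.1 + 4) = 16.4 ms. A4 retrieving
n = 10,000 scattered matching records costs (3 + 10,000) * (0.1 + 4) ≈ 41
seconds, since each record may need its own seek and transfer, whereas a
linear scan of a 10,000-block file costs only about 10,000 * 0.1 + 4 ≈ 1 second.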
Sorting
We may build an index on the relation, and then use the index
to read the relation in sorted order. May lead to one disk block
access for each tuple.
For relations that fit in memory, techniques like quicksort can
be used. For relations that don’t fit in memory, external
sort-merge is a good choice.
External Sort-Merge
Let M denote memory size (in pages).
1. Create sorted runs. Let i be 0 initially.
Repeatedly do the following till the end of the relation:
(a) Read M blocks of relation into memory
(b) Sort the in-memory blocks
(c) Write sorted data to run Ri; increment i.
Let the final value of i be N
2. Merge the runs (next slide)…..
External Sort-Merge (Cont.)
2. Merge the runs (N-way merge). We assume (for now) that N <
M.
1. Use N blocks of memory to buffer input runs, and 1 block to
buffer output. Read the first block of each run into its buffer
page
2. repeat
1. Select the first record (in sort order) among all buffer
pages
2. Write the record to the output buffer. If the output buffer
is full write it to disk.
3. Delete the record from its input buffer page.
If the buffer page becomes empty then
read the next block (if any) of the run into the buffer.
3. until all input buffer pages are empty.
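As a compact illustration of the two phases above, here is a Python sketch.
It is a toy that keeps "runs" as in-memory lists (a real implementation
streams blocks to and from disk); heapq.merge performs essentially the
pick-smallest-record-among-buffers loop described in step 2:

import heapq

def external_sort(records, M):
    """Toy external sort-merge: records stands in for the relation on disk,
    M for the number of records that fit in memory at once."""
    # 1. Create sorted runs of at most M records each
    runs = [sorted(records[i:i + M]) for i in range(0, len(records), M)]
    # 2. Merge up to M - 1 runs at a time until a single run remains
    while len(runs) > 1:
        runs = [list(heapq.merge(*runs[i:i + M - 1]))
                for i in range(0, len(runs), M - 1)]
    return runs[0] if runs else []

print(external_sort([24, 19, 31, 33, 14, 16, 21, 3, 2, 7], M=3))
# [2, 3, 7, 14, 16, 19, 21, 24, 31, 33]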
External Sort-Merge (Cont.)

If N ≥ M, several merge passes are required.
In each pass, contiguous groups of M – 1 runs are merged.
A pass reduces the number of runs by a factor of M – 1, and
creates runs longer by the same factor.
 E.g. If M=11, and there are 90 runs, one pass reduces
the number of runs to 9, each 10 times the size of the
initial runs
Repeated passes are performed till all runs have been
merged into one.
Example: External Sorting Using Sort-Merge
[Figure: an initial relation (records such as a 19, g 24, b 14, …) is split
into sorted runs in the create-runs step; merge pass 1 combines these into
longer runs, and merge pass 2 produces the final sorted output.]
External Merge Sort (Cont.)
Cost analysis:
1 block per run leads to too many seeks during merge
 Instead use bb buffer blocks per run
➔ read/write bb blocks at a time
 Can merge ⌊M/bb⌋ – 1 runs in one pass
Total number of merge passes required: ⌈log⌊M/bb⌋–1(br/M)⌉
Block transfers for initial run creation as well as in each pass is 2br
 for final pass, we don’t count write cost
– we ignore final write cost for all operations since the output
of an operation may be sent to the parent operation without
being written to disk
 Thus total number of block transfers for external sorting:
br (2 ⌈log⌊M/bb⌋–1(br/M)⌉ + 1)
Seeks: next slide


External Merge Sort (Cont.)
Cost of seeks
During run generation: one seek to read each run and one
seek to write each run
 2 ⌈br / M⌉
During the merge phase
 Need 2 ⌈br / bb⌉ seeks for each merge pass
– except the final one which does not require a write
 Total number of seeks:
2 ⌈br / M⌉ + ⌈br / bb⌉ (2 ⌈log⌊M/bb⌋–1(br/M)⌉ – 1)
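Putting the transfer and seek formulas together in a sketch (illustrative
numbers only; br, M, bb as defined above):

import math

def sort_merge_cost(br, M, bb):
    """Block transfers and seeks for external sort-merge of br blocks,
    given M memory blocks and bb buffer blocks per run.
    Sketch; assumes br > M, i.e. at least one merge pass."""
    passes = math.ceil(math.log(br / M, M // bb - 1))   # merge passes
    transfers = br * (2 * passes + 1)                   # final write not counted
    seeks = 2 * math.ceil(br / M) + math.ceil(br / bb) * (2 * passes - 1)
    return transfers, seeks

# e.g. br = 10,000 blocks, M = 100, bb = 10 (9-way merges, 3 passes):
print(sort_merge_cost(10_000, 100, 10))  # (70000, 5200)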
Other Operations

Duplicate elimination can be implemented via hashing or sorting.
On sorting duplicates will come adjacent to each other, and all
but one set of duplicates can be deleted.
Optimization: duplicates can be deleted during run generation
as well as at intermediate merge steps in external sort-merge.
Hashing is similar – duplicates will come into the same
bucket.
Projection:
perform projection on each tuple
followed by duplicate elimination.
Other Operations : Aggregation
Aggregation can be implemented in a manner similar to duplicate
elimination.
Sorting or hashing can be used to bring tuples in the same
group together, and then the aggregate functions can be
applied on each group.
Optimization: combine tuples in the same group during run
generation and intermediate merges, by computing partial
aggregate values
 For count, min, max, sum: keep aggregate values on tuples
found so far in the group.
– When combining partial aggregate for count, add up the
aggregates
 For avg, keep sum and count, and divide sum by count at
the end
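A minimal Python sketch of the partial-aggregation idea for avg (hashing
tuples into groups, keeping (sum, count) per group and dividing only at the
end, as the slide describes):

def grouped_avg(rows):
    """rows: iterable of (group_key, value) pairs.
    Keeps partial (sum, count) per group; divides only at the end."""
    partial = {}                      # group_key -> [running sum, running count]
    for key, value in rows:
        acc = partial.setdefault(key, [0, 0])
        acc[0] += value
        acc[1] += 1
    return {key: s / c for key, (s, c) in partial.items()}

print(grouped_avg([('CS', 70000), ('CS', 90000), ('EE', 80000)]))
# {'CS': 80000.0, 'EE': 80000.0}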
Other Operations : Set Operations

Set operations (∪, ∩ and –): can either use variant of merge-join
after sorting, or variant of hash-join.
E.g., Set operations using hashing:
1. Partition both relations using the same hash function
2. Process each partition i as follows.
1. Using a different hashing function, build an in-memory hash
index on ri.
2. Process si as follows

r ∪ s:
1. Add tuples in si to the hash index if they are not
already in it.
2. At end of si add the tuples in the hash index to the
result.
Other Operations : Set Operations

E.g., Set operations using hashing:
1. as before, partition r and s,
2. as before, process each partition i as follows
1. build a hash index on ri
2. Process si as follows
r ∩ s:
1. output tuples in si to the result if they are already
there in the hash index
r – s:
1. for each tuple in si, if it is there in the hash index,
delete it from the index.
2. At end of si add remaining tuples in the hash
index to the result.
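Ignoring the partitioning step (needed only when the relations do not fit in
memory), the per-partition logic above reduces to the following Python
sketch, with a set standing in for the in-memory hash index:

def hash_set_ops(r, s):
    """Per-partition set operations, using a set as the in-memory hash index on r.
    (The hash-partitioning of r and s into partitions ri, si is omitted.)"""
    index = set(r)                               # build hash index on r's tuples
    union = index | set(s)                       # r ∪ s: add unseen s tuples
    intersection = {t for t in s if t in index}  # r ∩ s: s tuples already indexed
    difference = index - set(s)                  # r − s: delete matched r tuples
    return union, intersection, difference

print(hash_set_ops([1, 2, 3], [2, 3, 4]))
# ({1, 2, 3, 4}, {2, 3}, {1})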
Other Operations : Outer Join
Outer join can be computed either as
A join followed by addition of null-padded non-participating
tuples.
or by modifying the join algorithms.
Modifying merge join to compute r ⟕ s
In r ⟕ s, non-participating tuples are those in r – ΠR(r ⋈ s)
Modify merge-join to compute r ⟕ s:
 During merging, for every tuple tr from r that does not match
any tuple in s, output tr padded with nulls.
Right outer-join and full outer-join can be computed similarly.
Other Operations : Outer Join

Modifying hash join to compute r ⟕ s
If r is probe relation, output non-matching r tuples padded
with nulls
If r is build relation, when probing keep track of which
r tuples matched s tuples. At the end of si, output
non-matched r tuples padded with nulls
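A sketch of the probe-relation variant in Python (joining on the first
attribute; the relation encoding and the s_width parameter are assumptions
for illustration):

def left_outer_hash_join(r, s, s_width):
    """r ⟕ s on the first attribute, with s as build and r as probe relation.
    s_width is the arity of s, used to null-pad non-matching r tuples."""
    index = {}
    for t in s:                                   # build phase: hash s on join key
        index.setdefault(t[0], []).append(t)
    out = []
    for t in r:                                   # probe phase
        matches = index.get(t[0])
        if matches:
            out.extend(t + m[1:] for m in matches)
        else:
            out.append(t + (None,) * (s_width - 1))   # pad with nulls
    return out

print(left_outer_hash_join([(1, 'a'), (2, 'b')], [(1, 'x')], s_width=2))
# [(1, 'a', 'x'), (2, 'b', None)]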
Discretionary vs Mandatory Access Control

DAC: Access based on user identity and privileges.
MAC: Access based on clearance and data classification.
MAC is stricter; used in military/security-sensitive systems.
A University System
Let’s say you’re working with a university's student database: names,
grades, fees, medical records.
Who can access what?
Discretionary Access Control (DAC): Access is controlled by the owner of
the data (as in Google Docs). In the university: a professor can grant a TA
access to view grades.
Issue: Risk of accidental leaks. Easy to overshare.
Example: TA forwards grade sheet to the wrong email.
Audit Trails

Log of database activity: who accessed what, when.
Used for security monitoring and debugging.
Essential for compliance and forensics.
Multi-Level Security (MLS)

Security labels (e.g., Confidential, Secret) on data.
Users access data based on their clearance level.
Prevents unauthorized information flow.
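As a toy illustration of clearance-based access (the Bell–LaPadula
"no read up" rule is the classic formalization; the level ordering below is
an assumption for illustration):

LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(user_clearance, data_label):
    """'No read up': a user may read data labeled at or below their clearance."""
    return LEVELS[user_clearance] >= LEVELS[data_label]

print(can_read("Secret", "Confidential"))  # True
print(can_read("Confidential", "Secret"))  # False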
