Notes_Unit 3-5 DBMS RGPV

The document provides an overview of normalization in database management systems, detailing the processes of decomposition, lossless and lossy decomposition, and various normal forms (1NF, 2NF, 3NF, and higher forms). It explains the importance of reducing redundancy and anomalies in databases through normalization, with examples illustrating each concept. Additionally, it discusses functional dependencies, multivalued dependencies, and the significance of dependency preservation in maintaining data integrity.

UNIT- III

Normalization: Introduction, Non-loss decomposition and functional dependencies, First, Second, and Third normal forms, dependency preservation, Boyce/Codd normal form. Higher Normal Forms: Introduction, Multi-valued dependencies and Fourth normal form, Join dependencies and Fifth normal form.

Decomposition: the process of breaking up or dividing a single relation into two or more sub-relations is called decomposition of a relation.

Decomposition in DBMS removes redundancy, anomalies and inconsistencies from a database by dividing a table into multiple tables.

Lossless Decomposition

o If no information is lost when the relation is decomposed, then the decomposition is lossless.
o A lossless decomposition guarantees that the join of the decomposed relations results in the same relation as the one that was decomposed.
o A decomposition is said to be lossless if the natural join of all the decomposed relations gives back the original relation.

Example:



o EMPLOYEE_DEPARTMENT table:

EMP_ID  EMP_NAME   EMP_AGE  EMP_CITY   DEPT_ID  DEPT_NAME
22      Denim      28       Mumbai     827      Sales
33      Alina      25       Delhi      438      Marketing
46      Stephan    30       Bangalore  869      Finance
52      Katherine  36       Mumbai     575      Production
60      Jack       40       Noida      678      Testing

o The above relation is decomposed into two relations EMPLOYEE and DEPARTMENT.
o EMPLOYEE table:

EMP_ID EMP_NAME EMP_AGE EMP_CITY

22 Denim 28 Mumbai

33 Alina 25 Delhi

46 Stephan 30 Bangalore

52 Katherine 36 Mumbai

60 Jack 40 Noida



o DEPARTMENT table:

EMP_ID  DEPT_ID  DEPT_NAME
22      827      Sales
33      438      Marketing
46      869      Finance
52      575      Production
60      678      Testing

o Now, when these two relations are joined on the common column "EMP_ID", the resultant relation will look like:

EMPLOYEE ⋈ DEPARTMENT

EMP_ID  EMP_NAME   EMP_AGE  EMP_CITY   DEPT_ID  DEPT_NAME
22      Denim      28       Mumbai     827      Sales
33      Alina      25       Delhi      438      Marketing
46      Stephan    30       Bangalore  869      Finance
52      Katherine  36       Mumbai     575      Production
60      Jack       40       Noida      678      Testing

o Hence, the decomposition is Lossless join decomposition.
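The lossless-join property can be checked mechanically. Below is a minimal Python sketch (not part of the original notes; only the first two example rows are used for brevity) that joins the decomposed relations on the common column EMP_ID and compares the result with the original relation.

# Minimal sketch: verify that joining the decomposed relations on EMP_ID
# reproduces the original EMPLOYEE_DEPARTMENT relation (lossless join).
employee = [
    {"EMP_ID": 22, "EMP_NAME": "Denim", "EMP_AGE": 28, "EMP_CITY": "Mumbai"},
    {"EMP_ID": 33, "EMP_NAME": "Alina", "EMP_AGE": 25, "EMP_CITY": "Delhi"},
]
department = [
    {"EMP_ID": 22, "DEPT_ID": 827, "DEPT_NAME": "Sales"},
    {"EMP_ID": 33, "DEPT_ID": 438, "DEPT_NAME": "Marketing"},
]
original = [
    {"EMP_ID": 22, "EMP_NAME": "Denim", "EMP_AGE": 28, "EMP_CITY": "Mumbai",
     "DEPT_ID": 827, "DEPT_NAME": "Sales"},
    {"EMP_ID": 33, "EMP_NAME": "Alina", "EMP_AGE": 25, "EMP_CITY": "Delhi",
     "DEPT_ID": 438, "DEPT_NAME": "Marketing"},
]

# Natural join on the common column EMP_ID.
joined = [{**e, **d} for e in employee for d in department
          if e["EMP_ID"] == d["EMP_ID"]]

as_set = lambda rel: {tuple(sorted(r.items())) for r in rel}
print(as_set(joined) == as_set(original))   # True -> lossless decomposition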

Lossy Decomposition

As the name suggests, when a relation is decomposed into two or more relational schemas in such a way that the original relation cannot be reconstructed exactly from them, information is lost; such a decomposition is called lossy.

<EmpInfo>

Emp_ID Emp_Name Emp_Age Emp_Location Dept_ID Dept_Name

E001 Jacob 29 Alabama Dpt1 Operations

E002 Henry 32 Alabama Dpt2 HR

E003 Tom 22 Texas Dpt3 Finance



<EmpDetails>

Emp_ID Emp_Name Emp_Age Emp_Location

E001 Jacob 29 Alabama

E002 Henry 32 Alabama

E003 Tom 22 Texas

<DeptDetails>

Dept_ID Dept_Name

Dpt1 Operations

Dpt2 HR

Dpt3 Finance

Since Emp_ID is not part of the DeptDetails relation, the two decomposed relations share no common attribute, so the original EmpInfo relation cannot be recovered by joining them.

Therefore, the above decomposition is a lossy decomposition.

Dependency Preserving

o It is an important constraint of the database.

o In the dependency preservation, at least one decomposed table must satisfy every
dependency.

o If a relation R is decomposed into relation R1 and R2, then the dependencies of R


either must be a part of R1 or R2 or must be derivable from the combination of
functional dependencies of R1 and R2.



o For example, suppose there is a relation R(A, B, C, D) with functional dependency set {A -> BC}. The relation R is decomposed into R1(A, B, C) and R2(A, D). This decomposition is dependency preserving because the FD A -> BC is contained in R1(A, B, C).
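As a rough illustration of the containment test described above, the following Python sketch (an assumption-laden simplification: it only checks whether an FD's attributes all fall inside a single decomposed schema, not whether the FD is derivable from a combination of FDs) verifies that A -> BC is preserved by R1(A, B, C) and R2(A, D).

# Sketch of the simple containment test described above: an FD X -> Y is
# (trivially) preserved if all of its attributes fall inside one decomposed
# schema.  The full test would also consider FDs derivable from combinations.
def fd_preserved(fd, decomposition):
    lhs, rhs = fd
    needed = set(lhs) | set(rhs)
    return any(needed <= set(schema) for schema in decomposition)

R1 = ("A", "B", "C")          # R1(ABC)
R2 = ("A", "D")               # R2(AD)
fd = (("A",), ("B", "C"))     # A -> BC

print(fd_preserved(fd, [R1, R2]))   # True: A -> BC is contained in R1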

Multivalued Dependency

o Multivalued dependency occurs when two attributes in a table are independent of each
other but, both depend on a third attribute.
o A multivalued dependency consists of at least two attributes that are dependent on a
third attribute that's why it always requires at least three attributes.

Example: Suppose there is a bike manufacturer company which produces two colors(white
and black) of each model every year.

BIKE_MODEL MANUF_YEAR COLOR

M2011 2008 White

M2001 2008 Black

M3001 2013 White

M3001 2013 Black

M4006 2017 White

M4006 2017 Black

Here columns COLOR and MANUF_YEAR are dependent on BIKE_MODEL and


independent of each other.

In this case, these two columns are said to be multivalued dependent on BIKE_MODEL.
The representation of these dependencies is shown below:

BIKE_MODEL ->-> MANUF_YEAR
BIKE_MODEL ->-> COLOR



This can be read as "BIKE_MODEL multidetermined MANUF_YEAR" and
"BIKE_MODEL multidetermined COLOR".

Normalization: Normalization is a process of organizing the data in a database to avoid data redundancy, insertion anomaly, update anomaly and deletion anomaly.

o Normalization is the process of organizing the data in the database.


o Normalization is used to minimize the redundancy from a relation or set of relations.
It is also used to eliminate the undesirable characteristics like Insertion, Update and
Deletion Anomalies.
o Normalization divides larger tables into smaller tables and links them using relationships.
o The normal form is used to reduce redundancy from the database table.

Anomalies in DBMS

There are three types of anomalies that occur when the database is not normalized. These are
Insertion, update and deletion anomaly.

Example: Suppose a manufacturing company stores the employee details in a table named employee, with attributes such as emp_id, emp_name, emp_address and emp_dept (the department in which the employee works). Since an employee can work in more than one department, the table can have two rows for the same employee: for instance, employee Rick appears in two departments, while employee Maggie is assigned only to department D890.

Update anomaly: we have two rows for employee Rick as he belongs to two departments of
the company. If we want to update the address of Rick then we have to update the same in
two rows or the data will become inconsistent. If somehow, the correct address gets updated
in one department but not in other then as per the database, Rick would be having two
different addresses, which is not correct and would lead to inconsistent data.

Insert anomaly: Suppose a new employee joins the company who is under training and currently not assigned to any department; then we would not be able to insert the data into the table if the emp_dept field does not allow null values.


Delete anomaly: Suppose, if at a point of time the company closes the department D890 then
deleting the rows that are having emp_dept as D890 would also delete the information of
employee Maggie since she is assigned only to this department.

To overcome these anomalies we need to normalize the data. In the next section we will
discuss about normalization.

First Normal Form (1NF)

o A relation is in 1NF if it contains only atomic values.

o It states that an attribute of a table cannot hold multiple values. It must hold only single-valued attributes.

o First normal form disallows the multi-valued attribute, composite attribute, and their
combinations.

Example: Relation EMPLOYEE is not in 1NF because of multi-valued attribute


EMP_PHONE.

EMPLOYEE table:

EMP_ID EMP_NAME EMP_PHONE EMP_STATE

14 John 7272826385, UP
9064738238

20 Harry 8574783832 Bihar

12 Sam 7390372389, Punjab


8589830302

The decomposition of the EMPLOYEE table into 1NF is shown below:


EMP_ID EMP_NAME EMP_PHONE EMP_STATE

14 John 7272826385 UP

14 John 9064738238 UP

20 Harry 8574783832 Bihar

12 Sam 7390372389 Punjab

12 Sam 8589830302 Punjab

Ex2:First normal form (1NF)

As per the rule of first normal form, an attribute (column) of a table cannot hold multiple
values. It should hold only atomic values.

Example: Suppose a company wants to store the names and contact details of its employees.
It creates a table that looks like this:

emp_id  emp_name  emp_address  emp_mobile
101     Herschel  New Delhi    8912312390
102     Jon       Kanpur       8812121212, 9900012222
103     Ron       Chennai      7778881212
104     Lester    Bangalore    9990000123, 8123450987

Two employees (Jon & Lester) have two mobile numbers each, so the company stored them in the same field, as you can see in the table above.

This table is not in 1NF.

To make the table comply with 1NF, we should store the data like this:

emp_id emp_name emp_address emp_mobile

101 Herschel New Delhi 8912312390

102 Jon Kanpur 8812121212

102 Jon Kanpur 9900012222

103 Ron Chennai 7778881212

104 Lester Bangalore 9990000123



104 Lester Bangalore 8123450987

Example 3

ID Name Courses

------------------

1 A c1, c2

2 E c3

3 M c2, c3

In the above table, Courses is a multi-valued attribute, so the relation is not in 1NF.

The table below is in 1NF as there is no multi-valued attribute:

ID Name Course

------------------

1 A c1

1 A c2

2 E c3

3 M c2

3 M c3
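The conversion shown above is mechanical: each value in the multi-valued Courses column becomes its own row. A small Python sketch of this flattening (illustrative only, not part of the original notes):

# Sketch: flatten a multi-valued Courses attribute into 1NF, one course per row.
rows = [
    {"ID": 1, "Name": "A", "Courses": "c1, c2"},
    {"ID": 2, "Name": "E", "Courses": "c3"},
    {"ID": 3, "Name": "M", "Courses": "c2, c3"},
]

normalized = [
    {"ID": r["ID"], "Name": r["Name"], "Course": course.strip()}
    for r in rows
    for course in r["Courses"].split(",")
]

for row in normalized:
    print(row)     # one atomic Course value per row, as in the 1NF table above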



Second Normal Form (2NF)

o For 2NF, the relation must be in 1NF.

o In the second normal form, all non-key attributes are fully functionally dependent on the primary key.

Second normal form (2NF)

A table is said to be in 2NF if both the following conditions hold:

Table is in 1NF (First normal form)


No non-prime attribute is dependent on a proper subset of any candidate key of the table.

An attribute that is not part of any candidate key is known as non-prime attribute.

Example: Let's assume, a school can store the data of teachers and the subjects they teach. In
a school, a teacher can teach more than one subject.

TEACHER table

TEACHER_ID SUBJECT TEACHER_AGE

25 Chemistry 30

25 Biology 30

47 English 35

83 Math 38

83 Computer 38



In the given table, non-prime attribute TEACHER_AGE is dependent on TEACHER_ID
which is a proper subset of a candidate key. That's why it violates the rule for 2NF.

To convert the given table into 2NF, we decompose it into two tables:

TEACHER_DETAIL table:

TEACHER_ID TEACHER_AGE

25 30

47 35

83 38

TEACHER_SUBJECT table:

TEACHER_ID SUBJECT

25 Chemistry

25 Biology

47 English

83 Math

83 Computer

Example: Suppose a school wants to store the data of teachers and the subjects they teach.
They create a table that looks like this: Since a teacher can teach more than one subjects, the
table can have multiple rows for a same teacher.



teacher_id subject teacher_age

111 Maths 38

111 Physics 38

222 Biology 38

333 Physics 40

333 Chemistry 40

Candidate Key: {teacher_id, subject}

Non-prime attribute: teacher_age

The table is in 1NF because each attribute has atomic values. However, it is not in 2NF because the non-prime attribute teacher_age is dependent on teacher_id alone, which is a proper subset of the candidate key. This violates the rule for 2NF ("no non-prime attribute may depend on a proper subset of any candidate key").

To make the table comply with 2NF, we can break it into two tables like this:
teacher_details table:



teacher_id teacher_age

111 38

222 38

333 40

teacher_subject table:

teacher_id subject

111 Maths

111 Physics

222 Biology

333 Physics



333 Chemistry

Now the tables comply with Second normal form (2NF).

Second Normal Form

A relation must be in first normal form and must not contain any partial dependency.
A relation is in 2NF if it has no partial dependency, i.e., no non-prime attribute (an attribute which is not part of any candidate key) is dependent on any proper subset of any candidate key of the table.

Partial Dependency: if a proper subset of a candidate key determines a non-prime attribute, it is called a partial dependency.

Example 1 Consider table-3 as following below.

STUD_NO COURSE_NO COURSE_FEE

1 C1 1000

2 C2 1500

1 C4 2000

4 C3 1000

4 C1 1000

2 C5 2000

Note that there are many courses having the same course fee.

Here,
COURSE_FEE cannot alone decide the value of COURSE_NO or STUD_NO;



COURSE_FEE together with STUD_NO cannot decide the value of COURSE_NO;

COURSE_FEE together with COURSE_NO cannot decide the value of STUD_NO;

Hence,
COURSE_FEE would be a non-prime attribute, as it does not belong to the only candidate key {STUD_NO, COURSE_NO};

But, COURSE_NO -> COURSE_FEE , i.e., COURSE_FEE is dependent on


COURSE_NO, which is a proper subset of the candidate key. Non-prime attribute
COURSE_FEE is dependent on a proper subset of the candidate key, which is a partial
dependency and so this relation is not in 2NF.
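The partial dependency COURSE_NO -> COURSE_FEE can also be confirmed directly from the data: every COURSE_NO is associated with exactly one COURSE_FEE, while COURSE_NO is only a proper subset of the candidate key {STUD_NO, COURSE_NO}. A short Python sketch (illustrative only, not part of the original notes):

from collections import defaultdict

# Sketch: show that COURSE_NO -> COURSE_FEE holds in the instance above, i.e.
# each COURSE_NO is associated with exactly one fee.  Because COURSE_NO is a
# proper subset of the candidate key {STUD_NO, COURSE_NO} and COURSE_FEE is
# non-prime, this is a partial dependency.
rows = [
    (1, "C1", 1000), (2, "C2", 1500), (1, "C4", 2000),
    (4, "C3", 1000), (4, "C1", 1000), (2, "C5", 2000),
]

fees = defaultdict(set)
for stud_no, course_no, course_fee in rows:
    fees[course_no].add(course_fee)

fd_holds = all(len(v) == 1 for v in fees.values())
print(fd_holds)   # True -> COURSE_NO determines COURSE_FEE (partial dependency)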

To convert the above relation to 2NF,

we need to split the table into two tables such as :

Table 1: STUD_NO, COURSE_NO

Table 2: COURSE_NO, COURSE_FEE

Table 1:
STUD_NO  COURSE_NO
1        C1
2        C2
1        C4
4        C3
4        C1
2        C5

Table 2:
COURSE_NO  COURSE_FEE
C1         1000
C2         1500
C3         1000
C4         2000
C5         2000



Example 2: Consider the following functional dependencies in relation R(A, B, C, D):

AB -> C [A and B together determine C]

BC -> D [B and C together determine D]

In the above relation, AB is the only candidate key and there is no partial dependency, i.e., no proper subset of {A, B} determines any non-prime attribute. Hence, the relation is in 2NF.

Third Normal Form (3NF)

o A relation will be in 3NF if it is in 2NF and does not contain any transitive dependency for non-prime attributes.
o 3NF is used to reduce data duplication. It is also used to achieve data integrity.
o If there is no transitive dependency for non-prime attributes, then the relation is in third normal form.

A relation is in third normal form if at least one of the following conditions holds for every non-trivial functional dependency X -> Y:

1. X is a super key.
2. Y is a prime attribute, i.e., each element of Y is part of some candidate key.

Example:

EMPLOYEE_DETAIL table:

EMP_ID EMP_NAME EMP_ZIP EMP_STATE EMP_CITY

222 Harry 201010 UP Noida

333 Stephan 02228 US Boston

444 Lan 60007 US Chicago



555 Katharine 06389 UK Norwich

666 John 462007 MP Bhopal

Super key in the table above:

1. {EMP_ID}, {EMP_ID, EMP_NAME}, {EMP_ID, EMP_NAME, EMP_ZIP}....so on

Candidate key: {EMP_ID}

Non-prime attributes: In the given table, all attributes except EMP_ID are non-
prime.

Here, EMP_STATE & EMP_CITY are dependent on EMP_ZIP, and EMP_ZIP is dependent on EMP_ID. So the non-prime attributes (EMP_STATE, EMP_CITY) are transitively dependent on the super key (EMP_ID). This violates the rule of third normal form.

That's why we need to move the EMP_CITY and EMP_STATE to the new
<EMPLOYEE_ZIP> table, with EMP_ZIP as a Primary key.

EMPLOYEE table:

EMP_ID EMP_NAME EMP_ZIP

222 Harry 201010

333 Stephan 02228

444 Lan 60007

555 Katharine 06389

666 John 462007



EMPLOYEE_ZIP table:

EMP_ZIP EMP_STATE EMP_CITY

201010 UP Noida

02228 US Boston

60007 US Chicago

06389 UK Norwich

462007 MP Bhopal

Third Normal form (3NF)

A table design is said to be in 3NF if both the following conditions hold:

Table must be in 2NF


Transitive functional dependency of non-prime attribute on any super key should be
removed.

An attribute that is not part of any candidate key is known as non-prime attribute.

In other words 3NF can be explained like this: A table is in 3NF if it is in 2NF and for each
functional dependency X-> Y at least one of the following conditions hold:

X is a super key of table


Y is a prime attribute of table

An attribute that is a part of one of the candidate keys is known as prime attribute.



Example: Suppose a company wants to store the complete address of each employee, they
create a table named employee_details that looks like this:

emp_id emp_name emp_zip emp_state emp_city emp_district

1001 John 282005 UP Agra Dayal Bagh

1002 Ajeet 222008 TN Chennai M-City

1006 Lora 282007 TN Chennai Urrapakkam

1101 Lilly 292008 UK Pauri Bhagwan

1201 Steve 222999 MP Gwalior Ratan

Super keys: {emp_id}, {emp_id, emp_name}, {emp_id, emp_name, emp_zip} ... and so on

Candidate Keys: {emp_id}

Non-prime attributes: all attributes except emp_id are non-prime as they are not part of any
candidate keys.



Here, emp_state, emp_city & emp_district are dependent on emp_zip, and emp_zip is dependent on emp_id. That makes the non-prime attributes (emp_state, emp_city & emp_district) transitively dependent on the super key (emp_id). This violates the rule of 3NF.

To make this table comply with 3NF, we have to break it into two tables to remove the transitive dependency:

employee table:

emp_id emp_name emp_zip

1001 John 282005

1002 Ajeet 222008

1006 Lora 282007

1101 Lilly 292008

1201 Steve 222999

employee_zip table:



emp_zip emp_state emp_city emp_district

282005 UP Agra Dayal Bagh

222008 TN Chennai M-City

282007 TN Chennai Urrapakkam

292008 UK Pauri Bhagwan

222999 MP Gwalior Ratan

Third Normal Form

A relation is in third normal form if it is in second normal form and there is no transitive dependency for non-prime attributes.
A relation is in 3NF if at least one of the following conditions holds for every non-trivial functional dependency X -> Y:
1. X is a super key.
2. Y is a prime attribute (each element of Y is part of some candidate key).



Transitive dependency: if A -> B and B -> C are two FDs, then A -> C is called a transitive dependency.
Example 1: In the relation STUDENT given in Table 4,
FD set: {STUD_NO -> STUD_NAME, STUD_NO -> STUD_STATE, STUD_STATE -> STUD_COUNTRY, STUD_NO -> STUD_AGE}
Candidate Key: {STUD_NO}

For this relation, STUD_NO -> STUD_STATE and STUD_STATE -> STUD_COUNTRY are true. So STUD_COUNTRY is transitively dependent on STUD_NO. This violates the third normal form. To convert it to third normal form, we decompose the relation STUDENT (STUD_NO, STUD_NAME, STUD_PHONE, STUD_STATE, STUD_COUNTRY, STUD_AGE) as:
STUDENT (STUD_NO, STUD_NAME, STUD_PHONE, STUD_STATE, STUD_AGE)
STATE_COUNTRY (STATE, COUNTRY)

Example 2: Consider relation R(A, B, C, D, E) with functional dependencies

A -> BC,
CD -> E,
B -> D,
E -> A

All possible candidate keys of the above relation are {A, E, CD, BC}. All attributes appearing on the right-hand sides of the functional dependencies are prime, so the relation is in 3NF.
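The candidate keys listed in Example 2 can be verified by computing attribute closures under the given FDs. The following Python sketch (not part of the original notes) implements the standard closure algorithm and checks that the closure of each claimed key covers all of R.

# Sketch: compute attribute closures under the FDs of Example 2 and confirm
# that A, E, CD and BC are all candidate keys (their closures cover R).
FDS = [({"A"}, {"B", "C"}),
       ({"C", "D"}, {"E"}),
       ({"B"}, {"D"}),
       ({"E"}, {"A"})]
R = {"A", "B", "C", "D", "E"}

def closure(attrs, fds):
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

for key in ({"A"}, {"E"}, {"C", "D"}, {"B", "C"}):
    print(sorted(key), closure(key, FDS) == R)   # True for each candidate key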

Fourth normal form (4NF)


o A relation will be in 4NF if it is in Boyce Codd normal form and has no multi-valued dependency.
o For a dependency A -> B, if for a single value of A, multiple values of B exist, then the relation has a multi-valued dependency.



Example

STUDENT

STU_ID COURSE HOBBY

21 Computer Dancing

21 Math Singing

34 Chemistry Dancing

74 Biology Cricket

59 Physics Hockey

The given STUDENT table is in 3NF, but COURSE and HOBBY are two independent entities. Hence, there is no relationship between COURSE and HOBBY.

In the STUDENT relation, the student with STU_ID 21 has two courses, Computer and Math, and two hobbies, Dancing and Singing. So there is a multi-valued dependency on STU_ID, which leads to unnecessary repetition of data.

So to make the above table into 4NF, we can decompose it into two tables:

STUDENT_COURSE

STU_ID COURSE

21 Computer



21 Math

34 Chemistry

74 Biology

59 Physics

STUDENT_HOBBY

STU_ID HOBBY

21 Dancing

21 Singing

34 Dancing

74 Cricket

59 Hockey


Example: Consider the database of a class which has two relations: R1 contains student ID (SID) and student name (SNAME), and R2 contains course ID (CID) and course name (CNAME).

Table R1(SID, SNAME)



SID SNAME

S1 A

S2 B

Table R2(CID, CNAME)

CID CNAME

C1 C

C2 D

When their cross product is taken, it results in multivalued dependencies:

Table R1 X R2

SID SNAME CID CNAME

S1 A C1 C

S1 A C2 D

S2 B C1 C

S2 B C2 D

Multivalued dependencies (MVD) are:

SID->->CID; SID->->CNAME; SNAME->->CNAME


Join Dependency



o Join dependency is a further generalization of multivalued dependency.
o If the join of R1 and R2 over C is equal to relation R, then we can say that a join dependency (JD) exists,
o where R1 and R2 are the decompositions R1(A, B, C) and R2(C, D) of a given relation R(A, B, C, D).
o Alternatively, R1 and R2 form a lossless decomposition of R.
o A JD *{R1, R2, ..., Rn} is said to hold over a relation R if R1, R2, ..., Rn is a lossless-join decomposition of R.
o For the relation above, *((A, B, C), (C, D)) is a JD of R if the join of the projections on these attribute sets is equal to the relation R.
o Here, *(R1, R2, R3) is used to indicate that relations R1, R2, R3 and so on are a JD of R.

Fifth normal form (5NF)


o A relation is in 5NF if it is in 4NF and does not contain any join dependency, and joining should be lossless.

o 5NF is satisfied when all the tables are broken into as many tables as possible in order to avoid redundancy.

o 5NF is also known as Project-Join Normal Form (PJ/NF).

Example

SUBJECT LECTURER SEMESTER

Computer Anshika Semester 1

Computer John Semester 1



Math John Semester 1

Math Akash Semester 2

Chemistry Praveen Semester 1

In the above table, John takes both Computer and Math classes for Semester 1 but does not take the Math class for Semester 2. In this case, the combination of all these fields is required to identify valid data.

Suppose we add a new semester, Semester 3, but do not yet know the subject or who will be taking that subject, so we would have to leave Lecturer and Subject as NULL. But all three columns together act as the primary key, so we cannot leave the other two columns blank.

So to make the above table into 5NF, we can decompose it into three relations P1, P2 & P3:

P1

SEMESTER SUBJECT

Semester 1 Computer

Semester 1 Math

Semester 1 Chemistry

Semester 2 Math

P2



SUBJECT LECTURER

Computer Anshika

Computer John

Math John

Math Akash

Chemistry Praveen

P3

SEMESTER LECTURER

Semester 1 Anshika

Semester 1 John

Semester 2 Akash

Semester 1 Praveen
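The decomposition into P1, P2 and P3 is lossless for this data: joining the three projections reproduces exactly the original (SUBJECT, LECTURER, SEMESTER) rows, with no spurious tuples. The Python sketch below (illustrative only, not part of the original notes) verifies this three-way join.

# Sketch: verify that the natural join of P1, P2 and P3 reproduces exactly the
# original (SUBJECT, LECTURER, SEMESTER) relation, i.e. the 3-way join
# dependency holds and the 5NF decomposition is lossless.
original = {("Computer", "Anshika", "Semester 1"),
            ("Computer", "John", "Semester 1"),
            ("Math", "John", "Semester 1"),
            ("Math", "Akash", "Semester 2"),
            ("Chemistry", "Praveen", "Semester 1")}

P1 = {(sem, sub) for (sub, lec, sem) in original}          # (SEMESTER, SUBJECT)
P2 = {(sub, lec) for (sub, lec, sem) in original}          # (SUBJECT, LECTURER)
P3 = {(sem, lec) for (sub, lec, sem) in original}          # (SEMESTER, LECTURER)

rejoined = {(sub, lec, sem)
            for (sem, sub) in P1
            for (sub2, lec) in P2 if sub2 == sub
            for (sem2, lec2) in P3 if sem2 == sem and lec2 == lec}

print(rejoined == original)   # True: no spurious tuples after the 3-way join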



UNIT-4

TRANSACTION MANAGEMENT IN DBMS:

A transaction is a set of logically related operations.

Now that we understand what a transaction is, we should understand the problems associated with it.

The main problem that can happen during a transaction is that the transaction can fail before finishing all the operations in the set. This can happen due to power failure, system crash, etc.

This is a serious problem that can leave the database in an inconsistent state. For example, transferring an amount from your account to a friend's account involves a set of operations such as: read your balance, deduct the amount, write your balance, read your friend's balance, add the amount, write your friend's balance. If the transaction fails after the third operation (writing your reduced balance), the amount has been deducted from your account but your friend never receives it.

To solve this problem, we have the following two operations

Commit: If all the operations in a transaction are completed successfully then commit those
changes to the database permanently.

Rollback: If any of the operation fails then rollback all the changes done by previous
operations.
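As a small, hedged illustration of commit and rollback (using Python's built-in sqlite3 module; the account table and transfer function below are invented for this example, not part of the original notes):

import sqlite3

# Small illustration of commit/rollback: a transfer either commits fully or is
# rolled back, never applied halfway.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, "
             "balance INTEGER CHECK (balance >= 0))")
conn.execute("INSERT INTO account VALUES ('you', 100), ('friend', 0)")
conn.commit()

def transfer(amount):
    try:
        conn.execute("UPDATE account SET balance = balance - ? WHERE name = 'you'",
                     (amount,))
        conn.execute("UPDATE account SET balance = balance + ? WHERE name = 'friend'",
                     (amount,))
        conn.commit()                      # both updates become permanent
    except sqlite3.IntegrityError:
        conn.rollback()                    # undo the partial debit

transfer(500)                              # fails the CHECK constraint -> rollback
print(list(conn.execute("SELECT * FROM account")))   # balances unchanged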

STATES OF TRANSACTION
Transactions can be implemented using SQL queries and the server. A transaction passes through the states described below.



Active state

o The active state is the first state of every transaction. In this state, the transaction is
being executed.
o For example: Insertion or deletion or updating a record is done here. But all the
records are still not saved to the database.

Partially committed

o In the partially committed state, a transaction executes its final operation, but the data
is still not saved to the database.
o In the total mark calculation example, a final display of the total marks step is
executed in this state.

Committed

A transaction is said to be in a committed state if it executes all its operations successfully. In


this state, all the effects are now permanently saved on the database system.

Failed state

o If any of the checks made by the database recovery system fails, then the transaction
is said to be in the failed state.

o In the example of total mark calculation, if the database is not able to fire a query to
fetch the marks, then the transaction will fail to execute.



Aborted

o If any of the checks fail and the transaction has reached a failed state then the
database recovery system will make sure that the database is in its previous consistent
state. If not then it will abort or roll back the transaction to bring the database into a
consistent state.

o If the transaction fails in the middle of its execution, then all the operations it has already executed are rolled back to restore the database to its consistent state.

o After aborting the transaction, the database recovery module will select one of the two
operations:
1. Re-start the transaction
2. Kill the transaction

TRANSACTION PROPERTY

The transaction has the four properties. These are used to maintain consistency in a database,
before and after the transaction.

Property of Transaction
1. Atomicity
2. Consistency
3. Isolation
4. Durability

Atomicity
o It states that all operations of the transaction take place at once if not, the transaction
is aborted.

o There is no midway, i.e., the transaction cannot occur partially. Each transaction is
treated as one unit and either run to completion or is not executed at all.

Atomicity involves the following two operations:

Abort: If a transaction aborts then all the changes made are not visible.



Commit: If a transaction commits then all the changes made are visible.

Consistency
o The integrity constraints are maintained so that the database is consistent before and
after the transaction.
o The execution of a transaction will leave a database in either its prior stable state or a
new stable state.
o The consistent property of database states that every transaction sees a consistent
database instance.
o The transaction is used to transform the database from one consistent state to another
consistent state.

Isolation
o It shows that the data which is used at the time of execution of a transaction cannot be
used by the second transaction until the first one is completed.
o In isolation, if the transaction T1 is being executed and using the data item X, then
that data item can't be accessed by any other transaction T2 until the transaction T1
ends.
o The concurrency control subsystem of the DBMS enforces the isolation property.

Durability
o The durability property guarantees the permanence of the database's consistent state. It states that once a transaction commits, its changes are permanent.

o They cannot be lost by the erroneous operation of a faulty transaction or by a system failure. When a transaction is completed, the database reaches a state known as the consistent state. That consistent state cannot be lost, even in the event of a system failure.
o The recovery subsystem of the DBMS is responsible for the durability property.

IMPLEMENTATION OF ATOMICITY AND DURABILITY

The recovery-management component of a database system can support atomicity


and durability by a variety of schemes.



E.g. the shadow-database scheme:

Shadow copy:

In the shadow-copy scheme, a transaction that wants to update the database first creates a
complete copy of the database.
All updates are done on the new database copy, leaving the original copy, the shadow copy,
untouched. If at any point the transaction has to be aborted, the system merely deletes the
new copy. The old copy of the database has not been affected.
This scheme is based on making copies of the database, called shadow copies, assumes that
only one transaction is active at a time.
The scheme also assumes that the database is simply a file on disk. A pointer called db-
pointer is maintained on disk; it points to the current copy of the database.

If the transaction completes, it is committed as follows:

First, the operating system is asked to make sure that all pages of the new copy of the database have been written out to disk. (Unix systems use the fsync system call for this purpose.)
After the operating system has written all the pages to disk, the database system updates the
pointer db-pointer to point to the new copy of the database;
the new copy then becomes the current copy of the database. The old copy of the database is
then deleted.

Figure below depicts the scheme, showing the database state before and after the update.
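A minimal sketch of the shadow-copy idea in Python is shown below. It is only an approximation under simplifying assumptions: the "database" is a single file, the db-pointer is a small text file, and commit is the atomic replacement of that pointer file (file names such as db_pointer, db_v1 and db_v2 are purely illustrative).

import os, shutil

# Hedged sketch of the shadow-copy scheme described above: the transaction
# updates a complete new copy of the database file while the old copy (the
# shadow) stays untouched; commit is a single atomic switch of the db-pointer.
def current_db(pointer="db_pointer"):
    with open(pointer) as f:
        return f.read().strip()

def commit_new_copy(new_copy, pointer="db_pointer"):
    with open(new_copy, "rb") as f:
        os.fsync(f.fileno())                 # force the new copy to disk first
    tmp = pointer + ".tmp"
    with open(tmp, "w") as f:
        f.write(new_copy)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, pointer)                 # atomic: db-pointer now -> new copy

# Usage: create the initial database and pointer, update a fresh copy, commit.
with open("db_v1", "w") as f:
    f.write("balance=100\n")
with open("db_pointer", "w") as f:
    f.write("db_v1")
shutil.copyfile(current_db(), "db_v2")       # transaction works on the new copy
with open("db_v2", "a") as f:
    f.write("balance=50\n")                  # update only the new copy
commit_new_copy("db_v2")                     # abort would simply delete db_v2
print(current_db())                          # db_v2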



SCHEDULE

A series of operations from one transaction to another transaction is known as a schedule. It is used to preserve the order of the operations in each of the individual transactions.

1. SERIAL SCHEDULE

The serial schedule is a type of schedule where one transaction is executed completely before
starting another transaction. In the serial schedule, when the first transaction completes its
cycle, then the next transaction is executed.



For example: Suppose there are two transactions T1 and T2 which have some operations. If
it has no interleaving of operations, then there are the following two possible outcomes:

1. Execute all the operations of T1 followed by all the operations of T2.
2. Execute all the operations of T2 followed by all the operations of T1.

o In the given (a) figure, Schedule A shows the serial schedule where T1 followed by
T2.
o In the given (b) figure, Schedule B shows the serial schedule where T2 followed by
T1.

2. NON-SERIAL SCHEDULE
o If interleaving of operations is allowed, then there will be non-serial schedule.
o It contains many possible orders in which the system can execute the individual
operations of the transactions.
o In the given figure (c) and (d), Schedule C and Schedule D are the non-serial
schedules. It has interleaving of operations.

3. SERIALIZABLE SCHEDULE
o The serializability of schedules is used to find non-serial schedules that allow the
transaction to execute concurrently without interfering with one another.

o It identifies which schedules are correct when executions of the transaction have
interleaving of their operations.
o A non-serial schedule will be serializable if its result is equal to the result of its
transactions executed serially.

SERIALIZABILITY IN DBMS

Some non-serial schedules may lead to inconsistency of the database.


Serializability is a concept that helps to identify which non-serial schedules are correct and
will maintain the consistency of the database.



Types of Serializability

Serializability is mainly of two types-

1. Conflict Serializability
2. View Serializability

Conflict Serializability

If a given non-serial schedule can be converted into a serial schedule by swapping its non-
conflicting operations, then it is called as a conflict serializable schedule.

Conflicting Operations

Two operations are called as conflicting operations if all the following conditions hold true
for them-

Both the operations belong to different transactions


Both the operations are on the same data item
At least one of the two operations is a write operation

Example-



Consider the following schedule-

In this schedule,

W1(A) and R2(A) are called conflicting operations, because all the above conditions hold true for them.

Checking Whether a Schedule is Conflict Serializable or Not:

Follow the steps below to check whether a given non-serial schedule is conflict serializable or not:

Step-01:

Find and list all the conflicting operations.

Step-02:

Start creating a precedence graph by drawing one node for each transaction.

Step-03:



Draw an edge for each conflict pair such that if Xi (V) and Yj (V) forms a conflict pair then
draw an edge from Ti to Tj.

This ensures that Ti gets executed before Tj.

Step-04:

Check if there is any cycle formed in the graph.

If there is no cycle found, then the schedule is conflict serializable otherwise not.
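The four steps above can be coded directly. The following Python sketch (illustrative only; the schedule R1(A) W1(A) R2(A) W2(A) is an assumed example, not from the original notes) lists the conflicting pairs, builds the precedence graph and checks it for a cycle.

from itertools import combinations

# Sketch of the steps above: find conflicting operations, build the precedence
# graph, and test it for a cycle.  A schedule is a list of (transaction, op,
# data item) triples; the example is R1(A) W1(A) R2(A) W2(A) -> no cycle.
schedule = [(1, "R", "A"), (1, "W", "A"), (2, "R", "A"), (2, "W", "A")]

edges = set()
for (i, (t1, op1, x1)), (j, (t2, op2, x2)) in combinations(enumerate(schedule), 2):
    conflict = t1 != t2 and x1 == x2 and ("W" in (op1, op2))
    if conflict:
        edges.add((t1, t2))          # earlier operation's transaction -> later one

def has_cycle(nodes, edges):
    def visit(n, stack):
        if n in stack:
            return True
        return any(visit(m, stack | {n}) for (a, m) in edges if a == n)
    return any(visit(n, frozenset()) for n in nodes)

nodes = {t for (t, _, _) in schedule}
print(edges)                          # {(1, 2)}
print(not has_cycle(nodes, edges))    # True -> conflict serializable (T1 before T2)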
VIEW SERIALIZABILITY

View serializability is a way to determine whether a given schedule is view serializable or not.

To check whether a given schedule is view serializable, we need to check whether the given schedule is view equivalent to a serial schedule. Let's take an example to understand this.

View Serializability
o A schedule will view serializable if it is view equivalent to a serial schedule.
o If a schedule is conflict serializable, then it will be view serializable.
o A schedule that is view serializable but not conflict serializable contains blind writes.

View Equivalent

Two schedules S1 and S2 are said to be view equivalent if they satisfy the following
conditions:

1. Initial Read:

An initial read of both schedules must be the same. Suppose two schedule S1 and S2. In
schedule S1, if a transaction T1 is reading the data item A, then in S2, transaction T1 should
also read A.



The above two schedules are view equivalent because the initial read operation in S1 is done by T1 and in S2 it is also done by T1.

2. Updated Read

In schedule S1, if Ti is reading A which is updated by Tj then in S2 also, Ti should read A


which is updated by Tj.

3. Final Write

The final write must be the same in both schedules. In schedule S1, if transaction T1 performs the final write on data item A, then in S2 the final write operation on A should also be done by T1.



The above two schedules are view equivalent because the final write operation in S1 is done by T3 and in S2 the final write operation is also done by T3.

Recoverability of Schedule

Sometimes a transaction may not execute completely due to a software issue, system crash or hardware failure. In that case, the failed transaction has to be rolled back. But some other transaction may also have used a value produced by the failed transaction, so we also have to roll back those transactions.

TRANSACTION ISOLATION LEVELS IN DBMS

The SQL standard defines four isolation levels:

1. Read Uncommitted: Read Uncommitted is the lowest isolation level. At this level, one transaction may read not-yet-committed changes made by other transactions, thereby allowing dirty reads. At this level, transactions are not isolated from each other.
2. Read Committed: This isolation level guarantees that any data read is committed at the moment it is read. Thus it does not allow dirty reads. The transaction holds a read or write lock on the current row, and thus prevents other transactions from reading, updating or deleting it.
3. Repeatable Read: This is a more restrictive isolation level. The transaction holds read locks on all rows it references and write locks on all rows it inserts, updates, or deletes. Since other transactions cannot read, update or delete these rows, it avoids non-repeatable reads.
4. Serializable: This is the highest isolation level. A serializable execution is guaranteed to be serializable, i.e., an execution of operations in which concurrently executing transactions appear to be executing serially.

FAILURE CLASSIFICATION

To find that where the problem has occurred, we generalize a failure into the following
categories:

1. Transaction failure
2. System crash
3. Disk failure

1. Transaction failure

A transaction failure occurs when a transaction fails to execute or reaches a point from which it cannot go any further. If a transaction or process is damaged in this way, it is called a transaction failure.

Reasons for a transaction failure could be -

1. Logical errors: If a transaction cannot complete due to some code error or an


internal error condition, then the logical error occurs.
2. Syntax error: It occurs where the DBMS itself terminates an active
transaction because the database system is not able to execute it. For
example, The system aborts an active transaction, in case of deadlock or
resource unavailability.

2. System Crash

o System failure can occur due to power failure or other hardware or software
failure. Example: Operating system error.



Fail-stop assumption: In the system crash, non-volatile storage is assumed
not to be corrupted.

3. Disk Failure

o Disk failure occurs when hard-disk drives or storage drives fail. This was a common problem in the early days of technology evolution.
o Disk failure can be caused by the formation of bad sectors, a disk head crash, unreachability of the disk, or any other failure which destroys all or part of disk storage.

CONCURRENT EXECUTION OF TRANSACTION

In the transaction process, a system usually allows executing more than one transaction
simultaneously. This process is called a concurrent execution.

Advantages of concurrent execution of a transaction

1. Decrease waiting time or turnaround time.


2. Improve response time
3. Increased throughput or resource utilization.

Problems with Concurrent Execution

In a database transaction, the two main operations are READ and WRITE. There is a need to manage these two operations in the concurrent execution of transactions, because if the interleaving of these operations is not controlled properly, the data may become inconsistent. The following problems occur with concurrent execution of operations:

1. Lost Update Problem (W-W Conflict)

2. Dirty Read Problem (W-R Conflict)

3. Unrepeatable Read Problem (R-W Conflict)



1. Lost update problem (Write Write conflict)
This type of problem occurs when two transactions in database access the same data item and
have their operations in an interleaved manner that makes the value of some database item
incorrect.
If there are two transactions T1 and T2 accessing the same data item value and then update it,
then the second record overwrites the first record.
Example:

Time  Transaction T1   Transaction T2

t1    Read(A)

t2    A = A - 50

t3                     Read(A)

t4                     A = A + 50

t5    Write(A)

t6                     Write(A)

Here,
At time t1, transaction T1 reads the value of A, i.e., 100.
At time t2, T1 deducts 50 from A, giving a local value of 50.
At time t3, transaction T2 reads the value of A, i.e., 100 (T1 has not yet written its change).
At time t4, T2 adds 50 to A, giving a local value of 150.
At time t5, T1 writes A = 50, based on the value it read at t1.
At time t6, T2 writes A = 150, based on the value it read at t3.
So at time t6, the update of transaction T1 is lost because transaction T2 overwrites the value of A without looking at its current value.
This type of problem is known as the Lost Update Problem.
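The interleaving can be replayed in a few lines of Python (the local variables stand in for each transaction's private workspace; this is only an illustration, not DBMS code):

# Tiny simulation of the interleaving above.
A = 100
t1_local = A          # t1: T1 reads A
t1_local -= 50        # t2: T1 computes A - 50
t2_local = A          # t3: T2 reads A (still 100)
t2_local += 50        # t4: T2 computes A + 50
A = t1_local          # t5: T1 writes 50
A = t2_local          # t6: T2 writes 150, overwriting T1's update
print(A)              # 150 -> the debit made by T1 is lost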

Dirty read problem (W-R conflict)

This type of problem occurs when one transaction T1 updates a data item of the database, and
then that transaction fails due to some reason, but its updates are accessed by some other
transaction.
Example:

Time  Transaction T1   Transaction T2

t1    Read(A)

t2    A = A + 20

t3    Write(A)

t4                     Read(A)

t5                     A = A + 30

t6                     Write(A)

t7    Write(B)

Here,
At time t1, transaction T1 reads the value of A, i.e., 100.
At time t2, T1 adds 20 to A.
At time t3, T1 writes the value of A (120) to the database.
At time t4, transaction T2 reads the value of A, i.e., 120.
At time t5, T2 adds 30 to A.
At time t6, T2 writes the value of A (150) to the database.
At time t7, transaction T1 fails (for example, due to a power failure) and is rolled back according to the atomicity property of transactions (all or none).
So transaction T2 read, at time t4, a value that was never committed to the database. The value read by transaction T2 is known as a dirty read.

Unrepeatable Read (R-W Conflict)

It is also known as an inconsistent retrieval problem. If a transaction T1 reads the value of a data item twice, and the data item is changed by another transaction T2 in between the two read operations, then T1 gets two different values for its two reads of the same data item.
Example:

Time  Transaction T1   Transaction T2

t1    Read(A)

t2                     Read(A)

t3                     A = A + 30

t4                     Write(A)

t5    Read(A)

Here,
At time t1, transaction T1 reads the value of A, i.e., 100.
At time t2, transaction T2 reads the value of A, i.e., 100.
At time t3, T2 adds 30 to A.
At time t4, T2 writes the value of A (130) to the database.
Transaction T2 has updated the value of A. Thus, when transaction T1 performs its second read at t5, it accesses the new value of A, which was updated by T2. This type of conflict is known as an R-W conflict.

CONCURRENCY CONTROL

Concurrency Control is the working concept that is required for controlling and managing the
concurrent execution of database operations and thus avoiding the inconsistencies in the
database. Thus, for maintaining the concurrency of the database, we have the concurrency
control protocols.

Concurrency Control Protocols

The concurrency control protocols ensure the atomicity, consistency, isolation,


durability and serializability of the concurrent execution of the database transactions.
Therefore, these protocols are categorized as:

o Lock Based Concurrency Control Protocol


o Time Stamp Concurrency Control Protocol
o Validation Based Concurrency Control Protocol

Lock-Based Protocol

In this type of protocol, any transaction cannot read or write data until it acquires an
appropriate lock on it. There are two types of lock:

1. Shared lock:

o It is also known as a read-only lock. With a shared lock, the data item can only be read by the transaction.
o It can be shared between transactions because while a transaction holds a shared lock, it cannot update the data item.



2. Exclusive lock:

o In the exclusive lock, the data item can be both reads as well as written by the
transaction.
o This lock is exclusive, and in this lock, multiple transactions do not modify the same
data simultaneously.

TWO-PHASE LOCKING (2PL)

o The two-phase locking protocol divides the execution phase of the transaction into
three parts.

o In the first part, when the execution of the transaction starts, it seeks permission for
the lock it requires.
o In the second part, the transaction acquires all the locks. The third phase is started as
soon as the transaction releases its first lock.
o In the third phase, the transaction cannot demand any new locks. It only releases the
acquired locks.

There are two phases of 2PL:

Growing phase: In the growing phase, a new lock on the data item may be acquired by the
transaction, but none can be released.



Shrinking phase: In the shrinking phase, existing locks held by the transaction may be released, but no new locks can be acquired.

In the below example, if lock conversion is allowed then the following phase can happen:

1. Upgrading of lock (from S(a) to X (a)) is allowed in growing phase.


2. Downgrading of lock (from X(a) to S(a)) must be done in shrinking phase.

Example:

The following way shows how unlocking and locking work with 2-PL.

Transaction T1:

o Growing phase: from step 1-3


o Shrinking phase: from step 5-7



o Lock point: at 3

Transaction T2:

o Growing phase: from step 2-6


o Shrinking phase: from step 8-9
o Lock point: at 6
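A hedged sketch of the 2PL discipline itself (not a full lock manager, and the class below is invented for illustration): a transaction may acquire locks only while growing, and its first release moves it permanently into the shrinking phase.

# Sketch of the 2PL discipline: acquiring is allowed only in the growing phase;
# the first release begins the shrinking phase, after which acquiring is an error.
class TwoPhaseLockingTxn:
    def __init__(self, name):
        self.name = name
        self.phase = "growing"
        self.locks = set()

    def acquire(self, item, mode="S"):
        if self.phase != "growing":
            raise RuntimeError(f"{self.name}: cannot lock {item} in shrinking phase")
        self.locks.add((item, mode))

    def release(self, item):
        self.phase = "shrinking"            # lock point has been passed
        self.locks = {(i, m) for (i, m) in self.locks if i != item}

t1 = TwoPhaseLockingTxn("T1")
t1.acquire("A", "S")
t1.acquire("B", "X")      # growing phase: locks may still be acquired
t1.release("A")           # shrinking phase begins
try:
    t1.acquire("C")       # violates 2PL
except RuntimeError as e:
    print(e)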

4. Strict Two-phase locking (Strict-2PL)

o The first phase of Strict-2PL is the same as in 2PL. After acquiring all the locks, the transaction continues to execute normally.

o The only difference between 2PL and Strict-2PL is that Strict-2PL does not release a lock immediately after using it.
o Strict-2PL waits until the whole transaction commits, and then it releases all the locks at once.
o The Strict-2PL protocol does not have a shrinking phase of gradual lock release.

TIMESTAMP ORDERING PROTOCOL



o The Timestamp Ordering Protocol is used to order the transactions based on their
Timestamps. The order of transaction is nothing but the ascending order of the
transaction creation.

o The priority of the older transaction is higher that's why it executes first. To determine
the timestamp of the transaction, this protocol uses system time or logical counter.
o The lock-based protocol is used to manage the order between conflicting pairs among
transactions at the execution time. But Timestamp based protocols start working as
soon as a transaction is created.

Basic Timestamp ordering protocol works as follows:

1. Check the following condition whenever a transaction Ti issues a Read (X) operation:

o If W_TS(X) >TS(Ti) then the operation is rejected.


o If W_TS(X) <= TS(Ti) then the operation is executed.
o Timestamps of all the data items are updated.

2. Check the following condition whenever a transaction Ti issues a Write(X) operation:

o If TS(Ti) < R_TS(X) then the operation is rejected.


o If TS(Ti) < W_TS(X) then the operation is rejected and Ti is rolled back otherwise the
operation is executed.

Where,

TS(TI) denotes the timestamp of the transaction Ti.

R_TS(X) denotes the Read time-stamp of data-item X.

W_TS(X) denotes the Write time-stamp of data-item X.
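The two checks above translate almost line for line into code. The following Python sketch (illustrative only; the R_TS/W_TS dictionaries and the default timestamp 0 are assumptions) applies the basic timestamp-ordering rules:

# Sketch of the two checks above.  R_TS/W_TS are per-item read/write
# timestamps; a rejected operation means the issuing transaction is rolled back.
R_TS, W_TS = {}, {}

def read_item(ts_ti, x):
    if W_TS.get(x, 0) > ts_ti:
        return "rejected (roll back Ti)"
    R_TS[x] = max(R_TS.get(x, 0), ts_ti)       # update the read timestamp
    return "executed"

def write_item(ts_ti, x):
    if ts_ti < R_TS.get(x, 0) or ts_ti < W_TS.get(x, 0):
        return "rejected (roll back Ti)"
    W_TS[x] = ts_ti
    return "executed"

print(write_item(10, "X"))   # executed, W_TS(X) = 10
print(read_item(5, "X"))     # rejected: a younger write already happened
print(read_item(15, "X"))    # executed, R_TS(X) = 15
print(write_item(12, "X"))   # rejected: TS(Ti) < R_TS(X)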

Validation Based Protocol

Validation phase is also known as optimistic concurrency control technique. In the validation
based protocol, the transaction is executed in the following three phases:



1. Read phase: In this phase, the transaction T is read and executed. It is used to read
the value of various data items and stores them in temporary local variables. It can
perform all the write operations on temporary variables without an update to the
actual database.
2. Validation phase: In this phase, the temporary variable value will be validated
against the actual data to see if it violates the serializability.
3. Write phase: If the transaction passes validation, then the temporary results are written to the database or system; otherwise, the transaction is rolled back.

Here each phase has the following different timestamps:

Start(Ti): It contains the time when Ti started its execution.

Validation (Ti): It contains the time when Ti finishes its read phase and starts its validation
phase.

Finish(Ti): It contains the time when Ti finishes its write phase.

o This protocol is used to determine the time stamp for the transaction for serialization
using the time stamp of the validation phase, as it is the actual phase which
determines if the transaction will commit or rollback.
o Hence TS(T) = validation(T).
o The serializability is determined during the validation process. It can't be decided in
advance.
o While executing the transaction, it ensures a greater degree of concurrency and also
less number of conflicts.
o Thus it contains transactions which have less number of rollbacks.

THOMAS WRITE RULE

Thomas Write Rule provides the guarantee of serializability order for the protocol. It
improves the Basic Timestamp Ordering Algorithm.

The basic Thomas write rules are as follows:



o If TS(T) < R_TS(X) then transaction T is aborted and rolled back, and operation is
rejected.
o If TS(T) < W_TS(X) then don't execute the W_item(X) operation of the transaction
and continue processing.

o If neither condition 1 nor condition 2 occurs, then allowed to execute the WRITE
operation by transaction Ti and set W_TS(X) to TS(T).

MULTIPLE GRANULARITY

Let's start by understanding the meaning of granularity.

Granularity: It is the size of data item allowed to lock.

Multiple Granularity:

o It can be defined as hierarchically breaking up the database into blocks which can be
locked.

o The Multiple Granularity protocol enhances concurrency and reduces lock overhead.
o It keeps track of what to lock and how to lock.
o It makes it easy to decide whether to lock or unlock a data item. This type of hierarchy can be represented graphically as a tree.

o The first level or higher level shows the entire database.


o The second level represents a node of type area. The higher level database consists of
exactly these areas.
o The area consists of children nodes which are known as files. No file can be present in
more than one area.
o Finally, each file contains child nodes known as records. The file has exactly those
records that are its child nodes. No records represent in more than one file.
o Hence, the levels of the tree starting from the top level are as follows:
o Database



o Area
o File
o Record



UNIT-5

Recovery and Atomicity, Log Based Recovery, Recovery with Concurrent Transactions, Check Points, Buffer Management, Failure with loss of nonvolatile storage, Advanced Recovery systems, ARIES Algorithm, Remote Backup systems. File organization, various kinds of indexes, B+ Trees, Query Processing, Relational Query Optimization.

Recovery and Atomicity:

When a system crashes, it may have several transactions being executed and various
files opened for them to modify the data items.
But according to the ACID properties of a DBMS, the atomicity of each transaction as a whole must be maintained, that is, either all of its operations are executed or none.
Database recovery means recovering the data when it gets deleted, hacked or damaged accidentally.
Atomicity must be preserved: either the whole transaction is reflected in the database permanently, or it should not affect the database at all.

When a DBMS recovers from a crash, it should do the following:

It should check the states of all the transactions, which were being executed.

A transaction may be in the middle of some operation; the DBMS must ensure
the atomicity of the transaction in this case.

It should check whether the transaction can be completed now or it needs to be


rolled back.

No transactions would be allowed to leave the DBMS in an inconsistent state.

There are two types of techniques which can help a DBMS in recovering as well as maintaining the atomicity of a transaction:

Maintaining the logs of each transaction, and writing them onto some stable storage
before actually modifying the database.

Maintaining shadow paging, where the changes are done on a volatile memory, and
later, the actual database is updated.



Log-Based Recovery

o The log is a sequence of records. Log of each transaction is maintained in some stable
storage so that if any failure occurs, then it can be recovered from there.
o If any operation is performed on the database, then it will be recorded in the log.
o But the process of storing the logs should be done before the actual transaction is
applied in the database.

There are two approaches to modify the database:

1. Deferred database modification:

o The deferred modification technique occurs if the transaction does not modify the
database until it has committed.
o In this method, all the logs are created and stored in the stable storage, and the
database is updated when a transaction commits.

2. Immediate database modification:

o The Immediate modification technique occurs if database modification occurs while


the transaction is still active.
o In this technique, the database is modified immediately after every operation. It
follows an actual database modification.
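A minimal sketch of recovery from such a log, assuming immediate modification and an illustrative record layout of ("start", T), ("write", T, X, old_value, new_value) and ("commit", T): uncommitted transactions are undone using the old values, committed ones are redone using the new values.

# Sketch of log-based recovery with immediate modification.  The record layout
# is illustrative.  Uncommitted transactions are undone, committed ones redone.
log = [
    ("start", "T1"), ("write", "T1", "A", 100, 50), ("commit", "T1"),
    ("start", "T2"), ("write", "T2", "B", 200, 300),        # T2 never committed
]
db = {"A": 100, "B": 300}          # state on disk at the time of the crash

committed = {rec[1] for rec in log if rec[0] == "commit"}

for rec in reversed(log):          # undo pass: restore old values
    if rec[0] == "write" and rec[1] not in committed:
        _, _, x, old, _ = rec
        db[x] = old

for rec in log:                    # redo pass: reapply committed writes
    if rec[0] == "write" and rec[1] in committed:
        _, _, x, _, new = rec
        db[x] = new

print(db)                          # {'A': 50, 'B': 200}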

Recovery with Concurrent Transactions

Concurrency control means that multiple transactions can be executed at the same time, so their log records are interleaved. Since the results may depend on the order of execution, the order of execution of those transactions must be maintained.
During recovery, it would be very difficult for the recovery system to backtrack through all the logs and then start recovering.

Recovery with concurrent transactions can be done in the following four ways.

1. Interaction with concurrency control


2. Transaction rollback
3. Checkpoints



4. Restart recovery
Interaction with concurrency control:

In this scheme, the recovery scheme depends greatly on the concurrency control scheme
that is used. So, to rollback a failed transaction, we must undo the updates performed by the
transaction.

Transaction rollback:
In this scheme, we roll back a failed transaction by using the log.
The system scans the log backward; for every log record of the failed transaction found in the log, the system restores the data item to its old value.
Checkpoints :
Checkpointing is the process of saving a snapshot of the application's state so that it can restart from that point in case of failure.
A checkpoint is a point of time at which a record is written onto the database from the buffers.
Checkpoint shortens the recovery process.
When it reaches the checkpoint, then the transaction will be updated into the database,
and till that point, the entire log file will be removed from the file. Then the log file is
updated with the new step of transaction till the next checkpoint and so on.
The checkpoint is used to declare the point before which the DBMS was in the
consistent state, and all the transactions were committed.

Restart recovery:

When the system recovers from a crash, it constructs two lists.


The undo-list consists of transactions to be undone, and the redo-list consists of
transaction to be redone.
The system constructs the two lists as follows: Initially, they are both empty. The
system scans the log backward, examining each record, until it finds the first
<checkpoint> record.



Check Points:

Checkpoints are a process of saving a snapshot of the application's state so that it can restart from that point in case of failure.
A checkpoint is a point of time at which a record is written onto the database from the buffers.
Checkpoint shortens the recovery process.
When it reaches the checkpoint, then the transaction will be updated into the database,
and till that point, the entire log file will be removed from the file. Then the log file is
updated with the new step of transaction till the next checkpoint and so on.
The checkpoint is used to declare the point before which the DBMS was in the consistent
state, and all the transactions were committed.

BUFFER MANAGEMENT
The buffer manager is the software layer that is responsible for bringing pages from physical disk to main memory as needed. The buffer manager manages the available main memory by dividing it into a collection of pages, which we call the buffer pool. The main memory pages in the buffer pool are called frames.

Data must be in RAM for DBMS to operate on it!


Buffer manager hides the fact that not all data is in RAM.



Buffer Manager

o A Buffer Manager is responsible for allocating space to the buffer in order to store
data into the buffer.
o If a user request a particular block and the block is available in the buffer, the buffer
manager provides the block address in the main memory.
o If the block is not available in the buffer, the buffer manager allocates the block in the
buffer.
o If free space is not available, it throws out some existing blocks from the buffer to
allocate the required space for the new block.
o The blocks which are thrown out are written back to the disk only if they have been modified since they were most recently brought into the buffer.
o If the user requests such a thrown-out block, the buffer manager reads the requested block from the disk into the buffer and then passes the address of the requested block in main memory to the user.
o The internal actions of the buffer manager are transparent to the programs that issue disk-block requests; in this respect the buffer manager is just like a virtual machine.
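A toy buffer pool in Python is sketched below (illustrative only: a dictionary plays the role of the disk, the pool holds a fixed number of frames, replacement is LRU, and dirty pages are written back before eviction).

from collections import OrderedDict

# Hedged sketch of a buffer manager with a fixed number of frames and LRU
# replacement; dirty pages are written back to "disk" before being evicted.
class BufferPool:
    def __init__(self, disk, frames=2):
        self.disk = disk                       # block id -> contents
        self.frames = frames
        self.pool = OrderedDict()              # block id -> [contents, dirty]

    def get(self, block):
        if block in self.pool:                 # hit: just refresh LRU order
            self.pool.move_to_end(block)
        else:                                  # miss: maybe evict, then read
            if len(self.pool) >= self.frames:
                victim, (data, dirty) = self.pool.popitem(last=False)
                if dirty:
                    self.disk[victim] = data   # write back only if modified
            self.pool[block] = [self.disk[block], False]
        return self.pool[block][0]

    def put(self, block, data):
        self.get(block)                        # make sure the block is buffered
        self.pool[block] = [data, True]
        self.pool.move_to_end(block)

disk = {"B1": "x", "B2": "y", "B3": "z"}
bp = BufferPool(disk)
bp.put("B1", "x'")        # modify B1 in the buffer
bp.get("B2")
bp.get("B3")              # pool full -> B1 evicted and written back
print(disk["B1"])         # x'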

Failure with Loss of Nonvolatile Storage

Loss of Volatile Storage

A volatile storage like RAM stores all the active logs, disk buffers, and related data. In addition, it stores all the transactions that are currently being executed. What happens if such a volatile storage crashes abruptly? It would obviously take away all the logs and active copies of the database. It makes recovery almost impossible, as everything that is required to recover the data is lost.

We can have checkpoints at multiple stages so as to save the contents of the database
periodically.

A state of active database in the volatile memory can be periodically dumped onto a
stable storage, which may also contain logs and active transactions and buffer
blocks.

<dump> can be marked on a log file, whenever the database contents are dumped
from a non-volatile memory to a stable one.

Recovery

When the system recovers from a failure, it can restore the latest dump.

It can maintain a redo-list and an undo-list as checkpoints.

It can recover the system by consulting undo-redo lists to restore the state of all
transactions up to the last checkpoint.

ARIES Algorithm:

Algorithm for Recovery and Isolation Exploiting Semantics (ARIES) is based on the Write
Ahead Log (WAL) protocol. Every update operation writes a log record which is one of the
following :

1. Undo-only log record:


Only the before image is logged. Thus, an undo operation can be done to retrieve the
old data.
2. Redo-only log record:
Only the after image is logged. Thus, a redo operation can be attempted.
3. Undo-redo log record:
Both before images and after images are logged.



In it, every log record is assigned a unique and monotonically increasing log
sequence number (LSN).

Every data page has a page LSN field that is set to the LSN of the log record
corresponding to the last update on the page.

WAL requires that the log record corresponding to an update make it to stable
storage before the data page corresponding to that update is written to disk.

For performance reasons, each log write is not immediately forced to disk. A log tail
is maintained in main memory to buffer log writes.

The log tail is flushed to disk when it gets full. A transaction cannot be declared
committed until the commit log record makes it to disk.

Once in a while the recovery subsystem writes a checkpoint record to the log. The
checkpoint record contains the transaction table and the dirty page table.

A master log record is maintained separately, in stable storage, to store the LSN of
the latest checkpoint record that made it to disk.

On restart, the recovery subsystem reads the master log record to find the LSN of the latest checkpoint record, reads that checkpoint record, and starts recovery from there on.

The recovery process actually consists of 3 phases:

1. Analysis:
The recovery subsystem determines the earliest log record from which the next pass
must start. It also scans the log forward from the checkpoint record to construct a
snapshot of what the system looked like at the instant of the crash.
2. Redo:
Starting at the earliest LSN, the log is read forward and each update redone.
3. Undo:
The log is scanned backward and updates corresponding to loser transactions are
undone.
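The redo pass can be illustrated with the LSN/pageLSN comparison alone. The sketch below (illustrative data structures, not the real ARIES ones) reapplies a logged update only if the page's pageLSN shows the update is not yet reflected on the page.

# Minimal sketch of the LSN/pageLSN rule used in the redo pass: an update is
# reapplied only if the page on disk does not already reflect it.
log = [
    {"lsn": 10, "page": "P1", "redo": "A=50"},
    {"lsn": 20, "page": "P2", "redo": "B=300"},
]
pages = {"P1": {"page_lsn": 10}, "P2": {"page_lsn": 5}}   # P1 already on disk

for rec in sorted(log, key=lambda r: r["lsn"]):           # forward scan
    page = pages[rec["page"]]
    if page["page_lsn"] < rec["lsn"]:                     # update not yet applied
        print("redo", rec["redo"])
        page["page_lsn"] = rec["lsn"]                     # page now reflects it
    else:
        print("skip", rec["redo"])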



Remote Backup

Remote backup provides a sense of security in case the primary location where the database
is located gets destroyed. Remote backup can be offline or real-time or online. In case it is
offline, it is maintained manually.

Online backup systems are more real-time and lifesavers for database administrators and
investors. An online backup system is a mechanism where every bit of the real-time data is
backed up simultaneously at two distant places. One of them is directly connected to the
system and the other one is kept at a remote place as backup.

As soon as the primary database storage fails, the backup system senses the failure and switches the user system to the remote storage. Sometimes this is so instant that the users do not even realize a failure has occurred.

File: A file is a named collection of related information that is recorded on secondary storage such as magnetic disks, magnetic tapes and optical disks.
