UNIT-2,3: Hierarchical Model
Database Model
A database model defines the logical design of data and describes the relationships between
different parts of the data. In the history of database design, three models have been in use:
Hierarchical Model
Network Model
Relational Model
Hierarchical Model
In this model, each entity has only one parent but can have several children. At the top of the hierarchy
there is a single entity, called the root.
Network Model
In the network model, entities are organised in a graph, in which some entities can be reached through
several paths.
Relational Model
In this model, data is organised in two-dimensional tables called relations. The tables (relations) are
related to each other.
UNIT-4
Database Keys
Keys are a very important part of a relational database. They are used to establish and identify
relationships between tables. They also ensure that each record within a table can be uniquely
identified by a combination of one or more fields within the table.
Super Key
A super key is a set of attributes within a table that uniquely identifies each record in that table.
A super key is a superset of a candidate key.
Candidate Key
Candidate keys are the set of fields from which the primary key can be selected. A candidate key is an
attribute or set of attributes that can act as a primary key for a table, uniquely identifying each
record in that table.
Primary Key
The primary key is the candidate key that is most appropriate to become the main key of the table. It is
a key that uniquely identifies each record in the table.
Composite Key
A key that consists of two or more attributes that together uniquely identify an entity occurrence is
called a composite key. No single attribute that makes up a composite key is a simple key
on its own.
Non-key Attribute
Non-key attributes are attributes other than candidate key attributes in a table.
Non-prime Attribute
Non-prime attributes are attributes other than the prime attributes, i.e. attributes that are not part of any candidate key.
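To make the key definitions above concrete, here is a minimal sketch using SQLite from Python (the Student and Enrollment tables, and all column names, are assumptions chosen for illustration, not part of these notes):

import sqlite3

conn = sqlite3.connect(":memory:")

# Roll_No is the primary key; Email is another candidate key (declared UNIQUE).
# {Roll_No, Name} is a super key because it contains the candidate key Roll_No.
# Name and Age are non-key / non-prime attributes.
conn.execute("""
    CREATE TABLE Student (
        Roll_No INTEGER PRIMARY KEY,
        Email   TEXT UNIQUE,
        Name    TEXT,
        Age     INTEGER
    )
""")

# (Roll_No, Subject) form a composite key: neither column alone uniquely
# identifies an enrollment record.
conn.execute("""
    CREATE TABLE Enrollment (
        Roll_No INTEGER,
        Subject TEXT,
        PRIMARY KEY (Roll_No, Subject)
    )
""")
conn.close()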
Functional Dependencies
Functional dependencies (FDs) are constraints on well-formed relations and formalize relationships
between the attributes of a relation.
Definition: an FD is a relationship between an attribute Y and a determinant X (one or more other attributes)
such that, for a given value of the determinant, the value of the attribute is uniquely determined.
X is a determinant
X determines Y
Y is functionally dependent on X
X→Y
X →Y is trivial if Y ⊆ X
A key constraint is a special kind of functional dependency: all attributes of the relation occur on the
right-hand side of the FD. For example, if K is a key of R(K, A, B), then K → A, B holds.
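The definition can be checked mechanically: X → Y holds in a relation instance if no two tuples agree on X but disagree on Y. A small Python sketch (the relation is represented as a list of dictionaries; the attribute names are assumptions):

def holds_fd(rows, X, Y):
    """Return True if the functional dependency X -> Y holds in rows."""
    seen = {}  # maps each X-value to the single Y-value it must determine
    for row in rows:
        x_val = tuple(row[a] for a in X)
        y_val = tuple(row[a] for a in Y)
        if x_val in seen and seen[x_val] != y_val:
            return False  # same X-value, different Y-values: FD violated
        seen[x_val] = y_val
    return True

# Roll_No -> Name holds, but Name -> Roll_No does not.
students = [{"Roll_No": 1, "Name": "Adam"}, {"Roll_No": 2, "Name": "Adam"}]
print(holds_fd(students, ["Roll_No"], ["Name"]))  # True
print(holds_fd(students, ["Name"], ["Roll_No"]))  # False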
Normalization of Database
Database normalization removes redundancy from tables and thereby avoids the following anomalies (the examples assume a Student table in which the student's details are repeated for every subject taken):
Updation Anomaly: To update the address of a student who occurs twice or more in the table,
we have to update the S_Address column in all those rows; otherwise the data becomes inconsistent.
Insertion Anomaly: Suppose, for a new admission, we have the student id (S_id), name and address of
a student, but if the student has not opted for any subjects yet, we have to insert NULL there.
Deletion Anomaly: If (S_id) 401 has only one subject and temporarily drops it, then when we delete
that row, the entire student record is deleted along with it.
Normalization Rule
Normalization rules are divided into the following normal forms:
1. First Normal Form (1NF)
2. Second Normal Form (2NF)
3. Third Normal Form (3NF)
4. Boyce-Codd Normal Form (BCNF)
First Normal Form (1NF)
Consider the following Student table:
Student Age Subject
Adam 15 Biology, Maths
Alex 14 Maths
Stuart 17 Maths
In First Normal Form, no row may have a column in which more than one value is stored (for example,
values separated by commas). Instead, such data must be separated into multiple rows.
The Student table following 1NF will be :
Student Age Subject
Adam 15 Biology
Adam 15 Maths
Alex 14 Maths
Stuart 17 Maths
Using the First Normal Form, data redundancy increases, as the same data is repeated across multiple
rows, but each row as a whole is unique.
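The 1NF step above can be sketched in a few lines of Python: rows whose Subject cell holds several comma-separated values (as in the unnormalized table) are split into one row per value. The column names follow the example; the comma separator is an assumption.

def to_1nf(rows):
    """Split a comma-separated Subject value into one row per subject."""
    result = []
    for row in rows:
        for subject in row["Subject"].split(","):
            result.append({"Student": row["Student"],
                           "Age": row["Age"],
                           "Subject": subject.strip()})
    return result

unnormalized = [
    {"Student": "Adam",   "Age": 15, "Subject": "Biology, Maths"},
    {"Student": "Alex",   "Age": 14, "Subject": "Maths"},
    {"Student": "Stuart", "Age": 17, "Subject": "Maths"},
]
for row in to_1nf(unnormalized):
    print(row)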
Second Normal Form (2NF)
A table is in 2NF if it is in 1NF and no non-prime attribute is partially dependent on a candidate key. To remove the partial dependency, the 1NF table is decomposed. The new Student table following 2NF will be :
Student Age
Adam 15
Alex 14
Stuart 17
In the Student table the candidate key is the Student column, because the only other column, Age, is
dependent on it.
New Subject Table introduced for 2NF will be :
Student Subject
Adam Biology
Adam Maths
Alex Maths
Stuart Maths
In the Subject table the candidate key is the {Student, Subject} combination. Now both of the above tables
qualify for Second Normal Form and will never suffer from update anomalies. Although there are a
few complex cases in which a table in Second Normal Form still suffers from update anomalies; Third
Normal Form exists to handle those scenarios.
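The decomposition described above can be sketched as two projections of the 1NF table: Student alone determines Age, so (Student, Age) becomes one table and (Student, Subject) the other. A minimal Python sketch (attribute names follow the example):

def decompose_2nf(rows):
    """Project the 1NF Student table into (Student, Age) and (Student, Subject)."""
    student_table = {}     # Student -> Age; removes the partial dependency
    subject_table = set()  # (Student, Subject) pairs keyed by the composite key
    for row in rows:
        student_table[row["Student"]] = row["Age"]
        subject_table.add((row["Student"], row["Subject"]))
    return student_table, subject_table

rows_1nf = [
    {"Student": "Adam",   "Age": 15, "Subject": "Biology"},
    {"Student": "Adam",   "Age": 15, "Subject": "Maths"},
    {"Student": "Alex",   "Age": 14, "Subject": "Maths"},
    {"Student": "Stuart", "Age": 17, "Subject": "Maths"},
]
students, subjects = decompose_2nf(rows_1nf)
print(students)          # {'Adam': 15, 'Alex': 14, 'Stuart': 17}
print(sorted(subjects))  # [('Adam', 'Biology'), ('Adam', 'Maths'), ...]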
UNIT-5
Transaction
A transaction can be defined as a group of tasks. A single task is the minimum
processing unit, which cannot be divided further.
Let’s take an example of a simple transaction. Suppose a bank employee transfers
Rs 500 from A's account to B's account. This very simple and small transaction
involves several low-level tasks.
A’s Account
Open_Account(A)
Old_Balance = A.balance
New_Balance = Old_Balance - 500
A.balance = New_Balance
Close_Account(A)
B’s Account
Open_Account(B)
Old_Balance = B.balance
New_Balance = Old_Balance + 500
B.balance = New_Balance
Close_Account(B)
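The same transfer can be written as a single database transaction. A minimal sketch using Python's sqlite3 module (the accounts table, names and balances are assumptions for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 1000), ("B", 1000)])
conn.commit()

try:
    # Both updates belong to one transaction: they succeed or fail together.
    conn.execute("UPDATE accounts SET balance = balance - 500 WHERE name = 'A'")
    conn.execute("UPDATE accounts SET balance = balance + 500 WHERE name = 'B'")
    conn.commit()
except sqlite3.Error:
    conn.rollback()  # undo the partial transfer if anything goes wrong

print(dict(conn.execute("SELECT name, balance FROM accounts")))  # {'A': 500, 'B': 1500}
conn.close()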
ACID Properties
A transaction is a very small unit of a program and it may contain several low-level
tasks. A transaction in a database system must
maintain Atomicity, Consistency, Isolation, and Durability − commonly known as
the ACID properties − in order to ensure accuracy, completeness, and data integrity.
Atomicity − This property states that a transaction must be treated as an
atomic unit, that is, either all of its operations are executed or none. There
must be no state in a database where a transaction is left partially
completed. States should be defined either before the execution of the
transaction or after the execution/abortion/failure of the transaction.
Consistency − The database must remain in a consistent state after any
transaction. No transaction should have any adverse effect on the data
residing in the database. If the database was in a consistent state before the
execution of a transaction, it must remain consistent after the execution of
the transaction as well.
Durability − The database should be durable enough to hold all its latest
updates even if the system fails or restarts. If a transaction updates a chunk
of data in a database and commits, then the database will hold the modified
data. If a transaction commits but the system fails before the data could be
written on to the disk, then that data will be updated once the system
springs back into action.
Isolation − In a database system where more than one transaction is being
executed simultaneously and in parallel, the property of isolation states that
every transaction is carried out and executed as if it were the only
transaction in the system. No transaction will affect the existence of
any other transaction.
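A small sketch of atomicity and consistency with sqlite3 (the accounts table, the CHECK constraint and the balances are assumptions): a transfer that would drive a balance negative violates the constraint, the transaction is rolled back, and no partial effect survives.

import sqlite3

conn = sqlite3.connect(":memory:")
# The CHECK constraint expresses a consistency rule: balances never go negative.
conn.execute("""CREATE TABLE accounts (
                    name TEXT PRIMARY KEY,
                    balance INTEGER CHECK (balance >= 0))""")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 100), ("B", 100)])
conn.commit()

try:
    # Debiting 500 from a 100-rupee account violates the constraint,
    # so the whole transfer is rolled back: atomicity in action.
    conn.execute("UPDATE accounts SET balance = balance - 500 WHERE name = 'A'")
    conn.execute("UPDATE accounts SET balance = balance + 500 WHERE name = 'B'")
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()

print(dict(conn.execute("SELECT name, balance FROM accounts")))  # {'A': 100, 'B': 100}
conn.close()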
Serializability:-
When multiple transactions are executed by the operating system in a
multiprogramming environment, the instructions of one transaction may be
interleaved with those of another transaction.
Schedule − A chronological execution sequence of transactions is called a
schedule. A schedule can have many transactions in it, each comprising a
number of instructions/tasks.
Serial Schedule − It is a schedule in which transactions are aligned in such
a way that one transaction is executed first. When the first transaction
completes its cycle, then the next transaction is executed. Transactions are
ordered one after the other. This type of schedule is called a serial schedule,
as transactions are executed in a serial manner.
In a multi-transaction environment, serial schedules are considered a
benchmark. The execution sequence of the instructions within a transaction cannot be
changed, but two transactions can have their instructions executed in a random
fashion. This execution does no harm if two transactions are mutually independent
and working on different segments of data; but in case these two transactions are
working on the same data, then the results may vary. This ever-varying result
may bring the database to an inconsistent state.
To resolve this problem, we allow parallel execution of a transaction schedule, if its
transactions are either serializable or have some equivalence relation among them.
Equivalence Schedules
An equivalence schedule can be of the following types −
Result Equivalence
If two schedules produce the same result after execution, they are said to be
result equivalent. They may yield the same result for some value and different
results for another set of values. That's why this equivalence is not generally
considered significant.
View Equivalence
Two schedules are view equivalent if the transactions in both schedules
perform similar actions in a similar manner.
For example −
If T reads the initial data in S1, then it also reads the initial data in S2.
If T reads the value written by J in S1, then it also reads the value written by
J in S2.
If T performs the final write on the data value in S1, then it also performs
the final write on the data value in S2.
Conflict Equivalence
Two operations conflict if they belong to different transactions, access the same data item, and at least one of them is a write operation. Two schedules are conflict equivalent if they contain the same set of transactions and preserve the order of every conflicting pair of operations.
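A schedule is conflict serializable when it is conflict equivalent to some serial schedule; the usual test builds a precedence graph over conflicting operations and checks it for cycles. A minimal Python sketch (the schedule encoding as (transaction, operation, data item) tuples is an assumption):

def is_conflict_serializable(schedule):
    """schedule: list of (txn, op, item) tuples, e.g. ("T1", "R", "X")."""
    # Add an edge T1 -> T2 for every conflicting pair where T1 acts first.
    edges = set()
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            if t1 != t2 and x1 == x2 and "W" in (op1, op2):
                edges.add((t1, t2))

    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)

    # The schedule is conflict serializable iff the precedence graph is acyclic.
    def has_cycle(node, visiting, done):
        visiting.add(node)
        for nxt in graph.get(node, ()):
            if nxt in visiting or (nxt not in done and has_cycle(nxt, visiting, done)):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    done = set()
    return not any(has_cycle(n, set(), done) for n in graph if n not in done)

# T1 and T2 interleave reads and writes on X -> the precedence graph has a cycle.
s = [("T1", "R", "X"), ("T2", "R", "X"), ("T1", "W", "X"), ("T2", "W", "X")]
print(is_conflict_serializable(s))  # False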
States of Transactions
A transaction in a database can be in one of the following states −
Active − In this state, the transaction is being executed. This is the initial
state of every transaction.
Partially Committed − When a transaction executes its final operation, it is
said to be in a partially committed state.
Failed − A transaction is said to be in a failed state if any of the checks
made by the database recovery system fails. A failed transaction can no
longer proceed further.
Aborted − If any of the checks fails and the transaction has reached a failed
state, then the recovery manager rolls back all its write operations on the
database to bring the database back to its original state where it was prior
to the execution of the transaction. Transactions in this state are called
aborted. The database recovery module can select one of the two operations
after a transaction aborts −
o Re-start the transaction
o Kill the transaction
Committed − If a transaction executes all its operations successfully, it is
said to be committed. All its effects are now permanently established on the
database system.
CONCURRENCY CONTROL:-
In a multiprogramming environment where multiple transactions can be executed
simultaneously, it is highly important to control the concurrency of transactions.
We have concurrency control protocols to ensure atomicity, isolation, and
serializability of concurrent transactions. Concurrency control protocols can be
broadly divided into two categories –
Lock-based protocols
Timestamp-based protocols
Lock-based Protocols
Database systems equipped with lock-based protocols use a mechanism by which
any transaction cannot read or write data until it acquires an appropriate lock on
it. Locks are of two kinds −
Binary Locks − A lock on a data item can be in two states; it is either
locked or unlocked.
Shared/exclusive − This type of locking mechanism differentiates the
locks based on their uses. If a lock is acquired on a data item to perform a
write operation, it is an exclusive lock. Allowing more than one transaction
to write on the same data item would lead the database into an inconsistent
state. Read locks are shared because no data value is being changed.
There are four types of lock protocols available −
Simplistic Lock Protocol
Simplistic lock-based protocols allow transactions to obtain a lock on every object
before a 'write' operation is performed. Transactions may unlock the data item
after completing the ‘write’ operation.
Pre-claiming Lock Protocol
Pre-claiming protocols evaluate their operations and create a list of data items on
which they need locks. Before initiating an execution, the transaction requests the
system for all the locks it needs beforehand. If all the locks are granted, the
transaction executes and releases all the locks when all its operations are over. If
all the locks are not granted, the transaction rolls back and waits until all the locks
are granted.
Two-Phase Locking (2PL)
This locking protocol divides the execution phase of a transaction into three parts.
In the first part, when the transaction starts executing, it seeks permission for the
locks it requires. The second part is where the transaction acquires all the locks. As
soon as the transaction releases its first lock, the third phase starts. In this phase,
the transaction cannot demand any new locks; it only releases the acquired locks.
Two-phase locking has two phases, one is growing, where all the locks are being
acquired by the transaction; and the second phase is shrinking, where the locks
held by the transaction are being released.
To claim an exclusive (write) lock, a transaction must first acquire a shared (read)
lock and then upgrade it to an exclusive lock.
Strict Two-Phase Locking
The first phase of Strict-2PL is the same as in 2PL. After acquiring all the locks in the first
phase, the transaction continues to execute normally. But in contrast to 2PL,
Strict-2PL does not release a lock after using it. Strict-2PL holds all the locks until
the commit point and releases them all at once.
Strict-2PL does not suffer from cascading aborts as 2PL does.
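A sketch of the two-phase discipline in Python, using one mutex per data item (the lock table and item names are assumptions). Here every lock is acquired before the work and released together afterwards, which satisfies both 2PL and Strict-2PL; general 2PL only requires that no lock is acquired after the first release.

import threading

lock_table = {"X": threading.Lock(), "Y": threading.Lock()}  # toy lock manager

def run_under_2pl(items, do_work):
    held = []
    # Growing phase: acquire every lock the transaction needs; release nothing.
    for item in sorted(items):        # a fixed acquisition order also avoids deadlock
        lock_table[item].acquire()
        held.append(item)
    try:
        do_work()                     # all reads/writes happen while locks are held
    finally:
        # Shrinking phase: after the first release, no new lock may be requested.
        for item in held:
            lock_table[item].release()

run_under_2pl(["X", "Y"], lambda: print("transfer executed under 2PL"))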
Timestamp-based Protocols
The most commonly used concurrency protocol is the timestamp-based protocol.
This protocol uses either system time or a logical counter as a timestamp.
Lock-based protocols manage the order between the conflicting pairs among
transactions at the time of execution, whereas timestamp-based protocols start
working as soon as a transaction is created.
Every transaction has a timestamp associated with it, and the ordering is
determined by the age of the transaction. A transaction created at 0002 clock time
would be older than all other transactions that come after it. For example, any
transaction 'y' entering the system at 0004 is two seconds younger and the priority
would be given to the older one.
In addition, every data item is given the latest read and write-timestamp. This lets
the system know when the last ‘read and write’ operation was performed on the
data item.
Timestamp Ordering Protocol
The timestamp-ordering protocol ensures serializability among transactions with respect to their
conflicting read and write operations. It is the responsibility of the protocol
to execute every conflicting pair of operations according to the
timestamp values of the transactions involved.
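In the basic timestamp-ordering rules (standard textbook form, not spelled out above), a transaction T may read item X only if TS(T) ≥ W-timestamp(X), and may write X only if TS(T) ≥ R-timestamp(X) and TS(T) ≥ W-timestamp(X); otherwise T is rolled back. A minimal Python sketch of these checks (names and structures are assumptions):

class TimestampOrdering:
    """Toy basic timestamp-ordering checks, one read/write timestamp per item."""
    def __init__(self):
        self.read_ts = {}    # item -> largest timestamp that has read it
        self.write_ts = {}   # item -> largest timestamp that has written it

    def read(self, txn_ts, item):
        if txn_ts < self.write_ts.get(item, 0):
            return False     # a younger transaction already wrote the item: reject
        self.read_ts[item] = max(self.read_ts.get(item, 0), txn_ts)
        return True

    def write(self, txn_ts, item):
        if txn_ts < self.read_ts.get(item, 0) or txn_ts < self.write_ts.get(item, 0):
            return False     # a younger transaction already read or wrote it: reject
        self.write_ts[item] = txn_ts
        return True

to = TimestampOrdering()
print(to.write(5, "X"))  # True : transaction with timestamp 5 writes X
print(to.read(3, "X"))   # False: an older transaction (timestamp 3) arrives too late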
Recovery
When the system recovers from a failure, the recovery system reads the logs backwards from the end to
the last checkpoint.
It maintains two lists, an undo-list and a redo-list.
If the recovery system sees a log with <Tn, Start> and <Tn, Commit> or just
<Tn, Commit>, it puts the transaction in the redo-list.
If the recovery system sees a log with <Tn, Start> but no commit or abort
log, it puts the transaction in the undo-list.
All the transactions in the undo-list are then undone and their logs are removed.
All the transactions in the redo-list and their previous logs are removed and then
redone, after which their logs are saved.
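The classification above can be sketched in a few lines of Python (the log is a plain list of (transaction, action) tuples; the record format is an assumption, and the scan direction does not affect which list a transaction ends up in):

def classify(log):
    """Split logged transactions into a redo-list and an undo-list."""
    started, committed, aborted = set(), set(), set()
    for txn, action in log:
        if action == "Start":
            started.add(txn)
        elif action == "Commit":
            committed.add(txn)
        elif action == "Abort":
            aborted.add(txn)
    redo = sorted(committed)                      # committed transactions: redo
    undo = sorted(started - committed - aborted)  # started but never finished: undo
    return redo, undo

log = [("T1", "Start"), ("T1", "Commit"), ("T2", "Start")]
print(classify(log))  # (['T1'], ['T2'])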