Unit V Dbms Ppts
Concurrency Control
Concurrency control in a Database Management
System is the procedure for managing simultaneous
operations so that they do not conflict with each other.
It ensures that database transactions are
performed concurrently and accurately,
producing correct results without violating the
data integrity of the respective database.
• Strict two-phase locking (Strict-2PL) is similar to 2PL. The only difference is that
Strict-2PL never releases a lock while the transaction is still running: it holds all the
locks until the commit point and releases them all in one go when the transaction is over.
• Centralized 2PL
• In centralized 2PL, a single site is responsible for lock management; there is
only one lock manager for the entire DBMS.
• Primary copy 2PL
• In the primary copy 2PL mechanism, lock managers are distributed to different
sites, and each lock manager is responsible for managing the locks for
a set of data items. When the primary copy has been updated, the change is
propagated to the slave copies.
• Distributed 2PL
• In this kind of two-phase locking mechanism, lock managers are distributed to all
sites; each is responsible for managing locks for the data at its own site. If no data is
replicated, it is equivalent to primary copy 2PL. The communication costs of
distributed 2PL are considerably higher than those of primary copy 2PL.
Two Phase Locking Protocol
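The Strict-2PL behavior described above — acquire locks as the transaction needs them, then release them all in one go at commit — can be sketched in Python. This is a minimal illustration, assuming a single-process setting; the class and method names are invented for the example, not part of any real DBMS.

```python
import threading

class StrictTwoPhaseLocking:
    """Minimal sketch of Strict-2PL: a transaction acquires locks during
    its growing phase and releases all of them only at the commit point."""

    def __init__(self):
        self._locks = {}                    # data item -> threading.Lock
        self._table_guard = threading.Lock()

    def _lock_for(self, item):
        # Create the lock for an item on first use, under a guard so two
        # transactions cannot create different locks for the same item.
        with self._table_guard:
            return self._locks.setdefault(item, threading.Lock())

    def acquire(self, txn_held, item):
        lock = self._lock_for(item)
        lock.acquire()                      # growing phase: take the lock
        txn_held.append(lock)               # remember it until commit

    def commit(self, txn_held):
        # Strict-2PL: every held lock is released in one go at commit.
        while txn_held:
            txn_held.pop().release()

# Usage: a transaction locks items A and B, works, then commits.
mgr = StrictTwoPhaseLocking()
held = []
mgr.acquire(held, "A")
mgr.acquire(held, "B")
# ... read A, write B ...
mgr.commit(held)
```

Because no lock is released before commit, no other transaction can read an uncommitted value, which is what makes Strict-2PL recoverable.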
If TS(Ti) < W-timestamp(X), the operation is rejected and Ti is rolled back; otherwise, the
operation is executed.
Timestamp Based Protocol
• Read Operations
• For read operations, if TS(Ti) < W-TS(X), this violates the time-stamp order of Ti with
regard to the previous writer of X. Thus, Ti is aborted and restarted with a new time-
stamp.
• Otherwise, the read is valid, and Ti is allowed to read X. The DBMS then updates R-
TS(X) to be the max of R-TS(X) and TS(Ti). It also has to make a local copy of X to ensure
repeatable reads for Ti.
• Write Operations
• For write operations, if TS(Ti) < R-TS(X) or TS(Ti) < W-TS(X), Ti must be restarted.
Otherwise, the DBMS allows Ti to write X and updates W-TS(X). Again, it needs to make
a local copy of X to ensure repeatable reads for Ti.
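The read and write rules above can be sketched as two small functions. This is an illustrative sketch only: the table layout (a dict holding R-TS, W-TS, and the current value per item) is an assumption made for the example.

```python
def ts_read(ts_table, ts_ti, x):
    """Timestamp-ordering read rule: abort if Ti is older than the last
    writer of X; otherwise read X and raise R-TS(X) to max(R-TS(X), TS(Ti)).
    Returns the value, or None when Ti must be aborted and restarted."""
    e = ts_table[x]
    if ts_ti < e['W']:              # TS(Ti) < W-TS(X): order violated
        return None
    e['R'] = max(e['R'], ts_ti)     # update R-TS(X)
    return e['val']

def ts_write(ts_table, ts_ti, x, value):
    """Timestamp-ordering write rule: abort if a younger transaction has
    already read or written X; otherwise write X and set W-TS(X) = TS(Ti).
    Returns False when Ti must be restarted."""
    e = ts_table[x]
    if ts_ti < e['R'] or ts_ti < e['W']:
        return False
    e['W'] = ts_ti
    e['val'] = value
    return True

table = {'X': {'R': 0, 'W': 0, 'val': 10}}
print(ts_read(table, 5, 'X'))       # 10; R-TS(X) becomes 5
print(ts_write(table, 3, 'X', 99))  # False: TS(Ti)=3 < R-TS(X)=5
```

The local-copy bookkeeping for repeatable reads is omitted here to keep the sketch focused on the timestamp comparisons themselves.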
• Hard Disk Drive (HDD): A traditional storage device that uses spinning magnetic disks to
store and access data.
• Solid-State Drive (SSD): A storage device that uses flash memory and has no moving
parts, resulting in faster data access and greater reliability than HDDs.
• USB Flash Drive: A small, portable storage device that also uses flash memory and is
typically connected to a computer via a USB port.
• Optical Discs (CD, DVD, Blu-ray): A storage medium that uses laser technology to read
and write data on plastic discs, often used for multimedia files or software distribution.
• Cloud Storage: A storage service that allows users to save and access data on remote
servers via the internet, enabling easy sharing and accessibility from multiple devices.
Secondary Storage Devices
• Hard disk drives are the most common secondary storage devices in
present computer systems. These are called magnetic disks because
they use the concept of magnetization to store information. Hard
disks consist of metal platters coated with magnetizable material. These
platters are stacked on a spindle. A read/write head moves in
between the disks and is used to magnetize or de-magnetize the spot
under it. A magnetized spot can be recognized as 0 (zero) or 1 (one).
• There are two main types of buffering: input buffering and output buffering. Input
buffering refers to the temporary storage of data that is being received from an
external source, such as a file on a hard drive or data being transmitted over a
network. Output buffering refers to the temporary storage of data that is being sent
to an external destination, such as a printer or a file on a hard drive.
Reduced risk of data loss − Buffering can help to prevent data loss by temporarily
storing data in a buffer before it is written to a permanent storage location.
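Output buffering as described above can be observed directly with Python's standard `io` module: writes accumulate in memory and reach the destination only when the buffer fills or is flushed. The `BytesIO` object below stands in for a real destination such as a file.

```python
import io

# Output buffering: writes land in a memory buffer first.
raw = io.BytesIO()                           # stands in for a file/printer
buffered = io.BufferedWriter(raw, buffer_size=64)

buffered.write(b"hello")                     # small write stays buffered
assert raw.getvalue() == b""                 # nothing written through yet
buffered.flush()                             # push buffered data out
assert raw.getvalue() == b"hello"

# Input buffering: the reader fetches data from the source in chunks,
# then serves subsequent small reads from its internal buffer.
reader = io.BufferedReader(io.BytesIO(b"abcdef"), buffer_size=4)
first = reader.read(2)
assert first == b"ab"
```

The benefit is fewer interactions with the slow device: many small reads or writes are coalesced into a few larger transfers.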
Fixed-size block buffering − In this approach, the buffer is divided into a fixed number of blocks, and
each block is given a fixed size. When data is written to the buffer, it is divided into blocks of the
specified size and written to the appropriate block in the buffer. This approach is simple to
implement, but it can be inefficient if the block size does not match the size of the data being
written.
Dynamic block buffering − In this approach, the size of the blocks in the buffer is not fixed. Instead,
the buffer is divided into a series of linked blocks, and the size of each block is determined by the
amount of data that it contains. This approach is more flexible than fixed-size block buffering, but it
can be more complex to implement.
Circular block buffering − In this approach, the buffer is treated as a circular buffer, with data being
written to the buffer and then overwriting the oldest data as the buffer becomes full. This approach
is simple to implement and can be efficient, but it can lead to data loss if the data is not processed
quickly enough.
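The circular block buffering scheme above can be sketched as a small class: a fixed array of slots where the write position wraps around, so new data overwrites the oldest once the buffer is full. The class and method names are invented for the example.

```python
class CircularBuffer:
    """Sketch of circular block buffering: once the fixed set of slots is
    full, each new block overwrites the oldest one (hence data loss if the
    consumer does not process blocks quickly enough, as noted above)."""

    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.capacity = capacity
        self.write_pos = 0          # next slot to write, wraps around
        self.count = 0              # how many slots currently hold data

    def put(self, block):
        self.slots[self.write_pos] = block
        self.write_pos = (self.write_pos + 1) % self.capacity
        self.count = min(self.count + 1, self.capacity)

    def contents_oldest_first(self):
        if self.count < self.capacity:
            return self.slots[:self.count]
        # When full, the oldest block sits at the current write position.
        return self.slots[self.write_pos:] + self.slots[:self.write_pos]

buf = CircularBuffer(3)
for block in ["b1", "b2", "b3", "b4"]:
    buf.put(block)                  # "b4" overwrites the oldest, "b1"
print(buf.contents_oldest_first())  # ['b2', 'b3', 'b4']
```

This is why circular buffering is simple and efficient but unsafe under a slow consumer: the overwrite happens silently.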